Flux:: sound and picture development was founded in the 1990s, during the early days of digital audio workstations, collaborating with Merging Technologies on the creation of Merging’s now well-renowned products.
Close Talker, an indie rock band from Saskatchewan, Canada, virtually places its audience at the center of the stage with its silent binaural show, Immersive.
The pre-production, live performance and recording of the show were created with the Avid S6L, Pro Tools and the Spat Revolution Immersive Audio Engine, delivering binaural audio to the audience over headphones.
An Avid S6L console with two remote stage I/O units handles the inputs and channel processing, while each input channel is sent post-fader via direct out to two computers (main and backup) running the Spat Revolution Immersive Audio Engine.
With Spat Revolution, the mixing engineer creates the spatial scene (the 3D location of the sources, reverberation and effects) and generates a binaural spatial audio mix that is returned to the mixing desk and distributed to the audience’s headphones.
The S6L integrates with Spat Revolution through a plugin that exposes all the source parameters (for each individual source/object going into the software) on the console’s encoders and makes them available in snapshots (show automation).
Kellan Thackeray, FOH for Close Talker, had a vision of where he wanted to bring this, and had been experimenting with some technologies to get there.
“Think silent disco but the audience headphone mix is processed with Spat Revolution to give the mix that extra push – this is a seated hardwired show to minimize latency and noise. The artistic intent of making these shows a communal experience, and knowing I had the technology and a way to do this, got me hooked right away on the project.”
The audience’s headphone mix is created live, in real time, with cutting-edge binaural mixing technology. The sound fully immerses the audience, with instrumentation and voices orbiting and swirling around them on all axes, increasing the impact and depth of the band’s already acclaimed live show.
“Like anything new, the key question was how it would integrate into the creation-to-delivery workflow. The band was given the possibility to start the creation in their DAW environment, experiment, and prepare some rough spatial mix concepts, positioning and setting the properties of sources/objects in the space. This was done while monitoring in various formats or with different binaural HRTFs.”
When the time came for the actual pre-production and band rehearsal, the source/object parameters were captured in the snapshots of the S6L system as a starting point, and the show snapshots were then programmed. A Pro Tools system records every show and provides the ability to do a virtual production (virtual soundcheck) for refining the mixes.
Kim Planert, based in Los Angeles, USA, and originally from Germany, is an award-winning composer and producer writing and scoring for TV and film, with over 230 episodes of prime-time television, feature films and indie features to his credit.
“Currently I’m scoring a documentary about the Planetary Society’s Lightsail Mission with the “Science Guy” Bill Nye. The LightSail Satellite is a crowdfunded solar sail project that aims to become the first spacecraft in Earth orbit propelled solely by sunlight.”
“Another project I’m working on right now is the writing of new neo-classic music for string ensemble, which will first be heard live in intimate concerts in LA.”
“I was involved as an associate producer, and I wrote the score for the passion project Skid Row Marathon, an award-winning feature-length documentary that follows the inspiring story of a Los Angeles judge who starts a running club on Skid Row, giving members a second chance at life as they battle addiction.”
“Pure Analyzer is always integral to my process from the beginning of a composition to the master, and as I always work in 5.1, the Nebula color coding does a great job telling me exactly what’s going on in the space, there’s no other tool giving that much insight in one single window.”
Prior to Los Angeles, Kim collaborated with several legendary music producers and musicians in the UK including Craig Armstrong, Capercaillie, Secret Garden, John McLaughlin (FIVE), BBC Scottish Symphony Orchestra and the Scottish Ensemble.
As a sound engineer and producer in the UK, he recorded and mixed more than 40 albums and soundtracks for several films, including American Cousins and The Bone Collector, and the TV show Crowdie + Cream, which won BAFTA awards for “Best Soundtrack” and “Best Theme”.
Recently, Kim released his first solo album, SKYLIGHT, inspired by his love of wingsuiting and skydiving.
“An ambient, spiritual hybrid sound mixing orchestra with synthesizer: melodic string lines reminiscent of Elgar, with the minimalism and emotional depth of Craig Armstrong and Michael Nyman, combined with the cutting-edge synth programming of Daft Punk and epic synth pads à la Vangelis, bridging the gap between song and film music.”
Skylight – Notes from a Logbook: smarturl.it/d0hwty
Composer, sound engineer, and owner of St. Mary’s Space, a recording studio and arts space in the magical West Highlands of Scotland.
My work happens at the intersection between experimentalism, improvisation and left-field rock and pop, which has allowed me to collaborate with many amazing artists, musicians, filmmakers, dancers and performers, including Akram Khan, Gavin Bryars, Kathryn Joseph and Misterlee.
For the last few years I have been working almost exclusively in Ambisonics: recording, synthesizing and, above all, authoring into a spherical soundfield that is ultimately decoded for the required speaker layout.
There are very few tools for this kind of pioneering work that support the 16-channel-wide tracks required by third-order Ambisonics. Evo Channel is one of the best, allowing easy manipulation of several key components of a sound source (EQ and dynamics) while maintaining the vital phase relationship between channels, all in one simple-to-use interface.
I use Evo Channel a lot and am consistently impressed with its sound quality and workflow. It has been invaluable on my current project, ‘Notes from a Tremulous Hand’, a sonic novel full of grotesque characters and mysterious sonic anomalies, ‘pataphysics’ and the long-neglected theories of Marconi. Listen out.
Notes from a Tremulous Hand will be available soon across several formats, including an immersive binaural version (with or without head-tracking).
Iconic audio designer Andrew Roth of San Francisco-based Roth Audio Design has been creating immersive sound experiences for over two decades. His work runs the audio gamut. Working hand in hand with museums and theme parks, he designs, builds and installs immersive systems that create life-changing audio experiences, and he also creates, records and mixes the audio content for these attractions. Beyond the museum and theme park work, he is often involved in audio for film, television, radio, games and web media, where he frequently works in the rapidly growing field of immersive 3D audio experiences, including interactive AR (augmented reality) and VR (virtual reality) as well as object-oriented mixing. He provides sound design and mix services for all audio configurations, including stereo, binaural, Ambisonic, 5.1, 7.1 and 7.1.4. With such a wide variety of audio work, it’s no surprise that SPAT Revolution is an integral part of his workflow.
SPAT Revolution, currently the most comprehensive real-time 3D audio mixing engine available, gives its users the ability to control the position of, and add room effects to, an audio source in a three-dimensional sonic atmosphere. The results can be sent to a loudspeaker configuration or a pair of ordinary stereo headphones, or exported as audio stems for use within other audio engines such as Pro Tools or Cubase.
In discussing his current work, Mr. Roth explains, “It’s all over the map. There’s a lot of post-production. I’m in a studio where, you know, I do a lot of film and television. I used to do more games and things like that but I’ve gotten a lot more into installation work like theme parks, exhibits, museums and a lot into the binaural world as well, which I’ve been interested in for at least twenty years.” Roth actually began using an early version of SPAT back in the ’90s, when even a state-of-the-art computer had to struggle to keep up with the software.
After pouring his heart into an audio design project making sure that every sound is exactly right and the mix has been meticulously tweaked, it only makes sense that he sees the project through the installation ensuring that there is no compromise made along the way even if that means disguising speakers in plants and/or burying cables in the installation. “While I like the comfort of the studio and the family life it affords me, I’ve found myself enjoying getting down and dirty. Some of the places I’ve been doing exhibits involve plants and dirt, you know, in the ground kind of cable running so that’s a nice change from the clean studio.” He continues, “With installations, one thing I found is when I would do audio for say a small theme park or something, I didn’t have any say in what I was designing for. They would say well this is the equipment, this is where the speakers are, these are the limitations. I’m still the sound designer at heart, but I like being able to bring the hardware installation as well because I can put in exactly what I know I want from beginning to end.”
Mr. Roth’s lifelong love of audio began as a child. He explains, “I’ve been interested in audio and sound in some form since before I was ten years old. There was a reel-to-reel machine in our house, in the basement, that was unused with a bunch of tape that was unused, stuff my father had bought before he had a kid and it had just been abandoned in the basement. So when I was nine or ten years old I started playing with that. I guess the immersive part just led me into the natural sounds world. I would stick friends and family members in a self-made sound chamber out of a tent and blankets with multiple speakers and I would play things like natural environments on records and cassettes and pump in music and just try to create environments in the dark.”
College paved the way for Mr. Roth’s passion for audio to expand, “It’s funny because I went to college at Oberlin, which is a conservatory and a regular liberal arts college in the Midwest so I could be near music and I studied eco-musicology and history and stuff like that. Sound was always just something I was sort of doing. I didn’t study it specifically.” A college internship at Earwax Productions (a sound design studio) eventually led to full time employment. Mr. Roth remained on board with Earwax for about five years until the dot com bust in 2001 or so and that was the birthing point of Roth Audio Design.
SPAT has become a necessity in Mr. Roth’s workflow, as it allows him to audition work in progress before installation has even begun. Wearing headphones for extended lengths of time can become fatiguing; SPAT resolves this by allowing Mr. Roth to recreate the binaural sound space over the loudspeakers in his 7.1.4 room, so he can accurately mix for a binaural experience with or without headphones.
Mr. Roth is currently working on an installation in Golden Gate Park for the holidays that incorporates over seventy independent channels of audio across six rooms. SPAT allows Mr. Roth to virtually build each of the six rooms to utilize during the design process. “I can set up each of these rooms and can demo the design as it comes along. I can bring in the director of the museum and they can come to the studio and get an idea about the sound before it’s finished.” He goes on to say, “In those places where I recreate rainforests there are speakers in the ceiling and speakers in the floor: frogs in the floor, birds in the ceiling, and things around the horizon level. So I can demo that in the studio.”
Mr. Roth describes his utilization of SPAT Revolution, “What I’ve been hired to do a lot of is to do these immersive, very detailed and accurate historical recreations. I call them ‘audio-archaeology,’ where I try to take all the elements and put them together into a 3-D environment and I’ve done that with other binaural programs over the years. One thing that’s really cool about working with SPAT, and I’m about to do this for a museum in Sydney, Australia, is that it’s not bound to binaural. If I create a really detailed 3D recreation of say, San Francisco in 1906 with cable cars and all that, in the past it was a real pain to try and take that session then say remix it for a multichannel installation. Whereas now I kind of have this 3D environment and it’s not bound to its end mix. I can repurpose it much more easily.”
The Enchanted Domain, from a San Francisco Museum of Modern Art show on the painter Magritte (this audio clip is encoded in binaural and requires headphones): https://www.flux.audio/wp-content/uploads/2019/10/9_The_Enchanted_Domain.mp3
SPAT Revolution’s flexible and powerful feature set helps keep Andrew Roth and Roth Audio Design at the top of their field. Keep an eye on what they’ll do next at http://rothaudiodesign.com.
#Empowercreativity #SpatRevolution #Immersive
From today, all our software supports two simultaneous iLok authorizations, including all currently existing licenses.
To access the second authorization for an already existing license, simply open the iLok License Manager and click on the license. Should the second authorization not be available, you may need to deactivate the license and then activate it again, after which the second authorization will become available.
Together with the amazing audio team of Patrice Langlois, Nicolas Michel and Emmanuel Puginier, sound designer Jean-Michel Caron kicks off the world premiere of Cirque’s new production BAZZAR in Mumbai, India, running there until December 9th, and then in Delhi between December 25th and January 6th.
BAZZAR is an eclectic lab of infinite creativity where a joyful troupe of acrobats, dancers and musicians craft a mind-blowing spectacle. Led by their maestro, they band together to invent a whimsical one-of-a-kind universe. In a place where the unexpected is expected, the colorful group reimagines, rebuilds and reinvents vibrant scenes in an artistic, acrobatic game of order and disorder.
The Avid VENUE S6L and the Flux:: Spat Revolution Immersive Audio Engine are proudly part of this fabulous tour. The BAZZAR production comprises a dual Avid S6L shared-I/O system with a 24-fader control surface, an Avid E6L-144 engine, and remote Stage 16 I/O boxes distributed in strategic locations. Spat Revolution handles multiple virtual rooms for various artistic intents.
“The use of Spat Revolution for BAZZAR allowed us to integrate immersive audio content in the show, with travelling audio sources, and at a fraction of what we normally would need to invest to achieve something like this,” says Jean-Michel Caron.
A single virtual room using the VBAP panning method handles 5.1 surround beds delivered to three audience zones immersed in the show. A second “room” uses a selection of loudspeakers (a selected arrangement) for the all-around surround experience, handling virtual source movements and artificial reverberation.
A third “room”, using all available loudspeakers in an arbitrary configuration, is also deployed with the KNN panning method, providing the ability to position virtual sources anywhere in the space while retaining control over how many virtual speakers each virtual source spreads to.
“Spat Revolution does require experimentation time with the various possible source and room parameters and its flexible configuration, but the results we are able to achieve make it entirely worthwhile,” says JMC.
Primary and secondary Mac mini computers run the QLab cue system and the Spat Revolution software, each machine connected via an RME MADI interface. While QLab plays back audio cues to the console and sends OSC network cues to Spat, the console sends audio sources to Spat Revolution for real-time immersive audio processing; the rendered outputs of the speaker arrangements are then returned to the console for distribution to the various FOH matrix outputs and loudspeakers.
“The addition of Spat Revolution in the audio chain was totally transparent, and I am definitely ready to push this integration further on my future projects,” says JMC.
While AVB audio is used for the stage I/O distribution and for integrating Pro Tools in 128-channel mode for virtual soundcheck and recording, MADI interfaces are used on the consoles to patch into the expanded network.
The expanded distribution and routing is delivered via Auvitran AVB Toolbox frames fitted with AES3, Dante and MADI cards, with several AVB Toolboxes spread out across the big top.
On November 27th and 28th, together with Flux::’s French partner Freevox, the Flux:: Spat Revolution team makes a stop in Paris for a series of presentations.
We welcome you all to join us and participate in our presentations of the Spat Revolution Immersive Audio Engine. Daily presentations on the 27th and 28th at 11.00 and 15.00.
Furthermore, in collaboration with SAE Paris, AES France and Merging Technologies, Flux:: is participating in a discussion about the latest audio tools and workflows for live shows – Audio for live shows: November 27th at 19.00.
Audio for live shows – New needs, new tools, new applications
The live show is constantly evolving, with new challenges for the stage setup made possible by the contribution of sound engineers and new technology.
19.00 – Opening
19.30 – Presentation of Ovation and the workflow of theme parks, by Maurice Engler – Merging Technologies
20.15 – Presentation of Spat Revolution in live event environments, content creation and its usage in large-scale live productions, by Hugo Larin – Flux:: Audio
21.00 – Ending
Audio for live shows – Registration
The 2018 edition of JTSE will be held at Pharmacy Porte de la Chapelle, Paris (Subway “Porte de la Chapelle”), with over 140 national and international exhibitors from the main areas of stage technology presenting their unique know-how in machinery, lighting, sound, stagecraft, rigging, fabrics, seating, accessories, safety gear and training.
Did you ever wonder about the processing behind the Spat Revolution Immersive Audio Engine? How latency is handled, or how Spat handles speaker alignment to achieve smooth spatialization? Or are you interested in learning more about how live theatrical show setups can be done using Spat Revolution?
A blog read by Hugo Larin, Business Development for Spat Revolution
POWERING THE SPAT ENGINE, THE AUDIO INTERFACE AND THE SYSTEM LATENCY
Spat Revolution is a real-time, stand-alone software application for processing immersive audio. As for the processing behind it: while some immersive audio solutions ship with dedicated hardware of fixed capacity, Spat doesn’t run on any specific hardware; the power of today’s modern computers has proven what can be achieved. Instead, Windows and macOS are supported, with hardware recommendations provided (a solid graphics card, a sufficient amount of RAM and multi-core processing) in order to achieve good results.
The ability to run on generic hardware also means that a vast pool of audio interfaces is available for the system setup. You might then ask what the actual latency of the system is. In Spat Revolution, latency is defined by the audio hardware components (a local audio interface, or network AVB, Dante or AES67 virtual audio entities) and the buffer size. The software shows the total latency in a status window, combining the OS-reported latency of the audio component with the Spat buffer setting. The actual Spat setup and configuration (number of sources, rooms and so on) has no impact on latency; it is predictable and fixed.
Figure 1: Hardware IO and Devices in Spat Revolution
When latency is critical, there is the option of a very low buffer size setting (again, taking the hardware qualification into account). Spat has been tested in in-ear monitor scenarios, a good example of how low latency can be achieved, since latency is critical when delivering binaural content to a vocalist or musician.
Figure 2: Latency report in Spat Software
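Since the total latency is simply the sum of the interface’s reported latency and the buffer duration, it can be estimated with a few lines of arithmetic. A minimal sketch (the interface latency figure below is a hypothetical example; the real value is whatever the OS reports for your hardware):

```python
def buffer_latency_ms(buffer_size_samples: int, sample_rate_hz: int) -> float:
    """Latency contributed by one processing buffer, in milliseconds."""
    return buffer_size_samples / sample_rate_hz * 1000.0

def total_latency_ms(interface_latency_ms: float,
                     buffer_size_samples: int,
                     sample_rate_hz: int) -> float:
    """OS-reported interface latency plus the Spat buffer setting."""
    return interface_latency_ms + buffer_latency_ms(buffer_size_samples,
                                                    sample_rate_hz)

# A 64-sample buffer at 48 kHz adds ~1.33 ms on top of the interface latency
# (here assumed to be 2 ms for illustration).
print(round(buffer_latency_ms(64, 48000), 2))      # 1.33
print(round(total_latency_ms(2.0, 64, 48000), 2))  # 3.33
```

Note that nothing in this arithmetic depends on the number of sources or rooms, which is why the reported latency stays fixed as the scene grows.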
Multiple Rooms, Panning Techniques, Dealing with Speaker Setup for Live
SPEAKER ARRANGEMENT ALIGNMENT PROVIDED IN EACH SPAT ROOM
Although Spat is not intended to replace loudspeaker management for system tuning, it does offer the ability to compute a gain and delay for each speaker output, compensating for the placement compromises that physical limitations may impose. The auto-compute performs a delay and gain calibration to the central reference point, which matters when aiming for extremely smooth transitions while sources are constantly moving through the soundscape. Note that the panning computation uses the compensated speakers (the virtual speakers), resulting in a very smooth feel when moving sources.
Figure 3: The speaker configuration window and the ability to Compute gain and delay. The physical speaker locations are shown in grey and the virtual speakers in orange.
HOW USERS ARE DEPLOYING SPAT WITHIN LIVE AND THE ‘’VIRTUAL ROOM’’ SPEAKER ARRANGEMENTS
Spat Revolution is used more and more in live theatrical shows, and many of these systems share common workflows. QLab is often the playback and show-control system, while the Avid VENUE S6L is often the live console of choice. In these scenarios, the Spat Send plugin gives the mixing engineer access to the source parameters on the control surface and automates source parameter changes via the snapshot system; the same can be achieved with generic OSC commands on other mixing systems that provide that ability, such as DiGiCo SD consoles.
That said, a common scenario is using QLab network cues to change any parameter of sources, rooms and reverbs in Spat, including making 2D moves of sources in the soundscape. QLab is very common for show control, where its network and MIDI cues drive a variety of systems simultaneously. An interesting aspect of using QLab and Spat in the off-site creation process is that you can start the show creation there, then move to separate computers and add the audio mixing console to the equation.
The subject of remote control in itself will deserve a new blog!
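For a flavour of what an OSC network cue actually carries on the wire, here is a minimal sketch that builds a raw OSC 1.0 message using only the Python standard library. The address pattern `/source/1/azimuth` is hypothetical; the actual Spat Revolution OSC namespace should be taken from its documentation:

```python
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    s += b"\x00"
    return s + b"\x00" * (-len(s) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Build a minimal OSC message carrying float32 arguments."""
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(b"," + b"f" * len(floats))  # type tag string
    for f in floats:
        packet += struct.pack(">f", f)  # big-endian float32
    return packet

# Hypothetical cue: set source 1's azimuth to 90 degrees.
msg = osc_message("/source/1/azimuth", 90.0)
print(len(msg))  # 28 bytes, always a multiple of 4
```

Such a packet would typically be sent over UDP (e.g. `socket.sendto(msg, (host, port))`), which is exactly what a QLab network cue does on your behalf.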
Common integrations range from 32 to 64 or more post-fader channel audio feeds sent from the console via MADI or network audio to Spat, feeding the sources. Speaker outputs are then either returned to the desk to feed the various matrices of the mixing system, or picked up directly over MADI or network audio and sent to the loudspeaker management system.
In a recent big top production, Spat handled a circular stage positioned at the center of the four big top masts. The audience sat on three of the faces, while the fourth face was the backstage area. Each of the three audience spaces was covered by left and right speaker cluster positions on the masts, with a third speaker position in the center at a higher elevation. A single backstage L and R speaker cluster position was used as well (L and R when facing the main front face).
All around the big top were six rear “surround” speakers, positioned so that each pair of speakers covered one audience space as rear speakers.
The multi-room concept was deployed for this. A first virtual room consisting of five outputs (L, C, R and two rears), similar to a surround layout, was used to create a multichannel bed delivered to the three audience spaces. This allowed an immersive base mix to be delivered to each space using the VBAP panning technique.
While the rears were used for the base surround bed, a separate virtual room using the six surround speakers was deployed for rear effects across the complete audience, for example when spinning a sound around the big top. This virtual room also included the backstage L and R speakers, and Spat’s gain and delay alignment brought all speakers to their virtual positions. The point of adding the two backstage speakers was to give a feeling of sound coming from, or spinning, behind the stage; again, the VBAP panning technique was used for this virtual room.
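For readers curious about what VBAP computes under the hood, here is a simplified two-dimensional sketch of pairwise vector base amplitude panning. Spat Revolution’s own implementation is more general (3D speaker triplets, arbitrary arrangements); this only illustrates the core idea of solving for gains on the speaker pair flanking the source:

```python
import math

def vbap_2d(source_az_deg: float, spk1_az_deg: float, spk2_az_deg: float):
    """Pairwise 2D VBAP: solve g1*l1 + g2*l2 = p for the speaker gains,
    then normalize so that g1^2 + g2^2 = 1 (constant power)."""
    def unit(az_deg):
        a = math.radians(az_deg)
        return (math.cos(a), math.sin(a))
    p = unit(source_az_deg)            # source direction
    l1, l2 = unit(spk1_az_deg), unit(spk2_az_deg)  # speaker directions
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# A source half-way between speakers at +30 deg and -30 deg gets equal gains.
g1, g2 = vbap_2d(0.0, 30.0, -30.0)
print(round(g1, 3), round(g2, 3))  # 0.707 0.707
```

This sweet-spot dependence is exactly why the gain/delay alignment described above matters: VBAP’s smoothness assumes all speakers are equidistant from the reference point.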
Another Spat room used all single and cluster loudspeakers, this time with the KNN panning technique, allowing a source to be positioned anywhere in the audience and really giving the feeling that the source was emanating from wherever the designer wanted, or wherever the action was.
To complement this, some sources could be patched into a binaural room, producing a feed that could be sent to recorders and provide a spatial experience on headphones.
Figure 4: All speakers with KNN panning
Figure 5: 5.1 soundscape delivered to multiple zones
Figure 6: Various speaker arrangements and panning types used simultaneously in one and the same project
Another theatrical project involved sending the 12 artist microphone post-fader feeds to a room for real-time tracking of these artists (actors on stage). Spat’s ability to integrate with the tracking system allowed tracking devices (known as beacons) to be attached to the source actors. The DBAP panning technique was chosen here, as no assumption could be made as to where the audience would be sitting and no central listening point existed. This provided good signal distribution while still offering some localisation as the actors were tracked across this very wide stage. Only a portion of the speaker rig was used for this: the seven front clusters and their delays. Another room, for a different effect and using a different set of loudspeakers, was used too; it received sources from a fixed number of console aux buses (send buses to the Spat immersive audio software).
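As a rough illustration of why DBAP suits a room with no central listening point, here is a simplified sketch of distance-based amplitude panning: every speaker receives the source, with level falling off with distance. The rolloff and spatial-blur values are illustrative defaults, not Spat Revolution’s internals:

```python
import math

def dbap_gains(source_xy, speakers_xy, rolloff_db=6.0, spatial_blur=0.1):
    """Simplified DBAP: per-speaker amplitude falls off with distance,
    normalized for constant total power. spatial_blur avoids a singularity
    when the source sits exactly on a speaker."""
    a = rolloff_db / (20.0 * math.log10(2.0))  # rolloff exponent
    dists = [math.hypot(sx - source_xy[0], sy - source_xy[1])
             for sx, sy in speakers_xy]
    raw = [1.0 / (d * d + spatial_blur ** 2) ** (a / 2.0) for d in dists]
    k = 1.0 / math.sqrt(sum(g * g for g in raw))  # power normalization
    return [k * g for g in raw]

# Two equidistant speakers get equal gains; the power always sums to 1.
gains = dbap_gains((0.0, 0.0), [(-2.0, 3.0), (2.0, 3.0)])
print([round(g, 3) for g in gains])  # [0.707, 0.707]
```

Because no speaker is ever fully silent, listeners anywhere in the room get a usable mix, at the cost of less precise localisation than sweet-spot methods like VBAP.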
Interested in this conversation? Follow our different blog articles!
Stay tuned – Subscribe to our newsletter
A blog read by Hugo Larin, Business Development for Spat Revolution
When you find yourself working on a custom multi-channel speaker arrangement, the Speaker Configuration editor is where a model of the sound diffusion system can be defined and stored in the list of speaker configuration presets. As the image below shows, the current configuration (here a predefined one) can be duplicated (a copy is generated for editing), or a new configuration can be created.
Figure 1: The speaker configuration window showing a pre-defined 13.1 Auro 3D speaker arrangement
Managing speaker configurations includes the ability to delete a configuration, rename it, export configuration(s) to a file, or import configuration(s) from a file. Note that Spat Revolution’s predefined speaker arrangements can’t be deleted or renamed, but duplicating one (making a copy) lets you edit the arrangement, starting from an existing configuration. The Normalize feature quickly scales down the speaker arrangement so that the furthest speaker sits at a distance of only 2 m. This reduces the virtual room size, making the parameter ranges easier to work with on very large speaker setups.
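The Normalize operation amounts to a uniform scaling of all speaker positions. A minimal sketch of what such an operation might compute (an illustration, not Flux::’s actual implementation):

```python
def normalize_arrangement(positions_xyz, target_max_dist=2.0):
    """Scale all speaker positions so the furthest speaker ends up
    target_max_dist metres from the origin (the listening reference).
    Relative geometry, and thus panning behavior, is preserved."""
    max_dist = max((x * x + y * y + z * z) ** 0.5 for x, y, z in positions_xyz)
    s = target_max_dist / max_dist
    return [(x * s, y * s, z * s) for x, y, z in positions_xyz]

# A speaker 10 m away is brought to 2 m; the closer one scales along.
scaled = normalize_arrangement([(10.0, 0.0, 0.0), (0.0, 5.0, 0.0)])
print(scaled)  # [(2.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```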
Figure 2: Editing a speaker configuration showing a copy of a 13.1 Auro 3D speaker arrangement
When editing a speaker configuration, you can Add, Delete, Move Up or Move Down speakers in the list. Note that the total number of channels in the arrangement is shown above the list. Does your speaker system contain a low-frequency (LFE) channel that you want to feed like an aux send? Simply adding a channel (or channels) named LFE does the trick. Such a channel is not fed from the virtual room panning, but from the LFE Send available on each source in rooms containing an LFE, and naturally without the room reverberation.
Loudspeaker positions can be entered as X, Y, Z in metres, or as azimuth (degrees), elevation (degrees) and distance (metres). Delay and gain can be used to manually align the physical speaker location to a virtual one, essentially creating a virtual speaker.
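The two coordinate entry modes are related by the usual spherical-to-Cartesian conversion. A small sketch, assuming a common convention (azimuth 0° at the front, positive to the right; elevation 90° straight up); Spat Revolution’s exact axis convention should be checked against its manual:

```python
import math

def spherical_to_xyz(azimuth_deg, elevation_deg, distance_m):
    """Convert azimuth/elevation/distance to X, Y, Z in metres.
    Assumed convention: azimuth 0 deg = front (+Y), positive to the
    right (+X); elevation 90 deg = straight up (+Z)."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.sin(az)
    y = distance_m * math.cos(el) * math.cos(az)
    z = distance_m * math.sin(el)
    return x, y, z

# A speaker 30 deg to the right of front, at 2 m, at ear height:
x, y, z = spherical_to_xyz(30.0, 0.0, 2.0)
print(round(x, 3), round(y, 3), round(z, 3))  # 1.0 1.732 0.0
```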
Wondering what Compute and Reset are all about? Spat Revolution can use the measurements you enter for a speaker arrangement to calculate (and apply) the delays and gains needed to render spatial audio on a physical configuration whose speakers are not in ideal locations. This technique is recommended when using sweet-spot-centric panning methods such as VBAP and VBIP, which provide very smooth panning on arrangements where all speakers are equidistant from the optimum listening position. That is what Compute does for you with such panning methods: you don’t need to move the speakers, just compute the delays and gains for an optimal panning experience. Note that it is preferable to do this in Spat Revolution rather than in external processing, as Spat will use the computed speaker locations (the virtual speakers) for spatialization afterwards. This is an advanced speaker management technique made easily accessible by a single press of the Compute Speaker Alignment button.
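The arithmetic behind such an alignment is straightforward: delay the closer speakers so all arrivals coincide with the furthest one, and attenuate them so all arrive at equal level. A sketch using an inverse-distance (6 dB per doubling) level model; the exact formulas Spat Revolution applies may differ:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 deg C

def align_to_reference(distances_m):
    """Per-speaker delay (ms) and gain (dB) that time- and level-align
    all speakers to the furthest one, as seen from the reference point."""
    d_max = max(distances_m)
    delays_ms = [(d_max - d) / SPEED_OF_SOUND * 1000.0 for d in distances_m]
    gains_db = [20.0 * math.log10(d / d_max) for d in distances_m]
    return delays_ms, gains_db

# A speaker at 3.43 m vs. a reference speaker at 6.86 m:
# the close one is delayed ~10 ms and attenuated ~6 dB.
delays, gains = align_to_reference([3.43, 6.86])
print(round(delays[0], 1), round(gains[0], 1))  # 10.0 -6.0
```

Panning then operates on these time- and level-aligned virtual speakers, which is why doing the compensation inside Spat beats doing it downstream.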
In some cases, creating speaker setups in an editor is not the most efficient route, primarily when the information already exists as a list exported by acoustic design and simulation software, such as the tools used by loudspeaker companies; the speaker configuration work has then already been done once. Although an advanced technique to be used with precaution, a custom speaker arrangement file template is available from the Flux:: support team, letting you enter your speaker arrangement configuration along with the locations of the different speakers in it. This method allows speaker configurations to be entered as Cartesian (FromCartesian) raw data or polar (FromSpherical) data, or by creating speaker layers with specific angles. All this can be quite practical for larger, more complex setups.
Interested in this conversation?
Follow our news and blogs
Stay tuned – Subscribe to our newsletter
The Tonmeistertagung meet-up in Cologne, Germany, is a yearly congress and exhibition providing an extensive overview of the latest trends in the pro audio industry, gathering leading experts and professionals from the broadcast, recording, theatre, film and television industries to share their experiences.
This year, Flux:: is represented by live sound engineer and S.E.A product specialist Jonas Gehrmann, talking about the possibilities immersive audio currently offers, and how to create and build an immersive sound experience for the audience using the AVID VENUE | S6L console and the Spat Revolution Immersive Audio Engine.
The immersive audio demonstrations are taking place in the AVID & S.E.A demo room, 2-Demo-H, set up for full-on immersive audio integration demonstrations with the AVID S6L.
Daily 11.00, 13.00, 15.00 & 17.00 (Saturday only 11.00 & 13.00)
For the second time, the Student 3D Audio Competition for high-quality productions in Ambisonics 3D audio is facilitated by the Tonmeistertagung. During the show, the jury presents the nominated productions in an ideal 3D audio listening environment, announces the winners, and hands out the awards.
Student 3D Audio Competition 2018
Tonmeistertagung Meet-up in Cologne – November 14-17th
In 2007, Flux:: started releasing its own exquisite audio software product line tailored for demanding sound engineers, and has since focused on creating intuitive and technically innovative audio software tools used by sound engineers and producers in the professional audio, broadcast, post-production and mastering industries all over the world.