Flux:: sound and picture development was founded in the 1990s, during the early days of digital audio workstations, collaborating with Merging Technologies in the creation of Merging’s now well-renowned products.
Iconic audio designer Andrew Roth of San Francisco-based Roth Audio Design has been creating immersive sound experiences for over two decades. His work runs the audio gamut. Working hand in hand with museums and theme parks, he designs, builds and installs immersive systems that create a life-changing audio experience. He also creates, records and mixes the audio content for these attractions. Beyond the museum and theme park work, he is often involved in audio work for film, television, radio, games, and web media, where he often finds himself working in the rapidly growing field of immersive 3D audio experiences, including interactive AR (augmented reality) and VR (virtual reality), as well as object-oriented mixing. He provides sound design and mix services for all audio configurations including stereo, binaural, Ambisonic, 5.1, 7.1, and 7.1.4. With such a wide variety of audio work, it’s no surprise that SPAT Revolution is an integral part of his workflow.
SPAT Revolution, currently the most comprehensive real-time 3D audio mixing engine available, gives its users the ability to control the position of, and add room effects to, an audio source in a three-dimensional sonic atmosphere. The results can be sent to a loudspeaker configuration or a pair of ordinary stereo headphones, or exported as audio stems for use in other audio engines such as Pro Tools or Cubase.
In discussing his current work, Mr. Roth explains, “It’s all over the map. There’s a lot of post-production. I’m in a studio where, you know, I do a lot of film and television. I used to do more games and things like that but I’ve gotten a lot more into installation work like theme parks, exhibits, museums and a lot into the binaural world as well which I’ve been interested in at least twenty years.” Roth actually began using an early version of SPAT back in the ’90s, when a computer considered state of the art at the time struggled to keep up with the software application.
After pouring his heart into an audio design project making sure that every sound is exactly right and the mix has been meticulously tweaked, it only makes sense that he sees the project through the installation ensuring that there is no compromise made along the way even if that means disguising speakers in plants and/or burying cables in the installation. “While I like the comfort of the studio and the family life it affords me, I’ve found myself enjoying getting down and dirty. Some of the places I’ve been doing exhibits involve plants and dirt, you know, in the ground kind of cable running so that’s a nice change from the clean studio.” He continues, “With installations, one thing I found is when I would do audio for say a small theme park or something, I didn’t have any say in what I was designing for. They would say well this is the equipment, this is where the speakers are, these are the limitations. I’m still the sound designer at heart, but I like being able to bring the hardware installation as well because I can put in exactly what I know I want from beginning to end.”
Mr. Roth’s lifelong love of audio began as a child. He explains, “I’ve been interested in audio and sound in some form since before I was ten years old. There was a reel-to-reel machine in our house, in the basement, that was unused with a bunch of tape that was unused, stuff my father had bought before he had a kid and it had just been abandoned in the basement. So when I was nine or ten years old I started playing with that. I guess the immersive part just led me into the natural sounds world. I would stick friends and family members in a self-made sound chamber out of a tent and blankets with multiple speakers and I would play things like natural environments on records and cassettes and pump in music and just try to create environments in the dark.”
College paved the way for Mr. Roth’s passion for audio to expand, “It’s funny because I went to college at Oberlin, which is a conservatory and a regular liberal arts college in the Midwest so I could be near music and I studied eco-musicology and history and stuff like that. Sound was always just something I was sort of doing. I didn’t study it specifically.” A college internship at Earwax Productions (a sound design studio) eventually led to full time employment. Mr. Roth remained on board with Earwax for about five years until the dot com bust in 2001 or so and that was the birthing point of Roth Audio Design.
SPAT has become a necessity in Mr. Roth’s workflow, as it allows him to audition work in progress before installation has even begun. Wearing headphones for extended lengths of time can become fatiguing; SPAT resolves this by letting Mr. Roth recreate a binaural sound space in his 7.1.4 room, so he can accurately mix for a binaural experience with or without headphones.
Mr. Roth is currently working on an installation in Golden Gate Park for the holidays that incorporates over seventy independent channels of audio across six rooms. SPAT allows Mr. Roth to virtually build each of the six rooms to utilize during the design process. “I can set up each of these rooms and can demo the design as it comes along. I can bring in the director of the museum and they can come to the studio and get an idea about the sound before it’s finished.” He goes on to say, “In those places where I recreate rainforests there are speakers in the ceiling, there are speakers in the floor for frogs, birds in the ceiling and things around the horizon level. So I can demo that in the studio.”
Mr. Roth describes his utilization of SPAT Revolution, “What I’ve been hired to do a lot of is to do these immersive, very detailed and accurate historical recreations. I call them ‘audio-archaeology,’ where I try to take all the elements and put them together into a 3-D environment and I’ve done that with other binaural programs over the years. One thing that’s really cool about working with SPAT, and I’m about to do this for a museum in Sydney, Australia, is that it’s not bound to binaural. If I create a really detailed 3D recreation of say, San Francisco in 1906 with cable cars and all that, in the past it was a real pain to try and take that session then say remix it for a multichannel installation. Whereas now I kind of have this 3D environment and it’s not bound to its end mix. I can repurpose it much more easily.”
The Enchanted Domain, from a San Francisco Museum of Modern Art show on the painter Magritte (this audio clip is encoded in binaural and requires headphones): https://www.flux.audio/wp-content/uploads/2019/05/The_Enchanted_Domain-from-a-San-Francisco-Museum-of-Modern-Art-show-on-the-painter-Magritte..mp3
SPAT Revolution’s flexible and powerful feature set helps keep Andrew Roth and Roth Audio Design at the top of their field. Keep an eye on what they’ll do next at http://rothaudiodesign.com.
#Empowercreativity #SpatRevolution #Immersive
As of today, all our software supports two simultaneous iLok authorizations. This includes all existing licenses as well.
To access the second authorization for an existing license, simply open the iLok License Manager and click on the license. Should the second authorization not be available, you may need to deactivate the license and then activate it again, after which the second authorization will be available.
Together with the amazing audio-team; Patrice Langlois, Nicolas Michel and Emmanuel Puginier, Sound designer Jean-Michel Caron kicks off the world premiere of Cirque’s new production Bazzar in Mumbai, India, running there until December 9th, and in Delhi between December 25th and January 6th.
BAZZAR is an eclectic lab of infinite creativity where a joyful troupe of acrobats, dancers and musicians craft a mind-blowing spectacle. Led by their maestro, they band together to invent a whimsical one-of-a-kind universe. In a place where the unexpected is expected, the colorful group reimagines, rebuilds and reinvents vibrant scenes in an artistic, acrobatic game of order and disorder.
Avid VENUE S6L and the Flux:: Spat Revolution immersive audio engine are proudly part of this fabulous tour. The BAZZAR production comprises a dual Avid S6L Shared I/O system with a 24-fader control surface, an Avid E6L-144 engine, and remote Stage 16 I/O boxes distributed in strategic locations. Spat Revolution handles multiple virtual rooms for various artistic intents.
“The use of Spat Revolution for BAZZAR allowed us to integrate immersive audio content in the show, with travelling audio sources, and at a fraction of what we normally would need to invest to achieve something like this,” says Jean-Michel Caron.
A single virtual room using the VBAP panning method handles 5.1 surround beds delivered to three audience zones immersed in the show. A second “room” uses a selection of loudspeakers (a selected arrangement) for the all-around surround experience, for virtual source movements and artificial reverberation.
A third “room”, using all available loudspeakers in an arbitrary configuration, is deployed with the KNN panning method to provide the ability to position virtual sources anywhere in the space, while keeping the ability to define how many virtual speakers each virtual source will spread to.
“Spat Revolution does require experimentation time with the various possible source and room parameters and its flexible configuration, but the results we are able to achieve using this make it entirely worthwhile,” says JMC.
Primary and secondary Mac mini computers run the QLab cue system and the Spat Revolution software on each machine via RME MADI interfaces. While QLab plays back audio cues to the console and sends OSC network cues to Spat, the console sends audio sources to Spat Revolution for real-time immersive audio processing; the rendered output of the speaker arrangements is then returned to the console for distribution to the various FOH matrix outputs and loudspeakers.
“The addition of Spat Revolution in the audio chain was totally transparent, and I am definitely ready to push this integration further on my future projects”, says JMC
While AVB audio is used for the stage I/O distribution and the integration of Pro Tools in 128-channel mode for virtual soundcheck and recording, MADI interfaces are used on the consoles to patch to the expanded network.
The expanded distribution and routing is delivered via Auvitran AVB Toolbox frames fitted with AES3, Dante and MADI cards. Various AVB Toolboxes are spread throughout the big top.
On November 27th and 28th, together with Flux:: French partner Freevox, the Flux:: Spat Revolution team makes a stop in Paris for a series of presentations.
We welcome you all to join us and participate in our presentations of the Spat Revolution Immersive Audio Engine. Daily presentations on the 27th and 28th at 11.00 and 15.00.
Furthermore, in collaboration with SAE Paris, AES France and Merging Technologies, Flux:: is participating and presenting in a discussion about the latest audio tools and workflows for live shows – Audio for live shows: November 27th at 19.00.
Audio for live shows – New needs, new tools, new applications
The live show is constantly evolving with new challenges for the stage setup, made possible by the contribution of sound engineers and new technology.
19.00 – Opening
19.30 – Presentation of Ovation and the workflow of theme parks, by Maurice Engler – Merging Technologies
20.15 – Presentation of Spat Revolution in live event environments, content creation and its usage in large scale live productions, by Hugo Larin – Flux:: Audio
21.00 – Ending
Audio for live shows – Registration
The 2018 edition of JTSE will be held at Pharmacy Porte de la Chapelle, Paris (Subway “Porte de la Chapelle”), with over 140 national and international exhibitor companies from the main areas of stage technology presenting their unique know-how, including machinery, lighting, sound, stagecraft, rigging, fabrics, seating, accessories, safety gear and training.
Have you ever wondered about the processing behind the Spat Revolution Immersive Audio Engine? How latency is handled, or how Spat handles speaker alignment to achieve smooth spatialization? Or are you interested in knowing more about how live theatrical show setups can be done using Spat Revolution?
A blog post by Hugo Larin, Business Development for Spat Revolution
POWERING THE SPAT ENGINE, THE AUDIO INTERFACE AND THE SYSTEM LATENCY
Spat Revolution is a real-time stand-alone software application for immersive audio processing. As for the processing behind it: while some immersive audio solutions come with dedicated hardware of fixed capacity, Spat doesn’t run on any specific hardware. The power of today’s modern computers has proven what can be achieved; instead, Windows and macOS are supported, with hardware recommendations provided (a solid graphics card, a sufficient amount of RAM, and multi-core processing) in order to achieve good results.
The ability to run on generic hardware also means that a vast pool of audio interfaces is available for the system setup. You might then reasonably ask what the actual latency of the system is. In Spat Revolution, the latency is defined by the audio hardware components (a local audio interface, or network AVB, Dante or AES67 virtual audio entities) and the buffer size. The system shows the total latency in a status window in the software, combining the OS-reported latency of the audio component and the Spat buffer setting. The actual Spat setup and configuration (number of sources, rooms and such) has no impact on the latency. It’s predictable and fixed.
Figure 1: Hardware IO and Devices in Spat Revolution
When latency is critical, we do have the option of a very low buffer size setting (again, taking the hardware qualification into account). Spat has been tested in in-ear monitor scenarios, a good example of how low latency can be achieved, as we know latency will be critical when delivering binaural content to a vocalist or musician.
Figure 2: Latency report in Spat Software
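The total latency shown in that status window is simply the sum of its parts: the OS-reported interface latency plus the time one processing buffer represents. As a back-of-the-envelope illustration, the short Python sketch below uses made-up interface figures, not Spat Revolution internals:

```python
# Sketch: estimating total latency from an audio interface's reported
# latency and a processing buffer size. Figures are illustrative only.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Latency contributed by one processing buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

def total_latency_ms(interface_in_ms: float, interface_out_ms: float,
                     buffer_samples: int, sample_rate_hz: int) -> float:
    """OS-reported interface latency (in + out) plus the engine buffer."""
    return interface_in_ms + interface_out_ms + \
        buffer_latency_ms(buffer_samples, sample_rate_hz)

# A 64-sample buffer at 48 kHz adds ~1.33 ms, regardless of how many
# sources or rooms are configured -- the latency stays predictable and fixed.
print(round(buffer_latency_ms(64, 48000), 2))   # 1.33
```

Doubling the buffer doubles only that one term, which is why a low buffer setting (with qualified hardware) is the main lever for in-ear monitoring scenarios.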
Multiple Rooms, Panning Techniques, Dealing with Speaker Setup for Live
SPEAKER ARRANGEMENT ALIGNMENT PROVIDED IN EACH SPAT ROOM
Although Spat is not intended to replace the loudspeaker management for tuning, it does offer the ability to compute each speaker output’s gain and delay in order to compensate for location compromises that may have to be made because of physical limitations. The auto-compute performs a delay and gain calibration to the central reference point, which matters when you want extremely smooth transitions and your sources are moving in the soundscape all the time. Note that the panning computation uses the compensated speakers, the virtual speakers, resulting in a very smooth feeling when moving sources.
Figure 3: The Speaker Config window and the ability to Compute Gain and Delay.
See the physical locations of the speakers in grey and the virtual speakers in orange
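The general technique behind such a calibration can be sketched in a few lines: delay every speaker to match the one furthest from the reference point, and attenuate nearer speakers per the inverse-distance law. This is a generic illustration of distance compensation, not Spat’s exact algorithm:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def align_speakers(distances_m):
    """Per-speaker delay (ms) and gain (dB) compensating each speaker's
    distance to the central reference point, so all speakers appear
    equidistant. A sketch of the general technique only."""
    d_max = max(distances_m)
    # Nearer speakers arrive sooner: delay them to match the furthest one.
    delays_ms = [(d_max - d) / SPEED_OF_SOUND * 1000.0 for d in distances_m]
    # Inverse-distance law: a nearer speaker is louder, so attenuate it.
    gains_db = [20.0 * math.log10(d / d_max) for d in distances_m]
    return delays_ms, gains_db

# Speakers at 2 m, 3 m and 4 m from the reference point:
delays, gains = align_speakers([2.0, 3.0, 4.0])
# The furthest speaker gets 0 ms delay and 0 dB gain; the 2 m speaker is
# delayed by ~5.83 ms and attenuated by ~6 dB.
```

The payoff, as described above, is that the panner can then treat the compensated (virtual) speakers as equidistant, which is what keeps moving sources smooth.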
HOW USERS ARE DEPLOYING SPAT WITHIN LIVE AND THE ‘’VIRTUAL ROOM’’ SPEAKER ARRANGEMENTS
Spat Revolution is used more and more in live theatrical shows, and many of these systems share common workflows. QLab is often used as the playback and show control system, while the Avid VENUE S6L is often the live console of choice. The Spat Send plugin is used in these scenarios to give the mixing engineer access to the source parameters on the control surface, and to automate source parameter changes via the snapshot system. The same can be done using generic OSC commands with other mixing systems providing that ability, such as the DiGiCo SD consoles.
That said, a common scenario is using QLab network cues to change any parameters of sources, rooms and reverb in Spat. This includes making 2D moves of sources in the soundscape. QLab is very common for show control, where its network and MIDI cues get used to drive a variety of systems simultaneously. An interesting point of using QLab and Spat in the off-location creation process is that you can start the show creation, then move to separate computers and add an audio mixing console to the equation.
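For readers curious what such a network cue amounts to on the wire, an OSC message is just a null-padded address string, a type-tag string and big-endian arguments. The minimal encoder below uses only the Python standard library; the `/source/1/xyz` address pattern is hypothetical, so check the Spat Revolution OSC documentation for the actual namespace:

```python
import socket
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Minimal OSC 1.0 encoder for float arguments (standard library only)."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for value in floats:
        msg += struct.pack(">f", value)  # arguments are big-endian float32
    return msg

# Hypothetical address pattern, for illustration only:
packet = osc_message("/source/1/xyz", 1.0, -2.5, 0.0)
# To actually send it over UDP (address and port are assumptions):
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 8000))
```

A QLab network cue does essentially this each time it fires, which is why one cue list can drive Spat, lighting and video simultaneously.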
The subject of remote control deserves a blog post of its own!
A common integration uses 32, 64 or more console post-fader channel audio feeds sent via MADI or network audio to Spat, feeding the sources. Speaker outputs then either get returned to the desk in order to feed the various matrixes of the mixing system, or are picked up directly via MADI or network audio and sent to the loudspeaker management system.
In a recent big top production, Spat dealt with a circular stage positioned at the center of the four big top masts. The audience sat on three of the faces, while the last face was the backstage. Each of the three audience spaces was covered by an L and R speaker cluster position on the mast, while a third speaker position was in the center at a higher elevation. A single behind-stage L and R speaker cluster position was used as well (L and R when facing the main front face).
All around the big top were six rear “surround” speakers, positioned so that each pair of speakers covered one audience space as rear speakers.
The multi-room concept was deployed for this. A first virtual room consisting of five outputs (L, C, R and two rears), similar to a surround layout, was used to create a multichannel bed delivered to the three audience spaces. This allowed an immersive base mix to be delivered to each space using the VBAP panning technique.
While the rears were used for the base surround bed, a separate virtual room using the six surround speakers was deployed in order to produce rear effects for the complete audience, for example when wanting to spin a sound around the big top. This virtual room also included the backstage L and R speakers, and Spat’s gain and delay alignment aligned all speakers to their virtual positions. The point of adding the two backstage speakers was to give a feeling of sound coming from behind the stage or spinning behind it; again, the VBAP panning technique was used for this virtual room.
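VBAP (vector base amplitude panning) places a source by solving for the gains of the speaker pair (in 2D) or triplet (in 3D) whose directions enclose it. The following is a textbook 2D sketch of the idea, not Spat’s implementation:

```python
import math

def vbap_pair_gains(source_deg: float, spk1_deg: float, spk2_deg: float):
    """2D pairwise VBAP: solve g1*l1 + g2*l2 = p, where p is the source
    direction and l1, l2 the speaker unit vectors, then normalize the
    gains for constant power. A textbook sketch only."""
    def unit(deg):
        r = math.radians(deg)
        return (math.cos(r), math.sin(r))
    px, py = unit(source_deg)
    l1x, l1y = unit(spk1_deg)
    l2x, l2y = unit(spk2_deg)
    det = l1x * l2y - l1y * l2x          # invert the 2x2 speaker base matrix
    g1 = (px * l2y - py * l2x) / det
    g2 = (l1x * py - l1y * px) / det
    norm = math.sqrt(g1 * g1 + g2 * g2)  # constant-power normalization
    return g1 / norm, g2 / norm

# A source halfway between speakers at +30 and -30 degrees gets equal gains:
g_left, g_right = vbap_pair_gains(0.0, 30.0, -30.0)
```

As the source direction sweeps from one speaker to the other, the gain pair glides smoothly between (1, 0) and (0, 1), which is exactly why equidistant (or delay-and-gain aligned) speakers matter for this method.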
Another Spat room used all single and cluster loudspeakers, this time with the KNN panning technique, to allow positioning a source anywhere in the audience and really give the feeling that the source was emanating from where the designer wanted, or where the action was.
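The idea behind KNN panning can also be sketched in a few lines: feed only the k speakers nearest the virtual source, with distance-based weights. The inverse-distance weighting below is an illustrative choice, not Spat’s actual spread control:

```python
import math

def knn_gains(source_xy, speakers_xy, k=3):
    """K-nearest-neighbour panning sketch: only the k speakers closest to
    the virtual source get signal, weighted by inverse distance and
    normalized for constant power. Illustrative only."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    ranked = sorted(range(len(speakers_xy)),
                    key=lambda i: dist(source_xy, speakers_xy[i]))
    chosen = ranked[:k]
    weights = {i: 1.0 / max(dist(source_xy, speakers_xy[i]), 1e-6)
               for i in chosen}
    norm = math.sqrt(sum(w * w for w in weights.values()))
    # Speakers outside the k nearest get zero gain -- the spread stays local.
    return [weights.get(i, 0.0) / norm for i in range(len(speakers_xy))]

speakers = [(2, 0), (0, 2), (-2, 0), (0, -2), (1.5, 1.5)]
gains = knn_gains((1.8, 0.2), speakers, k=2)
```

Because only the nearest speakers fire, a source can appear to emanate from an arbitrary point in the audience area, with k controlling how wide the image spreads.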
To complement this, some sources could be patched into a binaural room, which became a feed that could be sent to recorders and provide a spatial experience on headphones.
Figure 4: All speakers with KNN panning
Figure 5: 5.1 soundscape delivered to multiple zones
Figure 6: Different speaker arrangements and panning types used simultaneously in one and the same project
Another theatrical project included the strategy of sending the 12 artist microphone post-fader feeds to a room for real-time tracking of these artists (actors on stage). Spat’s ability to integrate with the tracking system allowed tracking devices (known as beacons) to be attached to the source actors. For this, the DBAP panning technique was chosen, as no assumption could be made as to where the audience would be sitting and a central listening point didn’t exist. This provided good signal distribution while offering some localisation as the actors were tracked across this very wide stage. Only a portion of the speaker rig was used here: the seven front clusters and their delays. Another room, for a different effect and using a different set of loudspeakers, was used too; it received sources from a fixed number of console aux buses (send buses to the Spat immersive audio software).
Interested in this conversation? Follow our different blog articles!
Stay tuned – Subscribe to our newsletter
A blog post by Hugo Larin, Business Development for Spat Revolution
When you find yourself working on a custom multi-channel speaker arrangement, the Speaker Configuration editor is where a model of the sound diffusion system can be defined and stored in the list of speaker configuration presets. As the image below shows, the current config (a predefined system config) can be duplicated (a copy is generated for editing), or a new config can be created.
Figure 1: The speaker configuration window showing a pre-defined 13.1 Auro 3D speaker arrangement
Managing the speaker configuration includes the ability to delete a config, rename a config, export configuration(s) to a file, or import configuration(s) from a file. Note that Spat Revolution’s predefined speaker arrangements can’t be deleted or renamed, but duplicating them (making a copy) allows you to edit the arrangement, starting from an existing configuration. The Normalize feature is there to rapidly scale down the speaker arrangement so that the furthest speaker sits at a distance of only 2 m. This reduces the virtual room size and makes the parameter ranges easier to work with on very large speaker setups.
Figure 2: Editing a speaker configuration showing a copy of a 13.1 Auro 3D speaker arrangement
Once editing a speaker configuration, you can + Add, – Del, Move Up or Move Down speakers in the list. Note that the total number of channels in your arrangement is shown above the list. Does your speaker system contain a low-frequency (LFE) channel that you want to be able to send audio to, like an aux send? Simply adding a channel (or channels) named LFE will do the magic here directly. This particular channel won’t be fed by the virtual room panning, but by the LFE send available on each source in rooms containing an LFE, and obviously without the room reverberation.
Position information for a loudspeaker can be entered as X, Y, Z in meters, or as azimuth (degrees), elevation (degrees) and distance (meters). Delay and Gain can be used to manually align the physical speaker location to a virtual one, essentially creating a virtual speaker.
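The two coordinate systems describe the same position and are interchangeable. As a sketch, here is the conversion under one common convention (azimuth 0 = front, positive azimuth to the right, elevation up); Spat Revolution’s own axis convention may differ, so treat this as illustrative:

```python
import math

def aed_to_xyz(azimuth_deg, elevation_deg, distance_m):
    """Spherical (AED) to Cartesian (XYZ) under one common convention:
    azimuth 0 = front (+Y), positive to the right (+X), elevation up (+Z).
    The actual axis convention in Spat may differ -- a sketch only."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.sin(az)
    y = distance_m * math.cos(el) * math.cos(az)
    z = distance_m * math.sin(el)
    return x, y, z

def xyz_to_aed(x, y, z):
    """Inverse conversion, same convention."""
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, y))
    elevation = math.degrees(math.asin(z / distance)) if distance else 0.0
    return azimuth, elevation, distance

# A speaker 30 degrees to the right, at ear level, 3 m away:
x, y, z = aed_to_xyz(30.0, 0.0, 3.0)
```

Round-tripping a position through both functions returns the original values, which is the property any such editor relies on when you switch entry modes.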
Wondering what Compute and Reset are all about? Spat Revolution can use the measurements you put into the speaker arrangement to calculate (and apply) the delays and gains for rendering spatial audio on a physical configuration whose speakers may not sit in ideal locations. This technique is recommended when using sweet-spot-centric panning methods such as VBAP and VBIP, which provide very smooth panning on arrangements where all speakers are equidistant from the optimum listening position. That is what Compute does for you when desired with such a panning method: you don’t need to move the speakers; computing the delays and gains gives an optimal panning experience. Note that it is preferable to do this in Spat Revolution rather than in external processing, as Spat will use the computed speaker locations (the virtual speakers) for the actual spatialization afterward. This is an advanced speaker management technique made easily accessible by a single press of the Compute Speaker Alignment button.
In some cases, creating speaker setups in an editor is not the most efficient way, primarily when the information is already available as a list exported by acoustic design and simulation software such as that used by loudspeaker companies; the speaker config work was already done once. Although an advanced technique to be used with caution, a custom speaker arrangement file template is available from the Flux:: support team that lets you enter your speaker arrangement configuration along with the locations of the different speakers in it. This method allows inputting speaker configurations as Cartesian (FromCartesian) raw data, polar (FromSpherical) data, or by creating speaker layers with specific angles. All this can be quite practical for larger, more complex setups.
Interested in this conversation?
Follow our news and blogs
Stay tuned – Subscribe to our newsletter
The Tonmeistertagung meet-up in Cologne, Germany, is a yearly congress and exhibition providing an extensive overview of the latest trends in the pro audio industry, gathering leading experts and professionals from the broadcast, recording, theatre, film and television industries to share their experiences.
This year Flux:: is represented by live sound engineer and S.E.A product specialist Jonas Gehrmann, talking about the possibilities immersive audio currently offers, and how to create and build an immersive sound experience for the audience using the AVID VENUE | S6L console and the Spat Revolution Immersive Audio Engine.
The Immersive Audio demonstrations are taking place in the AVID & S.E.A demo room; 2-Demo-H, setup for full on immersive audio integration demonstrations with the AVID S6L.
Daily 11.00, 13.00, 15.00 & 17.00 (Saturday only 11.00 & 13.00)
For the second time, the Student 3D Audio Competition for high-quality productions in Ambisonics 3D audio is hosted by the Tonmeistertagung. During the show, the jury presents the nominated productions in an ideal 3D audio listening environment, announces the winners, and hands out the awards.
Student 3D Audio Competition 2018
The post Tonmeistertagung Meet-up in Cologne – November 14-17th appeared first on Flux::.
Come and visit us at the International Broadcast Equipment Exhibition 2018 in Tokyo, November 14-16th, where you will find us together with our partner and distributor, Media Integration.
International Broadcast Equipment Exhibition, “The Professional Show for Audio, Video and Communications”, is a 3-day event being held from 14th November to the 16th November 2018 at the Makuhari Messe in Chiba, Japan.
This year at InterBEE, Flux:: founder and lead developer Gaël Martinet, together with our Japanese partner Media Integration (Hall 2 / 2115), will be presenting the Spat Revolution Immersive Audio Engine and its unique features, high degree of creative freedom, routing and transcoding capabilities, as well as Dolby Atmos, Ambisonics, NHK 22.2 and many of the additional formats supported by Spat Revolution.
The latest immersive audio workflow using Spat Revolution and ProTools will be presented at the Main Stage of Avid Japan – Everyday at 15:20
Together with Avid Live Sound partner Audio Brains, further presentations will focus on Avid VENUE S6L integration with the Spat Revolution Real Time Live Audio Engine. The presentations will discuss the integration of virtual source control in Avid S6L console and the power of the snapshot system to create automation with timing interpolation parameters.
Avid VENUE S6L integration with Spat Revolution at Audio Brains – Everyday at 14:00
More details can be found here (In Japanese)
Media Integration at InterBee
Today we release Spat Revolution version 1.1, containing a range of improvements and optimizations to the GUI, the automation and OSC implementation, the Spat plugins, the speaker configuration, and the overall software stability.
In addition to this, an extensive list of new features and functions has been implemented enhancing system configuration setup and external integration, and improving the user interaction and creative workflow, including:
Added local audio path support for AU, VST and AAX Spat Send and Spat Return Plugin
This provides the ability to route audio to and from your DAW software via AAX, AU and VST Spat Send and Return Plugins for a complete local integration on your local computer.
New speaker arrangement presets and enhanced Speaker Config dialog with normalization function, background image and new editing modes
This simplifies the creation and editing of speaker arrangements.
Added ability to change listener position with 6 degrees of freedom (X, Y, Z, Yaw, Pitch, Roll); working in all different room types (fully automatable)
Supporting Automation, OSC and Real Time Tracking System control.
Added real-time tracking system integration (BlackTrax RTTrPM protocol) for source and listener positions
This provides the ability to integrate the BlackTrax RTTrPM protocol for real-time tracking, with tracking beacons assigned to a source or listener position.
Added support for non-relative direction of the source
This changes the source behavior: instead of a continuous direction (yaw) following the center of the room or listening position, sources now have the option to keep their own direction (yaw) rather than always pointing to the central reference point.
Added Reverb functions including “Late Reverb” Factor and “Main Reverb” Gain, Reverb Early/Cluster/Tail on/off per source and import and export preset function
This includes additions to the reverb section of sources and the main reverb section of each room.
Other additions include:
Improved multi-channel and multi-source handling with new source coordinate mode option. Polar and Cartesian are provided.
In Polar, AED (radiation) parameters are sent to OSC by default whenever X, Y, Z or AED parameters change. (Note the ability to force AED or XYZ in the OSC preferences for specific OSC output slots.) The polar system is the internal coordinate mode, and incoming XYZ parameters are interpreted in polar.
In Cartesian, XYZ coordinates are sent to OSC by default whenever X, Y, Z or AED parameters change. (Note the ability to force AED or XYZ in the OSC preferences for specific OSC output slots.) The Cartesian position system is the internal coordinate mode, and incoming AED parameters are interpreted in Cartesian.
Added duration to Spat plugin parameters
The Spat Send plugin now includes individual and global duration parameters, allowing parameter changes to be executed over a set duration.
Added HOA Decoder settings in transcoder
Added OSC Implementation
The implementation includes a new ping-pong property, OSC message creation from Python, sending object indexes as OSC arguments, a function that sends all position data (AED) simultaneously to the selected OSC output, and duration and timing support.
Added Master Routing patch options
Added Global Room Gain and room presence factor
Scaling factors added (distance, elevation, barycentric scale, and various others)
Added “Convert To Hardware IO” to convert blocks from Send/Return to Hardware IO
Enhancements for the AAX VENUE S6L plugin, including a duration parameter attached to each Spat plugin source parameter
The Spat Send plugin now includes individual and global duration parameters, allowing parameter changes to be executed over a set duration.
Enhanced HRTF implementation and HRTF files management
Optimized 3D Nebula display
General optimizations to the GUI, the Automation and OSC implementation, the Spat Plugins, the Speaker Configuration and the overall software stability
Full Release Notes
Spat Revolution is available for testing in full version with a 30-day time limited license.
An iLok user account is required (No USB dongle needed).
Download and Installation
Spat Revolution Version 1.1 can be downloaded and installed using Flux:: Center download and installation manager.
How to download and install software using Flux:: Center
Technical Supplier: Feedback Show Systems & Service GmbH
Immersive Audio Consultants: Sphereo – Jonas Gehrmann & Fabian Knauber
Sound Designer: Cedric Beatty
FOH Engineer: Tobias Wallraff
On October 11th this year, the VIVID Grand Show had its premiere at the Friedrichstadt-Palast theater, one of Europe’s largest and newest venues, located in the Mitte theater district in Berlin.
The show is a declaration of love to life featuring more than 100 stunning artists on the world’s biggest theater stage, with a long-term residency plan and a production budget of 12 million euros for spectacular costumes, and stage sets of unparalleled dimensions.
The sound design team turned to Flux:: and the Spat Revolution Immersive Audio Engine, which provides the main virtual room, a 5th-order Ambisonic (HOA) soundscape, to achieve smooth and precise crossover panning through the different systems.
One of the important requirements that made Spat Revolution the obvious choice for this show was the ability to transcode the Ambisonic mix to different channel-based speaker arrangements, one of the core features of Spat Revolution.
“With adjusting the distances between all speaker positions we accomplished a very smooth panning for 360° without huge level drops,” states Jonas Gehrmann, 3D audio consultant from Sphereo.
Another important task for this project was to provide an additional immersive mix for the Wall Sky Lounge, a special VIP area with a 4.1 channel-based system, plus stereo backstage mixes. Both are achieved with Spat Revolution by simply down-mixing the original 5th-order Ambisonic mix using different transcoders simultaneously.
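For reference, the channel count of a full-sphere Ambisonic mix grows quadratically with its order, which is why a single 5th-order master is a 36-channel bus that can then be decoded to each of these smaller layouts. A one-line sketch of that relationship:

```python
def hoa_channel_count(order: int) -> int:
    """A full-sphere Ambisonic mix of order N carries (N + 1)**2 channels."""
    return (order + 1) ** 2

# The 5th-order HOA soundscape used for the show:
print(hoa_channel_count(5))  # 36

# Channel counts for orders 1 through 5:
for order in range(1, 6):
    print(order, hoa_channel_count(order))
```

Because the Ambisonic master is layout-agnostic, the same 36-channel mix feeds the main house decode, the 4.1 VIP area and the stereo backstage mix without remixing.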
At FOH on this project an Avid S6L 32D mixing system is used, feeding two Spat Revolution engine servers (Mac Pro) via MADI while Pro Tools is providing playback and automation using the Spat Send plugins.
The in-house classic system is comprised of a main LCR arrangement together with surround speakers. The original routing of this system was using single console auxes to single surround positions in order to pan sources manually in the house.
Since the house system is also used for guest productions, which need a traditional setup, and since neither space nor budget allowed for a second immersive system, the current house system alone is used for the immersive integration. Keep in mind the whole visual design of this spectacular show!
Composed of different loudspeaker brands and models, the system includes main hangs and delays from Turbosound (FLEX ARRAY) and surround hangs with FLEX ARRAY and Meyer Sound M’elodie. The surround speakers include Meyer Sound UPJ, and finally Turbosound FLEX and Meyer Sound MM4 complete the system as fills.
In 2007 Flux:: started releasing their own exquisite audio software product line tailored for demanding sound engineers, and has since then been focused on creating intuitive and technically innovative audio software tools, used by sound engineers and producers in the professional audio, broadcast, post production and mastering industry all over the world.