Flux:: sound and picture development was founded in the 1990s, during the early days of digital audio workstations, collaborating with Merging Technologies on the creation of Merging’s now well-renowned products.
Seoul, Korea – March 2021
Teaching religion and spirituality to young people has always been a challenge, and even more so with the limited attention spans of today’s media-savvy youth. Korea’s Presbyterian Association aims to address the challenge head-on with the opening of the new Jibet Center in Seoul.
Occupying the entire third floor of the new 100th Anniversary Memorial Hall, the Jibet Center teaches the tenets of Christianity via an interactive environment that includes immersive audio via FLUX:: Immersive SPAT Revolution.
Myungsung Nano Systems has created an immersive system implementing a 12.1 channel audio setup designed by Dohyung Kim, with music and sound designed by electronic musician Joonha Kim.
The space includes four-sided interconnected walls of video and immersive audio, designed for flexibility and versatility. Adjacent to the main space is a fully equipped recording studio, and students are encouraged to create and contribute their own content as well.
Early adopters of immersive audio, Myungsung Nano Systems has been behind a number of projects on the Korean peninsula, including the recent Dolby Atmos system at Sungkyul University’s Department of Arts.
As Dohyung Kim explains, the selection of SPAT Revolution for the Jibet Center project was a logical one.
“SPAT’s integration with multiple DAW platforms, OSC, and MIDI, and its powerful flexibility were of paramount importance for an environment like this one, where children create multimedia content for immediate implementation. SPAT enabled us to program a system where azimuth can be programmed to interact directly with, for example, an external synthesizer’s LFO controls, or to enable the pitch control to move the Y-axis position. We were able to preconfigure speaker arrangements, panning, and routing to make it easy for the children to come in and use the system.”
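As a rough illustration of the kind of OSC control Kim describes, here is a minimal Python sketch that encodes OSC 1.0 messages by hand and maps a sine LFO to a source’s azimuth. The address pattern `/source/1/azim`, the port, and the LFO mapping are assumptions for illustration only – check the OSC namespace and connection settings of your own Spat Revolution configuration.

```python
import math
import socket
import struct

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC 1.0 message with float32 arguments."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "f" * len(args)).encode())
    for a in args:
        msg += struct.pack(">f", a)  # arguments are big-endian float32
    return msg

def lfo_to_azimuth(phase: float, depth_deg: float = 90.0) -> float:
    """Map a sine LFO (phase in radians) to an azimuth in degrees."""
    return depth_deg * math.sin(phase)

# Send azimuth updates for source 1 to a Spat Revolution OSC input.
# Address pattern and port are assumptions -- verify against your setup.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for step in range(8):
    az = lfo_to_azimuth(step * math.pi / 4)
    sock.sendto(osc_message("/source/1/azim", az), ("127.0.0.1", 9000))
```

The same encoder can carry any other float parameter (elevation, distance) by swapping the address, which is what makes mappings like “synth LFO to azimuth” or “pitch to Y-axis” straightforward to script.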
“There are always challenges in live situations, but SPAT is really the best solution for this installation, and adapts to our needs easily. It is a powerful tool that can bring out my creativity. I have used it to try some new arrangements of my music for binaural or immersive shows, and it always inspires me to try new things.”
The post Seoul’s Jibet Center Brings Immersive Learning with SPAT Revolution appeared first on FLUX:: IMMERSIVE.
South Bend, IN – February 2021
In the city of South Bend, one production company has found a unique solution to get artists and crew back to work and bring music and shows to audiences hungry for arts and culture: an immersive environment created using SPAT Revolution from FLUX:: Immersive.
Employing an innovative 8-channel setup and a lot of creative imagination, the South Bend Symphony Orchestra has created an immersive performance experience that not only brings the listener into the symphony hall, but right up onstage with the orchestra.
The project began life as an inquiry from Halle McGuire with the South Bend Symphony. The Symphony wanted to know if it would be possible to create an installed sound system that could accurately reproduce the experience of listening to an orchestra, but could be set up in a space where people could walk through and maintain social distancing.
“For us, the challenge was to think outside the box and find a way to make music and employ musicians safely, and bring music to an arts-deprived audience,”
explains Eric Friedlander, Sound Designer for the Octet and Production Manager for the Festival.
“We had some discussions about potential concepts, and after identifying the boundaries of the project – prospective spaces, sizes of ensembles we could safely record, etc. – I suggested a semi-portable circular sound system where each recorded instrument would be assigned to a loudspeaker. I had already seen a demo of SPAT at an Infocomm show, and I kind of had the idea in my back pocket of creating this 3D experience using SPAT. This seemed like the perfect opportunity.”
After presenting proof of concept and getting signoff from the Symphony, the next step was building a semi-portable system comprising eight JBL CBT50 speakers on tripods. Friedlander utilized the Waves LV1 platform to record the octet, enabling him to easily import and export to and from Pro Tools using SoundGrid and run SPAT as inserts.
“It gave me a ton of flexibility. We routed each instrument through its own output and the result is a small scale immersive experience that will allow you to hear eight musicians playing in an octet in a way that no one has really been able to experience before.”
“When you’re in the audience, you’re separated by the stage, and the musicians are all on stage,” McGuire observes.
“You get the whole sound coming at you but you never get to feel like you’re onstage.”
Friedlander agrees. “People will be able to walk around and they’ll kind of be able to mix it themselves. They can walk around and hear the interplay between various instruments, stand in the center, and essentially create their own experience.”
“Mixing to SPAT was a blast,” Friedlander enthuses.
“It was so exciting to be able to take my flat mix, tuned the way we wanted it, and just expand that out into an immersive and engaging 3D space. I made good use of the timbre controls in SPAT, as it was a really neat way to fine tune the sound of the instrument in the immersive space, without necessarily changing EQ and the overall sound that we’d achieved pre-immersive mix. It just was a great way to really polish the mix we had and better fit it into the space to get the realism we wanted.”
Friedlander also gave high marks to the reverb engine in SPAT.
“It was a total gamechanger. Having the very clear, very direct sound of each instrument, it gave us a lot of leeway to decide how to best apply the reverb engine and think about what space we wanted to put people in as they were listening. It also really glued the mix together to put everything in the same ‘space’.”
For the Symphony, the promise of this immersive experience goes well beyond merely a temporary solution for coping with the shutdown. As McGuire explains,
“This is something really special that we can give to our community in a time when we can’t give them a traditional live performance, but it’s not only relevant during the pandemic; it’s something we can continue to use. It’s completely portable, so we can take it into a school, to a library, we can set it up in a hospital. We can make live symphonic music available to everyone.”
“I’m really pleased not just with the experience we were able to create, but the fact that we were able to create it at all,” Friedlander concludes.
“Using SPAT and other mobile tools to build an immersive, object-based mix enabled us to do things that wouldn’t have been possible in a traditional mix environment, and give the symphony and the arts community something they can use to provide a new and totally different experience in the future.”
The post South Bend Symphony Orchestra Brings the Audience Onstage with FLUX:: Immersive appeared first on FLUX:: IMMERSIVE.
Spat Revolution and Dolby Atmos workflow
Many users have asked us about the major differences between FLUX:: Immersive Spat Revolution and the Dolby Atmos production tools. Particularly with Dolby’s increased focus on music and its emphasis on online distribution services for independent artists, questions comparing the two platforms often arise.
Can these production tools be compared, and what are the advantages of each?
This article focuses on Spat Revolution, a standalone application (with its integration plugin suite), and how it differs from the Dolby Atmos tools. Hopefully some of the information here will help demystify the hype – the object-based immersive production hype!
At its base, Dolby Atmos is a proposed workflow and a specific deliverable for Dolby Atmos speaker arrangements. One thing immersive productions need is good, intuitive workflows, and the current boom has triggered a greater demand for well-designed, efficient ones – multiple creation and diffusion tools working in unison.
Dolby offers integration with selected DAWs, while Spat Revolution endeavors to offer wider options, with more plugin formats supported. That said, Dolby offers the ability to create an ADM master, a recognized proposed standard that is agnostic of the actual playback format (it carries the object-based mix itself). More on this later.
Dolby (and others) offer commercial solutions where much comes down to licensing – for example, the streaming/distribution services for delivering Dolby content, or Dolby Atmos theatre venues. Sony Music takes a similar approach for delivering Sony 360 content on various platforms. Licensing is a key spin on these proposals.
A number of differences can be found between Spat Revolution and the Dolby Atmos production tools. While we won’t directly compare something as subjective as sound quality, Ircam’s 30 years of research and expertise have made Spat a very high-level technology. With regard to format possibilities, spatialization technologies, room simulation, flexible workflows, and adapting to various industry domains and deliverables, the two are simply different beasts. They are different, but they can also be very complementary.
Object-mixing rendering and formats
The Dolby Atmos Production Tools are ultimately an object-mix renderer that outputs to a specific family of speaker arrangements (Atmos), while Spat offers a wide range of pre-defined common arrangements as well as custom ones. This means you can create for a wide range of configurations and deliverables, up to a fully custom immersive installation.
Monitoring on your speaker arrangement or on headphones
Monitoring your object-based mix in a different environment than the deliverable is easy with Spat Revolution; this is great for those situations where you have a smaller or larger monitoring setup. For example, you can monitor on your available system, using a separate virtual room with a different format but the same object mix, while in parallel rendering your actual deliverable (including an Atmos mix or any other format). Using Spat’s binaural monitoring also makes it possible to virtualize any speaker arrangement on headphones.
Binaural for headphones
Spat Revolution offers extensive binaural possibilities: HRTF libraries, personalized HRTFs, and various binaural modes – some not based on HRTFs at all, such as the Snowman and Spherical Head models. Dolby also offers a binaural implementation, and has lately been working on this front, facing the reality of audiences listening on headphones more than ever.
Panning and technologies
In terms of panning algorithms, Dolby utilizes a layer-based approach (LBAP in Spat), which is more forgiving than vector-based panning (VBAP) – the latter can sometimes be challenging with non-uniform systems. Spat offers a wide variety of panning options, whereas Dolby uses a single, fixed panning model; note that the two layer-based approaches may not be *exactly* the same. Layer-based panning is a good way to work with speaker elevation layers – the .4 that complements your base 7.1, for example.
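Since neither Dolby’s nor Spat’s exact layer-based implementation is published here, a generic textbook sketch of the layer-crossfade idea may still help: pan within each ring of speakers as usual, and crossfade between rings by elevation with constant power. The layer angles below (base ring at 0°, height layer at 45°) are illustrative assumptions.

```python
import math

def layer_gains(elev_deg: float, lower_deg: float = 0.0, upper_deg: float = 45.0):
    """Constant-power crossfade between two elevation layers (LBAP-style).

    Returns (gain_lower, gain_upper) for a source at elev_deg. The sum of
    squared gains stays 1, so the perceived level is constant as a source
    moves between the base layer (e.g. the 7.1 ring) and the height
    layer (e.g. the .4 overheads).
    """
    t = (elev_deg - lower_deg) / (upper_deg - lower_deg)
    t = min(max(t, 0.0), 1.0)  # clamp to this layer pair
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)

# A source on the base layer feeds only the 7.1 ring...
g_low, g_high = layer_gains(0.0)    # -> (1.0, 0.0)
# ...and halfway up, power is split equally between the layers.
g_low, g_high = layer_gains(22.5)   # both ~0.707
```

The forgiving quality mentioned above comes from the clamp: a source slightly above the height layer still maps cleanly onto it, instead of depending on a speaker triplet that a non-uniform system may not provide.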
Acoustic simulation (reverberation)
While Dolby is primarily a panning tool, Spat Revolution brings room acoustic simulation (reverberation) to the virtual spaces where the object mix is done. Much of immersive audio is dependent on how sound is perceived in the real world, and the room effect of Spat helps to reinforce the localization and other aspects.
Consider the possibility of localizing the simulated early reflections of each source object; it brings a huge sense of reality. Spat also goes deeper in source/object properties overall, with many options beyond object position, such as perceptual parameters. You can use as little or as much of this as you want.
Ultimately, Spat will let you create virtually any deliverable – Atmos, Sony 360, DTS, or a custom sound installation – and it works in stereo too. In fact, Spat can even deliver a scene-based Ambisonic mix (agnostic of the speaker system) to be decoded down the line, and Spat Revolution can decode Ambisonics as well.
Regarding agnostic deliverables (channel-based or renderer-agnostic), Dolby supports exporting an ADM master from your audio creation workflow. This means, for example, that you can import this object-based mix (an ADM BWF .wav audio file with metadata) into a tool such as the Ircam ADMix player and render the various formats with the various technologies. The OSC integration in this player makes it possible to drive the dynamic and static objects in an external renderer. Our team has done this with Spat Revolution acting as that external renderer.
ADM-OSC – Production tools and how they can be complementary
Thanks to FLUX::’s involvement in the ADM-OSC initiative, some DAWs now support the optimal integration of external renderers by mapping their panner bidirectionally instead of via plugins. This was added by Nuendo and Merging Technologies at the end of last year, with others expected to follow.
With this, the renderer, such as Spat Revolution, offers a nice integration with the DAW, and since some DAWs can import ADM masters, you end up with an object-based mixing environment that is renderer format agnostic (even if the original creation was made in the Dolby environment).
This shows how a project can start with a creator mixing in a Dolby Atmos workflow, while the production needs to render other formats later. You can also imagine the opposite scenario, where Spat Revolution creates some 7.1 or 7.1.4 beds with acoustic simulation, and the prints then get inserted into a complete Dolby production workflow.
The ADM-OSC initiative mentioned above started on the live production side of the immersive wave – from audio capture to the live broadcast workflow (where OSC is already the main protocol in the industry)!
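To make the ADM-OSC idea concrete, here is a small sketch of the address/value pairs a DAW panner might emit for one object. The `/adm/obj/<n>/...` namespace and the value ranges below follow the public ADM-OSC draft, but they are quoted from memory – verify them against the version your tools implement.

```python
def adm_osc_position(obj_id: int, azim_deg: float, elev_deg: float, dist: float):
    """Return ADM-OSC address/value pairs describing one object's
    position in polar coordinates.

    Assumed ranges per the ADM-OSC draft: azimuth -180..180 degrees,
    elevation -90..90 degrees, normalized distance 0..1.
    """
    assert -180.0 <= azim_deg <= 180.0
    assert -90.0 <= elev_deg <= 90.0
    assert 0.0 <= dist <= 1.0
    base = f"/adm/obj/{obj_id}"
    return [(f"{base}/azim", azim_deg),
            (f"{base}/elev", elev_deg),
            (f"{base}/dist", dist)]

# A DAW panner placing object 1 front-left and slightly raised might emit:
for address, value in adm_osc_position(1, 30.0, 15.0, 1.0):
    print(address, value)
```

Because both the DAW panner and the external renderer speak this same small vocabulary, mapping the panner bidirectionally (rather than going through plugins) is mostly a matter of wiring these addresses in both directions.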
Building ecosystems from creators to distribution and diffusion!
Orleans, France – January 2021
Medical professionals and spiritual healers alike have long acknowledged the healing powers of music. For the Medicine Man Orchestra, it’s the basis of their music, their performance, and their mission.
Born out of a collaboration between French producer Mathieu Insa and Beninese Griot Seidou Barassounon, Medicine Man Orchestra takes their style from the tradition of the Griot, the storytellers and poets and musicians of West Africa. But while the Griot are known for traditional instruments like the kora, goje, and balafon, Medicine Man Orchestra mixes it up, bringing in the synth orchestrations of Alissa Sylla from Senegal along with UK guitarist Justin Adams to blend with percussion from Mélissa Hie and Wura Samba, and the Griot storytelling of frontman Barassounon.
On the visual side, Nicolas Ticot’s XLR Project cranks out stunning graphics to enhance the show, while France Médias Monde and RFI Labo create an immersive audio experience via binaural technology using SPAT Revolution from FLUX::Immersive.
It was a chance meeting at Midem in 2019 that connected the dots between the Medicine Man Orchestra and immersive audio via RFI Labo.
The company has a long history with 3D multimedia, including an ongoing series of productions with their France Médias Monde group. “Mathieu Insa came to our booth at Midem to listen to our SessionLab program where we present artists’ binaural mixes,” recounts RFI Labo Director Xavier Gilbert.
“He told us about his company, Aribo Production, and about the project he was working on with West African Griots music therapy. We knew immediately that Medicine Man Orchestra was a perfect fit for RFI Labo, with its combination of Griot culture and Electronic Music, and with people who were as passionate about their work as we are.”
As composer Alissa Sylla explains, immersive audio is a perfect complement to the troupe’s music and performance. “In the teachings and the culture of the Griot, physical space is a critical factor. The songs, the incantations, the dances, are designed for the outdoors – we could hardly enclose this music and these traditions in a room. Thanks to the immersive audio created by the RFI Labo team and FLUX::Immersive, we are able to begin to recreate that space, to immerse the viewer in a space that is more tangible than a typical stereophonic space.”
“The Medicine Man Orchestra’s music is ideally suited for an immersive mix,” confirms Benoit Le Tirant, 3D audio sound engineer with RFI Labo.
“The balance between the acoustic sources and synthesizers can sometimes be a challenge, and being able to refocus them within the 3D field helps them to be heard yet not stand out from their intended place in the mix.”
For live performance, the SPAT Revolution system is coupled with a single PC running Pyramix software. “Depending on the song, between 50 and 70 tracks per song are distributed according to their group – harmony, rhythm, percussion, vocals – into four or five binaural rooms within SPAT,” Le Tirant explains.
“This enables us to have a quick view of each group of instruments, and makes the mix more intuitive and more precise.”
Le Tirant also points to SPAT’s object-oriented approach as a critical part of the workflow. “Being able to link and unlink objects really changes my way of mixing in movement. The visual interface is ideal for immediately putting the intent and result of the mix into perspective.”
As with any leading-edge technology, the company is also a critical choice, observes RFI Labo’s Gilbert. “Before choosing a tool, we choose the company behind it. At every point in our productions, we need knowledgeable and responsive help from people who understand the purpose of our artistic work. The strength of SPAT Revolution is the FLUX:: team.”
“With much of what is called ‘World Beat’ music, there is a very dynamic component that is often very electronic and highly produced,” observes Producer Mathieu Insa.
“With Medicine Man Orchestra, we wanted to restore the relationship between the modern and traditional. The use of immersive audio highlights the elegance of Alissa Sylla’s electronic sounds with the spectral griotic aura of Seidou Barassounon. In short, immersive audio is precisely the right platform to reinforce the sacred content of the project. More than fusion, I would call it almost a magic osmosis.”
“Immersive audio can be applied to all styles of music – you just need to ask yourself the right questions,” offers Le Tirant. “What do we want to convey to the listener? With MMO, we have the added dimension of music therapy. Immersive audio is perfect for this application, where the music must immerse the listener within the music and its movements.”
“Especially in these very challenging times, the Medicine Man Orchestra project, which is created around the idea of being therapeutic and calming, is surely quite appropriate,” concludes Sylla.
“These therapies are very important in the face of issues of anxiety and isolation.”
Medicine Man Orchestra’s album will be released February 5, 2021 on all major streaming platforms, exclusively in binaural.
The post RFI Labo Takes Medicine Man Orchestra Immersive with SPAT Revolution appeared first on FLUX:: IMMERSIVE.
Montreal, Canada – February 2021
Founded in 1969, the University of Quebec at Montreal (UQAM) is one of the largest French-language universities in the world, enjoying worldwide recognition for its academic excellence and its innovation in the arts. One of UQAM’s more popular courses of study is its Media School, where students can major in all facets of audio and media production.
The school was one of the first in the country to offer courses in multimedia and immersive audio, and has been building a state-of-the-art multichannel facility and research center, Hexagram, with immersive technology provided by SPAT Revolution from FLUX:: Immersive.
Professor Simon-Pierre Gourd oversees the Media School’s Sound Creation and Experimental Media labs, and has a long history working with immersive audio. A founding member of the Institut Universitaire des Nouveaux Médias, Gourd reflects on the rapid evolution of immersive audio.
“In the early days of immersive audio, we started out experimenting with an old Tascam console, using the cue outs and direct outs to create an 8-channel mix,” he explains.
“Eventually we moved on to ADAT and then DAW. Needless to say, it was all fairly primitive, and it was always a struggle to achieve the environment I had in my head with the tools we had to work with. I call it the Technological Tango – the dance between what your tools are designed to do, and what you want to make them do.”
The introduction of FLUX:: Immersive software profoundly changed the equation, says Gourd.
“Object-based mixing is a completely different approach to immersive audio. Sounds are no longer related to a channel. Now, each sound can be an object in a virtual space.”
One of the most impressive aspects of immersive audio is also one of its biggest challenges: that is, the almost limitless range of possibilities. Artists are not restricted in channel counts or configurations, and that can mean that a composition created for a 16-channel circular arrangement might end up being played back in a 36-channel immersive dome environment. As Gourd explains, this can easily become problematic.
“There are a lot of composers working in dome-type environments with widely varying channel counts and different speaker placements, which require different angles. We’ve done a number of events where we’ve featured multiple composers and multiple configurations, including both immersive and binaural. In one case, we had a guy deliver a piece done in 22.2; our system has 32 channels, with speakers in much different locations. We were able to load the project into SPAT and easily reconfigure it. About an hour of tweaking and fine-tuning and everything worked.”
Unlike many other immersive platforms, SPAT also enables users to create systems using the speakers and hardware of their choosing. “A lot of the systems available today are closed architecture,” Gourd offers.
“You have to buy their speakers and their processors, and that can be restrictive and expensive. With SPAT, we can remain hardware-agnostic, choosing whatever loudspeakers we prefer and working within different platforms.”
That flexibility is ideal for a university environment with multiple departments and disciplines.
“We have something like 60 researchers here, working in film, video, artificial intelligence, and other forms of media. Some of us work in Pro Tools, some with Max or other software. With SPAT we can accommodate everyone’s systems and everyone’s workflow.”
For UQAM students this also enables them to dabble in a wide range of experiences.
“Our students have the opportunity to try pretty much everything having to do with audio and media production. Whether it’s sound design, audio for games, or experimental media, we are able to provide them the environment to sample different disciplines and see what appeals to them. SPAT allows us to create any type of production environment.”
For Simon-Pierre Gourd, SPAT Revolution has marked a paradigm shift in his approach to immersive mixing.
“SPAT’s object based mixing has been a complete game changer. After so many years of fighting with the technology, we now have an interface that enables me to create the concepts I had in my head.”
The post SPAT Revolution Helps UQAM University Professor Bring Immersive to Life appeared first on FLUX:: IMMERSIVE.
Now in beta and about to be released, this is the biggest and most comprehensive release of Spat Revolution since its original launch. We couldn’t resist giving you a quick preview of some of the new features and workflow improvements: a snapshot system, a setup wizard, a drag-and-drop tool, new speaker arrangement imports and transformations, new reverb presets, new panning and spatialization technologies, an OSC refactoring with support for ADM-OSC, more processing resources than ever before thanks to a new multithreading algorithm, and much more.
Spat Revolution is, to say it straight, the most adaptable immersive mixing tool ever created.
Setup Wizardry – Easy creation of your configuration, like never before.
Join us for a Live Stream Webinar Presentation on all the new System Setup features on Thursday November 5th
9.00 AM Los Angeles / 12.00 PM Montreal / 6.00 PM Paris
Join us for a Live Stream Webinar Presentation on all the new Speaker Arrangement features on Thursday November 12th
9.00 AM Los Angeles / 12.00 PM Montreal / 6.00 PM Paris
Using Spat Revolution standalone and want to recall a snapshot manually, or via OSC, to change your audio mix scene? A new snapshot system with interpolated recall times is here.
Need to move all your sources at once? Use the new source transform feature to move the objects with an interpolation time!
Join us for a Live Stream Webinar Presentation on Managing and Manipulating your Object-based mix with Snapshots and Source Transformation on Thursday November 19th
9.00 AM Los Angeles / 12.00 PM Montreal / 6.00 PM Paris
We’re now adding Layer-Based Amplitude Panning (LBAP) and Dual-Band Vector-Based Panning (DualBandVBP) to our channel-based panning options, and we will be previewing support for our upcoming Wave Field Synthesis (WFS).
Three new methods of binaural rendering: NearField Binaural, Snowman Model, and Spherical Head Model. Furthermore, new acoustic simulation presets are available inside the virtual room.
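As a rough illustration of what a non-HRTF binaural model computes, here is the classic Woodworth spherical-head formula for interaural time difference – a textbook sketch of the underlying acoustics, not Spat Revolution’s actual implementation.

```python
import math

def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875,
                  speed_of_sound: float = 343.0) -> float:
    """Interaural time difference for a rigid spherical head (Woodworth):

        ITD = (a / c) * (sin(theta) + theta)

    with azimuth theta measured from straight ahead, valid for a distant
    source and |theta| <= 90 degrees. The default head radius (8.75 cm)
    is a common average value.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# A source 90 degrees to one side yields roughly 0.66 ms of delay.
itd = woodworth_itd(90.0)
```

Such geometric models trade individual accuracy for predictability: they need no measured HRTF library, which is presumably why they sit alongside the HRTF-based modes rather than replacing them.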
Join us for a Live Stream Webinar Presentation on New Panning and Spatial Technologies with new Acoustic Simulation presets on Thursday November 26th
9.00 AM Los Angeles / 12.00 PM Montreal / 6.00 PM Paris
Time for an OSC refactoring: 8 new OSC slots, OSC presets for input and output configuration with common third-party devices, and OSC transforms for dealing with normalization and scaling.
This is also the first release to support ADM-OSC, an industry initiative led by FLUX:: and peers toward the standardization of a common language in live production.
ADM-OSC is an extension of ADM in which OSC is the proposed protocol for connecting next-generation audio systems and the live production workflow of object-based audio – from live performance to broadcast serialized ADM.
Join us for a Live Stream Webinar Presentation on the Spat Revolution OSC refactoring and ADM-OSC support on Thursday December 3rd
9.00 AM Los Angeles / 12.00 PM Montreal / 6.00 PM Paris
The post Spat Revolution Major Release Coming Soon – A Huge Leap in Power and Performance! appeared first on FLUX:: IMMERSIVE.
Doha, Qatar – October 2020
Even in a skyline as futuristic as Qatar’s, the new National Museum of Qatar stands out as visually stunning. Designed by renowned architect Jean Nouvel, the museum’s modernistic exterior, adorned with 539 massive disks that lend it an almost other-worldly character, is home to more than 86,000 square feet of state-of-the-art exhibit space, with multiple interconnected galleries offering towering walls of video and immersive, multi-dimensional audio created using Flux::Immersive’s Spat Revolution.
The museum’s groundbreaking immersive audio design was created by Basel, Switzerland-based Idee und Klang, a pioneering multimedia agency acclaimed for their unique approach to integrating sound scenography with immersive audio design. The company has been working in immersive audio for more than 15 years, and as company founder Ramon De Marco explains, the Qatar project was unique on many levels.
“The museum itself is a very large and unusual space, with each of the galleries having its own architectural and acoustical character,” he offers.
“Each gallery has its own theme, with different video presentations and of course, different audio accompanying that, each one created by different composers and sound designers.”
With multi-channel sound an essential aspect of each gallery’s presentation, the complexity of producing and mixing multiple channels of audio for each space was just the beginning.
“We specified 64 channels for each gallery – almost 700 speakers in all,” reports audio scenographer and designer Daniel Teige. Meyer Sound provided the loudspeakers for the project, which was mixed using Spat Revolution.
Placement of the loudspeakers was also an issue, adds De Marco.
“The architects were adamant that they did not want to see any speakers. They originally wanted speakers only on the ceiling, which was about 20 meters high. Achieving any kind of directionality would have been pretty much impossible, and eventually we were able to lobby for locating some speakers at the bottom of the walls, where we had about eight centimeters of clearance. Meyer Sound had a compact speaker, the UP4slim, that we were able to implement in that limited space.”
Creating an immersive audio experience for even a single space can be a challenge, but creating multiple environments for adjoining spaces is an order of magnitude more complex.
“A lot of sound scenography is about understanding spaces, and how sound behaves in a space,” Teige observes.
“Using Spat, we were able to create a concept of how to deal with the sound, and adapt it to the theme of each gallery.”
“We compared several different immersive software programs, and Spat was really the most intuitive, as well as the best sonic quality. It’s far more than just a surround panner – it’s a room simulation tool that offers many opportunities to choose different algorithms”
In addition to the sonic issues of overlap from adjacent galleries, De Marco points to the project’s sheer diversity as another challenge.
“We worked with each of the composers to create style guides, determine what sounds they would use, and how it would all be configured,” he observes.
“Most of the composers were more experienced in composing for TV documentaries, and not necessarily accustomed to creating for multichannel immersive environments. We worked with them not only in terms of what kind of sounds they would use, but also the material they would deliver, so we were able to adapt it to 50 or 60 channel mixes. Instead of receiving one or two WAV files, we were getting 50 or 60 files to orchestrate into a space.”
Consistency was also a factor, he adds. “We received everything from perfectly assembled multichannel sessions to some rather challenging ones. In some cases, we did a big part of the sound design ourselves, because the material we received was simply not enough.”
Composer Stratis Skandalakis has a long history of creating immersive compositions for exhibitions, including travelling globally to gather field recordings for his work. Skandalakis worked with a wide array of composers under widely varying circumstances, mixing each of the projects using Spat Revolution.
“It was pretty challenging at times,” he reports.
“Some of the composers had no previous experience with multichannel mixing, and were not accustomed to having a conversation about, for example, where to place a sound in a 360-degree space. Often I would be working with a director on my left and a sound designer on my right, dealing with a very complex mix of maybe 50 channels, explaining why it takes time to do the additional sound design.”
Skandalakis points to working with Spat as key to addressing the widely diverse range of material the team had to work with.
“It was a first for me to work with multiple formats at the same time,” he observes.
“With Spat we were able to take stereo, mono, ambisonic, 5.1, and throw them all together. It was a great tool for sound design and for special effects as well. You can make very fast movements, really play with the object-based mixing.”
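Fast, programmatic source movement of the kind Skandalakis describes is typically driven over a remote-control protocol such as OSC. As a minimal illustration (not the team's actual setup), the sketch below generates a circular pan and streams it to a source object; the `/source/<n>/azim` address pattern and the port are assumptions, so consult the Spat Revolution OSC documentation for the exact namespace of your version.

```python
import math
import time

def circular_azimuths(steps, revolutions=1.0):
    """Azimuth values in degrees (0-360) describing a circular pan."""
    return [(i * 360.0 * revolutions / steps) % 360.0 for i in range(steps)]

def pan_source(client, source_index, steps=360, interval=0.01):
    """Sweep one source object around the listener by streaming OSC messages.

    The '/source/<n>/azim' address pattern is an assumption; check the
    Spat Revolution OSC documentation for the actual namespace.
    """
    for az in circular_azimuths(steps):
        client.send_message(f"/source/{source_index}/azim", az)
        time.sleep(interval)

# Example usage (requires python-osc: pip install python-osc):
#   from pythonosc.udp_client import SimpleUDPClient
#   client = SimpleUDPClient("127.0.0.1", 8000)  # Spat's OSC input port (assumed)
#   pan_source(client, 1)
```

The pan generator is kept separate from the OSC client so the trajectory can be tested, scaled, or swapped (ellipses, figure-eights) without touching the transport code.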
Particularly with the compressed timeframe, there was not much opportunity for long, involved mixing sessions. Staging and designing the systems in advance was key to making it work, and much of the planning and acoustic modeling was begun at the Idee und Klang studios in Basel.
Working with Spat Revolution was a tremendous time-saver, says Teige, enabling the team to create the initial sound design quickly and efficiently before taking it to the actual galleries for fine tuning.
“You have an idea and you want to implement it quickly,” says Teige.
“Spat’s object-based mixing is really intuitive to work with, and that enabled us to stay in the creative space and work fast and efficiently.”
For De Marco and the Idee und Klang team, this was their first experience working with Spat Revolution.
“We compared several different immersive software programs, and Spat was really the most intuitive, as well as the best sonic quality,” he observes.
“It’s far more than just a surround panner – it’s a room simulation tool that offers many opportunities to choose different algorithms. And the people at Flux were great to work with. Gaël and the others were really supportive. Any time we ran into an issue, we would call and right away, we would have a new update. Spat is really the only software I would choose for a project like this one.”
The post Mixing Immersive with Spat Revolution at Qatar’s new National Museum appeared first on FLUX:: IMMERSIVE.
Jinan, China – October 2020
When China’s Jinan Olympic Center Donghe Stadium opened its doors to the public on September 16, the Ministry of Culture and Tourism and the Shandong Provincial People’s Government co-sponsored a remarkable symphonic presentation, treating the audience of 2,000 to the region’s first immersive orchestral performance.
The “Yellow River Flowing Into the Sea” concert featured a nearly 160-piece orchestra, as well as an 800-person chorus and guest appearances by nearly a dozen famous singers. The massive event utilized the power of Flux:: Immersive Spat Revolution to bring the audience inside the music.
FOH engineer Yong Xu mixed the show on an SSL L500 with L100 sidecar, integrated with dual Spat Revolution workstations running on dedicated 8-core i9 computers, while OB engineer Wenzhao Ma ran a DiGiCo SD7 Quantum, also using dual Spat Revolution systems. MADI connectivity via MGB devices ensured full redundancy.
“We used Spat on both the live and OB production,” reports systems engineer Wenzhao Ma.
“We divided the orchestra into 15 stereo stems and one mono stem to the FOH desk, and then flowed it into Spat using MADI. For the OB mix, I tried to put all the inputs into Spat, including functional channels like background music, ambient and audience mics. I did not use reverb tails on those elements, but I used the acoustic simulation in Spat’s room effect to leave a little bit of early reflections, to give a sense of space.”
“We divided the orchestra into 15 stereo stems and one mono stem to the FOH desk, and then flowed it into Spat using MADI.”
The object-based FOH mix consisted of more than 20 stems from the SSL console, with Spat Revolution rendering the mix to the LCR system to provide imaging and acoustic simulation. At the OB position, a mix of 21 stereo stems and two mono stems from the DiGiCo console utilized Spat to render stereo, 5.1 surround, and full binaural synthesis mixes.
For Wenzhao, the show was his first experience working with Spat, and he was more than impressed.
“The functions are powerful, and the sound is amazing,” he enthuses.
“I really like the flexibility to be able to work in any configuration, even on headphones. Immersive and spatial audio is more than just a technical ‘trick’ – it enables us to improve and enhance the sound and the performance by adding dimension. Spat gives me the flexibility to show the composers what surprises are possible with their creations.”
As Wenzhao explains, the stadium required a fair amount of preparation for the event.
“The Jinan Donghe Stadium is a normal sporting venue, not really designed for music events. We had to build a stage and create special rigs to hang the LCR loudspeaker system from a wire.”
Each column comprised six d&b audiotechnik GSL8 and two GSL12 boxes. Twelve SL-SUBs were stacked in front of the stage.
“We really wanted to hang the subs as well, to avoid too much low-frequency build-up in the front rows, but the weight restrictions prevented us from doing that,” adds Wenzhao.
Two groups of sidefills utilizing SE Audiotechnik M-F3a units were processed in mono.
“I will be presenting this recording to some of our clients, and I will be changing it to a 5.1 or 7-channel mix. With Spat, that is so easy to accomplish.”
The live recording of the event also added to the challenges, Wenzhao reports.
“The event was being recorded for Blu-Ray, and the recording engineer Mr. Jakob Handel had specified using DPA 4099 mics on the strings. We were a bit concerned about capturing a good sound on the strings, as the mics also picked up much of the sound of the brass. But then we found that using Spat to add some natural depth and dimension to the strings solved the problem perfectly; I didn’t even need to add any EQ to the strings.”
Wenzhao also gives high marks to the Flux:: Immersive team for their support.
“I think I have read almost an entire e-book from Mr. Hugo Larin and the rest of the team,” he quips. “They really helped us a lot through a challenging setup.”
“One thing that I love about Spat is the ability to easily reconfigure the channel setup,” observes Wenzhao.
“I will be presenting this recording to some of our clients, and I will be changing it to a 5.1 or 7-channel mix. With Spat, that is so easy to accomplish.”
“The show was a great success,” reports Wenzhao.
“I will most certainly be using Spat Revolution for future projects. Our team had the first application of Spat in China, and it is just the beginning. We have a studio in Beijing, and we will create a Spat setup there. I have plans to use it for many upcoming projects.”
The post China’s First Immersive Orchestral Concert Comes Alive with Spat Revolution appeared first on FLUX:: IMMERSIVE.
Milan, Italy – September 2020
To say that Sabino Cannone is a busy man is like calling the Nile River a small stream. Cannone likely has more entries in his calendar in a typical week than most of us have in several months. He moves seamlessly from engineering to mixing and mastering, sound design, consulting, and a dizzying range of other projects, clearly loving every minute of his busy schedule.
Cannone is also no stranger to new technology, including being an early adopter of immersive audio.
“I started with the first 5.1-compatible version of Pro Tools, 20 years ago,” he reveals.
“I mixed the show Pinocchio in Italy, which was the first musical in Europe to use surround. It was a big show – we did a 6.1 mix, as well as a live band – in a large theater; it was a very complex production. The show is still running, and still using the music I did 20 years ago. They did a run on Broadway three years ago, and I also mixed the soundtrack recording in 5.1 surround.”
Several other musicals in surround would follow, as well as a number of installations for visual artists in Italy, Portugal, and Spain. As immersive audio evolved to encompass an ever-widening realm of potential formats, Cannone continued to dive deeper, dabbling in soundscapes for movies, games, and exhibitions. Recently this has included a number of projects using Spat Revolution from Flux:: Immersive.
Cannone’s introduction to Spat Revolution came at the tail end of a project with long-time collaborator David Kahne, creating an immersive sonic environment for National Geographic Encounter: Ocean Odyssey, an installation in New York’s Times Square. As he explains, it was toward the end of the National Geographic project that he first encountered Spat Revolution.
“With Spat Revolution I can do a mix in 5.1 and it will work in 7.1, or in 20.2 – it can really adapt to whatever format I need. That’s not just a huge time saver for me, but it also makes it easy for me to explore new creative ideas. It’s a great tool!”
“In the National Geographic project, we had lots of different environments, with different formats and channel counts. David was the head of the project, and I worked with him in the sound development, creating multichannel soundscapes, as well as the final immersive mix in New York. We both worked in Nuendo, and I also used Pro Tools for the sound development.
“We were pretty much done with the mixes when the Flux guys asked me to check out the very first version of Spat Revolution. I started using it for the last part of the soundscape development, and I was blown away with how powerful it was. You were free to create whatever format and channel count you wanted – it was absolutely portable to any setup. I could mix multiple stereo sources to create soundscapes in any format. I created several 5.1 and 7.1 libraries, and used it on some final mixes. I decided right then that I would use it on my next project too.”
That versatility, and the ability to adapt each mix to virtually any multichannel format, was a major factor in his enthusiasm for the program.
“The Ocean Odyssey exhibition is spread over several different rooms,” he observes. “Each room and environment uses several different speaker combinations. The possibility to not have to redo the mix for each venue on installations like this one is a big value for us.”
Spat’s object-based approach was a new concept, Cannone explains.
“This was my first time working in object-based mixing, and I will admit it took some getting used to. For me, the focus of the sound is the most important factor. With object-based mixing, you have a lot more freedom in movement of sounds, but with that freedom it becomes more challenging to maintain focus. It was a challenge at first, but once I understood how to manage objects, it was amazing how much easier it was. I think it was good for me that my first experience was using it on a very complex project, with so many options, because it really pushed me to learn about how to keep things in focus.”
Maintaining focus and balance in the mix is important in any format, but even more so when working in immersive, says Cannone.
“I think before you can understand working in immersive, you have to understand balance. All the sounds have to sit nicely in a stereo field before you can spread them to a surround field. And even before stereo, the mix must work in mono. I work a lot in mono – that’s another thing I learned from David Kahne. It’s funny – if you listen to those old Motown recordings, they were all done in mono, and yet they almost sound like stereo, because of the space and the balance. It all goes back to the way the brain perceives sound.”
Needless to say, Cannone is still enthralled with working in immersive audio, and enthused to explore the potential of working with Spat Revolution.
“We have come very far from the early days of immersive,” he points out.
“Years ago, when you did a mix in 5.1 surround, it was only for 5.1. With Spat Revolution, I can do a mix in 5.1 and it will work in 7.1, or in 20.2 – it can really adapt to whatever format I need. That’s not just a huge time saver for me, but it also makes it easy for me to explore new creative ideas. It’s a great tool.”
The post Sabino Cannone Dives Deep Into Immersive with Spat Revolution appeared first on FLUX:: IMMERSIVE.
Toulouse, France – August 2020
Veteran live sound engineer Johnny Torchy has spent many years mixing shows across his native France and the rest of Europe. He’s also known as an early adopter, and isn’t afraid to try out new tools and technologies in his work.
Most recently Torchy has been driving the faders for French melodic rockers Neko Light Orchestra.
In celebration of the band’s eighth anniversary, Neko Light Orchestra booked the 800-seat Le Bascala, in the Bruguières district of Toulouse, for their “Marathon Musical Festival,” featuring eight concerts in 12 hours. As if that wasn’t enough of a challenge, the decision was made to present the show in an immersive environment.
Torchy says the goal was to recreate the location of the various musicians on stage – up to 13 members – while creating a sense of depth using advanced room acoustics simulation. To meet the challenge, Torchy chose object-based immersive mixing via Spat Revolution and Wave Field Synthesis technology.
“Part of the challenge was that WFS typically calls for a large number of loudspeakers to create reliable results,” Torchy observes.
“But in fact, for this project I’ve used only seven line array sources, all in front of the audience. The results were very impressive.”
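The intuition behind Torchy's point about speaker counts can be shown with a simplified delay-and-gain model of wave field synthesis: each loudspeaker re-emits the source signal delayed by its distance from the virtual source and attenuated with distance, so the array's combined output approximates the wavefront the real source would produce. The sketch below is only that intuition, not Torchy's actual rig; real WFS driving functions add spectral shaping and array tapering, and the positions here are made up.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def point_source_drive(speaker_positions, source_position):
    """Per-speaker (delay_seconds, gain) for a virtual point source.

    Simplified model: delay each speaker's feed by its distance from the
    virtual source divided by the speed of sound, and attenuate by 1/r.
    Summed across the array, the re-emitted signals approximate the
    curved wavefront a real source at that position would radiate.
    """
    drives = []
    for sx, sy in speaker_positions:
        r = math.hypot(sx - source_position[0], sy - source_position[1])
        delay = r / SPEED_OF_SOUND
        gain = 1.0 / max(r, 0.1)  # clamp to avoid blow-up at a speaker
        drives.append((delay, gain))
    return drives

# Example: seven frontal clusters spaced 2 m apart, virtual source
# placed 3 m behind the array (all positions hypothetical).
speakers = [(x, 0.0) for x in range(-6, 7, 2)]
drives = point_source_drive(speakers, (0.0, -3.0))
```

With only a handful of sources, the approximation degrades away from the array's axis, which is why WFS installations usually favor dense speaker spacing; Torchy's result with seven line-array sources is notable against that background.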
Torchy’s system utilized seven clusters of four L-Acoustics KARA loudspeakers facing the audience, with seven stacks of two KIVA cabinets on the stage lip for front fill. Driving the mix was a Yamaha CL5 console equipped with a Rio3224-D I/O box bringing in 60 inputs. System connectivity was via a Dante DVS network, with a MacBook Pro running FLUX:: Immersive Spat Revolution.
“Most of the sources were sent post-fader to Spat Revolution, except for a few of the more complex instruments using multiple microphones, like piano, harp, and string sections, where we sent summed stems,” Torchy explains.
“Whatever the placement of each musical object, I was able to use Spat Revolution WFS to create the image I had created in the studio. I don’t know of any other system that can do this over such a wide audience area.”
Using Spat Revolution’s advanced reverb engine enabled Torchy to perform advanced acoustic simulation, recreating perceived dimension and space in ways traditional consoles and PA systems cannot achieve.
“I was shocked,” Torchy reports.
“The sensation was simply incredible. You could literally make the source disappear. Of course there’s still sound coming from the loudspeakers, but the speakers themselves literally become transparent, with the actual wavefront of the source objects dominating. It’s no longer the same as mixing at a console with left and right channels. It’s more like mixing in a virtual room, created in Spat Revolution.”
Torchy had previously built a smaller seven-speaker setup to test the system and do preproduction, but the impact in a live setting was simply remarkable.
“With Spat and WFS, I’m able to transmit the imaging and the artistic emotion not just to 15 or 20 percent of the audience that’s seated in the middle, but to the entire audience,” he enthuses. “Whatever the placement of each musical object, I was able to use Spat Revolution WFS to create the image I had created in the studio. I don’t know of any other system that can do this over such a wide audience area.”
For Torchy, this was an experience he will not soon forget.
“It was beyond any other setup I’ve ever deployed. It will be hard to go back to anything less.”
The post FOH Engineer Johnny Torchy on Advancing the Wavefront of Neko Light Orchestra with Spat Revolution appeared first on FLUX:: IMMERSIVE.
In 2007, Flux:: began releasing its own audio software product line tailored for demanding sound engineers, and has since focused on creating intuitive and technically innovative audio software tools used by sound engineers and producers in the professional audio, broadcast, post-production, and mastering industries all over the world.