In the previous two articles in this series, True Peak limiting and Loudness processing, and Limiter Theory – Knowing your tools, we’ve been talking about the general usage of limiters, and explained limiting processing in more detail. Now we will continue with some examples of different workflows that are used with a limiter.
If you are in a hurry, skip down to the TL;DR section of this article to get a summary, as well as a video explanation of the subject.
The very final step
You have surely got this by now: a limiter should be the very last stage the signal you are processing passes through, the ideal place being at the very end of the master bus.
If you are not looking for huge loudness in your mix, one single limiter will be more than enough. Otherwise, doing all the gain reduction needed for a very loud mix with only one limiter is often problematic. This is why many mastering engineers use multiple limiters and process the peak reduction little by little.
There is a similar concept in Elixir called Stage, which is exactly like having several Elixir instances in a chain. For example, with Stage set to 4, it is like having four Elixirs chained in a row, and the gain reduction is applied uniformly across all the stages based on the threshold value.
The best practice while processing a mix with a limiter is to do it at a constant level. If your limiter automatically compensates for the volume lost when the crests are cut off, it becomes harder to tell whether the limiter is working too hard. So, while you are adjusting the limiter, always compensate with the output level so that you hear the same loudness as the input signal. Once you are satisfied with the result, you can then remove the attenuation.
For example, using the Elixir limiter, if you have set an input gain of 5 dB, lowered the threshold by 4 dB and activated the Make Up option, you would then have to lower the output gain by 9 dB to match the input level.
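As a sanity check on that arithmetic: the compensation is simply the negative of everything that raises the level, i.e. the input gain plus the make-up recovered from the lowered threshold. A small, hypothetical helper:

```python
def output_trim_db(input_gain_db: float, threshold_drop_db: float, make_up: bool) -> float:
    """Output trim needed to audition a limiter at constant loudness.

    input_gain_db: gain added before the limiter (e.g. +5 dB)
    threshold_drop_db: how far the threshold was lowered (e.g. 4 dB)
    make_up: True if the limiter adds automatic make-up gain equal to the threshold drop
    """
    added_level = input_gain_db + (threshold_drop_db if make_up else 0.0)
    return -added_level  # lower the output by this amount to match the input level

# Example from the text: +5 dB input gain, threshold lowered by 4 dB, Make Up on
print(output_trim_db(5.0, 4.0, True))   # -> -9.0 dB
```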
Loudness is better
If you want even more loudness from your mix, you may want to try the following tricks:
But be careful not to end up a loudness war casualty; remember that loudness is not a necessity. Loudness may seem fun at first glance, but it will quickly damage everything that was meticulously created at the recording and mixing stages.
Immersive audio and limiting
If you are doing immersive audio work, you may be wondering how to best apply limiting processing on content with more than two channels.
When working with classic surround formats like 5.1, 7.1, or even Dolby Atmos beds (5.1.2, 7.1.4, etc.), you will need a limiter that can handle as many channels as are present in the surround bus. The Elixir plug-in is designed for this and offers the possibility to process up to 16 channels simultaneously.
In this case, the channel link feature becomes useful. If you have spent time creating an immersive sound scene, you don't want to harm it at the mastering stage. But engaging the channel link can sometimes create over-compression. Elixir features a dynamic mode that processes transients as if there were no channel link, while the rest of the processing is applied identically to all channels.
If you are dealing with ambisonic streams, you should always have all the channels linked together, without optimization of any kind (no dynamic mode for Elixir!). This is because ambisonic channels are not mapped to any particular speaker, and applying gain reduction to only some of them can really harm the sound stage after decoding. Thanks to its 16 channels, Elixir can handle ambisonic streams up to the third order. But be careful: the decoding stage may generate audio peaks that cross the threshold.
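To make the channel-link trade-off concrete, here is a rough per-block sketch (not Elixir's algorithm) of the difference between per-channel and fully linked gain reduction:

```python
import numpy as np

def gain_reduction(block: np.ndarray, threshold: float, linked: bool) -> np.ndarray:
    """Instantaneous gain to keep samples under `threshold` (linear, e.g. 0.891 for -1 dBFS).

    block: (channels, samples) array.
    linked=True  -> one common gain for all channels (preserves localization, more pumping)
    linked=False -> independent gain per channel (less pumping, can shift the image)
    """
    peaks = np.max(np.abs(block), axis=1)              # per-channel peak of this block
    if linked:
        peaks = np.full_like(peaks, peaks.max())       # every channel follows the loudest one
    gains = np.minimum(1.0, threshold / np.maximum(peaks, 1e-12))
    return gains                                        # multiply each channel by its gain

# Toy example: 4 channels, only channel 0 exceeds the threshold
block = np.random.uniform(-0.5, 0.5, (4, 512))
block[0] *= 2.5
print(gain_reduction(block, threshold=0.891, linked=True))
print(gain_reduction(block, threshold=0.891, linked=False))
```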
When mixing in Dolby Atmos, you are dealing with an object-based mix. Here the conversation becomes complex, because there is, to this day, no easy way to master an object-oriented mix. If you really want a final limiter, you will have to mix with beds only; the main issue is that this limits you to the 7.1.2 format. Otherwise, you can try to limit the objects directly, but it will be time consuming and heavy on the processor.
TL;DR
A limiter is used last in the effect chain. If another plug-in is inserted after it, there is a big risk that the level guarantee of the limiter is compromised.
Originally, only one limiter was used on the master, but sound engineers noticed that chaining multiple limiters could lead to a more transparent result. With Elixir, this is the equivalent of using the Stage control.
Modern mastering techniques tend to favor stem processing. Multiple files are sent to mastering, each one of them corresponding to a main bus in the mixing session. Limiting can be applied directly on this bus.
For immersive content, a multi-channel limiter such as Elixir can be used on different bus sizes, from quadraphonic to 3rd-order ambisonics. For channel-based buses (quadraphonic, 5.1, 7.1.2, etc.), the channel-link control helps to find a compromise between preserving sound localization and over-processing. When dealing with ambisonic streams, the channel link should always be fully on (100%, and no auto mode for Elixir).
When dealing with Dolby Atmos, limiting can be used on buses and on objects, but there is no easy way, for now, to link parameters to simplify the workflow.
The post How to Use a Limiter, Part 3 – Advanced processing and Dolby Atmos mastering appeared first on FLUX:: Immersive.
In the previous article in this series, we started a conversation about limiting in order to get a rough idea of what limiting is and what it does. Now we will take a deep dive into the limiter's guts and get our hands dirty!
If you are in a hurry, skip down to the TL;DR section of this article to get a summary, as well as a video explanation of the subject.
A limiter is a very particular kind of device, bridging the gap between more conventional compression processing and saturation. There is a common definition stating that a limiter is a compressor with a ratio above 10:1. This means that if your input signal exceeds the threshold by 10 dB, there will only be 1 dB above the threshold at the output.
But this definition is not satisfactory for the kind of limiter we use at the end of the mastering chain, more commonly known as a True Peak Limiter.
True peak limiting means that at the very moment a sample has a value above the threshold of the limiter, it will be caught. You can think of it as a compressor with an attack and an RMS window equal to zero.
True Peak Limiters also have an infinite ratio, which is another very important point: an audio sample cannot exceed the threshold. Think of it as a kind of warranty implied by a mastering limiter. If you have set the threshold to -1 dBTP, the audio signal will never go beyond this value. This is why a limiter can also be seen as a saturation processor, since it ultimately constrains the waveform the way a clipper does. Theoretically, you could replace a limiter with a clipper to get the same kind of warranty, but the sonic result would most probably be problematic and quite undesirable.
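For illustration, here is what that brute-force "warranty" looks like as a hard clipper; a true-peak limiter gives the same guarantee, but shapes the gain over time instead of flattening the waveform:

```python
import numpy as np

def hard_clip(x: np.ndarray, threshold_db: float = -1.0) -> np.ndarray:
    """Brute-force guarantee: no sample exceeds the threshold, at the price of heavy distortion."""
    t = 10 ** (threshold_db / 20.0)       # dBFS threshold converted to linear amplitude
    return np.clip(x, -t, t)

# A -0.1 dBFS sine hard-clipped at -1 dBFS: the ceiling holds, but the flat tops add harmonics
sr = 48000
sine = 10 ** (-0.1 / 20) * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
clipped = hard_clip(sine, -1.0)
print(20 * np.log10(np.abs(clipped).max()))   # ~ -1.0 dBFS
```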
Let’s look at what happens to very simple signals when we send them through a limiter. We will first look at the spectrum analysis of a sine wave at a frequency of 440 Hz, with the FLUX:: Elixir limiter on and off. The oscillator generates the tone at a -6 dBTP level, and Elixir has its threshold set at -9 dBTP. So, we should see a perfect -3 dB of gain reduction when Elixir is on.
Maybe you are wondering if there is any difference between the two previous pictures. And yes, there is a small 3 dB difference between the two peaks, which means that we did not add any saturation in the process.
Now, to make it a bit closer to the kind of sound we will have to handle with limiters, let's modulate the amplitude of the sine wave. For this we have simply added a tremolo with a frequency of 4 Hz, going from unity gain down to -inf dB.
Now, we can see that there are some additional frequencies here. These are added by the fact that the limiter is engaged and disengaged by the amplitude modulation, and its envelope adds some harmonic distortion. What should be kept in mind is that the harder the limiter has to work on a peak, the more saturation will be added.
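If you want to reproduce this experiment, a minimal sketch of the test signal (a 440 Hz sine with a 4 Hz tremolo going from unity gain down to silence, at the -6 dB oscillator level used above) could be:

```python
import numpy as np

sr = 48000
t = np.arange(2 * sr) / sr                       # two seconds of signal

carrier = np.sin(2 * np.pi * 440 * t)            # 440 Hz sine
tremolo = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))  # 4 Hz modulation, from 0 (-inf dB) to 1 (unity)
level = 10 ** (-6 / 20)                          # oscillator level at -6 dB

test_signal = level * tremolo * carrier
# Feed `test_signal` to a limiter with its threshold at -9 dB(TP) and compare
# the spectra with the limiter on and off, as in the screenshots above.
```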
If you use them right, you can get a light (non-true-peak) version of Elixir using the FLUX:: Alchemist, Solera or Pure Compressor plug-ins: engage the infinite ratio option, set the delay to the same value as the attack, then play with the release and hold times to get the desired result.
Smoothing the clipping
To prevent and reduce the distortion added by a limiter, we use an envelope, very much like a compressor does.
Didn't we say that a limiter has an attack of 0 ms? Well, not exactly. What we want to be sure of is that no sample can exceed the threshold. Using an attack of 0 ms is a solution, but it also generates additional saturation that we want to avoid. So what can we do about it? This is where lookahead comes in handy.
Lookahead, as the name implies, allows the algorithm to look ahead and see the signal before it reaches the processing stage. If we know in advance when the signal will pass the threshold, we can open the envelope before that happens.
Remember, because it is still, unfortunately, impossible to go back in time, lookahead will add latency to the signal.
Another way to understand it is to look at a block diagram of a limiter.
There is a detection circuit that tells when the signal passes the threshold. In a traditional compressor, this moment triggers the envelope applied by the processing block, so in this regard the processing is always a bit late. The lookahead is then simply a delay placed at the input of the processing stage, which lets the detection run ahead of the audio.
This attack time allows for a softer clipping of the signal. It is not often exposed as a parameter on the user interface, but almost all modern true-peak limiters have it hidden under the hood. In Elixir, the attack time also depends on the input signal, to achieve a more musical result.
The release time is more straightforward to understand than the attack time, as it is the same thing as in a compressor. The release time is the time for a limiter to completely stop processing the signal once the signal goes back below the threshold. It has a strong impact on the quality of a limiter.
As for the attack time, the release in Elixir is dependent on the input signal.
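Putting the pieces together (a detection path, a lookahead delay on the audio path, and an attack/release envelope), a strongly simplified mono sketch could look like the following. This is only an illustration of the structure described above, not Elixir's actual algorithm:

```python
import numpy as np

def lookahead_limiter(x, threshold=0.891, lookahead=64, release=4800):
    """Very simplified brickwall limiter: mono, attack spread over the lookahead,
    exponential release. `threshold` is linear (0.891 ~= -1 dBFS), times are in samples."""
    delayed = np.concatenate([np.zeros(lookahead), x])   # audio path: pure delay (adds latency)
    gain = np.ones(len(delayed))
    env = 1.0
    rel = np.exp(-1.0 / release)
    for n, s in enumerate(np.abs(x)):                    # detection path: sees the audio path's future
        target = min(1.0, threshold / max(s, 1e-12))     # gain needed so this sample stays under threshold
        if target < env:
            # ramp the gain down over the lookahead so it is fully down when the peak arrives
            ramp = np.linspace(env, target, lookahead, endpoint=False)
            gain[n:n + lookahead] = np.minimum(gain[n:n + lookahead], ramp)
            env = target
        else:
            env = 1.0 - (1.0 - env) * rel                # release back towards unity gain
        gain[n + lookahead] = min(gain[n + lookahead], env)
    return delayed * gain

# y = lookahead_limiter(test_signal)   # peaks of y never exceed ~ -1 dBFS (sample peaks, not true peaks)
```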
Is True-Peak really True-Peak?
A True-Peak limiter will always guarantee that you never exceed its threshold. At least, as long as you never do any kind of sample rate conversion after the processing!
Remember, in a digital audio workstation (DAW), we work with a digital representation of sound: the audio signal has been sampled at a certain sample rate (44.1 kHz, 48 kHz, 96 kHz, 192 kHz, etc.). So it is possible that the original signal had, between two samples, a higher value than either of them. After a resampling, this value may reappear and end up above the threshold of the limiter. This phenomenon is known as an intersample peak.
To prevent this effect, many limiters use oversampling, often hidden under the name intersample peak detection. Oversampling increases the resolution of the limiter and prevents intersample peaks from passing through. In Elixir, the oversampling only happens in the detection algorithm, while always staying in sync with the processing algorithm.
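A quick way to see intersample peaks for yourself is to measure the peak of an oversampled copy of a signal whose samples sit exactly at the ceiling; a sketch using scipy for the resampling:

```python
import numpy as np
from scipy.signal import resample_poly

def sample_peak_db(x):
    return 20 * np.log10(np.abs(x).max())

def true_peak_db(x, oversample=4):
    """Approximate the true-peak level by measuring an oversampled copy of the signal."""
    return sample_peak_db(resample_poly(x, oversample, 1))

# A clipped high-frequency sine: every sample stays at or below 0 dBFS...
sr = 44100
t = np.arange(sr) / sr
x = np.clip(1.3 * np.sin(2 * np.pi * 5512.5 * t), -1.0, 1.0)

print(sample_peak_db(x))   # 0.0 dBFS on the samples themselves
print(true_peak_db(x))     # ...but the reconstructed waveform typically overshoots above 0 dBTP
```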
TL;DR
A limiter is a dynamics processor. It bridges the gap between compression and saturation. A mastering-grade limiter is characterized by an infinite ratio and true-peak detection. This guarantees that a signal will never exceed the threshold of the limiter.
Because a limiter reacts very strongly to the input signal, it can generate distortion. To prevent this as much as possible, plug-in designers use a complex envelope strategy involving a lookahead combined with an attack time, and often give the user a way to control the release time.
Also, oversampling is often used in limiters to prevent intersample peaks from passing through.
Next article in this series
Part 3 – Advanced processing and Dolby Atmos mastering
The post How to Use a Limiter, Part 2 – Limiter Theory – Knowing your tools appeared first on FLUX:: Immersive.
Limiter processing is one of the hot topics on the internet about sound processing. Its close relation with mastering and loudness leveling makes it an unmissable tool for sound and music production.
In this first article, in a series of three, we will have a very basic look at limiters to help the less experienced to sort out what’s going on. In the next article we will go much deeper, so stay tuned!
If you are in a hurry, skip down to the TL;DR section of this article to get a summary, as well as a video explanation of the subject.
What is a True-Peak limiter, and are there different types of limiters?
The most common definition of a limiter is to consider it a compressor with a ratio above 10:1. Throughout this series of articles, we will assume that what we call a limiter is a true-peak limiter, which has more constraints than the type of limiter just mentioned.
The limiters we are interested in here, the true-peak limiters, are dynamics tools specifically designed to reduce audio crests, very much like a safety guard preventing any clipping at the digital-to-analog stage. Thanks to this peak reduction, it is possible to use such a limiter to increase the loudness of the input signal.
Why the need for limiting?
The very last process of music production is to set the overall level, or loudness, of a mix. Limiting is the only safe way to amplify the loudness of a mix, but it comes at a cost.
A limiter will cut the crests of the signal to create some headroom, which allows amplifying the rest of the signal. The cost of limiting is a loss of dynamics and additional distortion.
The true-peak level is the actual level of the samples, or of the waveform if you prefer. Loudness is closer to our sound perception and smooths out quick variations, because they do not matter that much in how loud we perceive a sound to be.
Using a limiter also provides the guarantee to never clip the digital-to-analogue stage. This is a very important safeguard and explains why there is always a limiter at the end of the chain, even if it doesn’t do much.
Limiting will always come at the cost of more saturation added to the signal, but in a much more transparent way than just cranking the output gain and clipping the converters. Clipping the converter is considered a technical error (even if some popular masters do clip at the conversion stage).
When is there a need for limiting?
If you want to make a mix louder without pushing the digital-to-analogue converter way beyond the red, you will need a limiter. The limiter will reduce the peaks and provide you with headroom to raise the overall gain without clipping the output stage.
Limiting is also often needed to conform a mix to certain norms. For example, most music streaming platforms will refuse a mix with a true-peak level higher than -1 dBTP.
Is limiting mandatory?
Limiting is mandatory in the sense that a mix should never exceed 0 dBFS. So using a limiter with a threshold at 0 dBFS will always prevent that from happening. Most of the time, the different target platforms (streaming, broadcast, etc.) ask for mixes that do not exceed -1 dBFS.
Increasing the loudness of a mix is never mandatory. Maybe we will create a debate here, but loudness in music production is very much an aesthetic decision. So, to continue with the controversial topics: a louder mix does not sound better than a quiet one. Actually, it sounds less dynamic and more distorted. Also, there are no norms in music distribution; each and every platform has its own way to handle the loudness of submitted audio files.
For example, we often encounter the idea that a good deliverable should have a loudness of -14 LUFS-I with a true peak never exceeding -1 dBTP. This value comes mainly from the Spotify guidelines. But it is not entirely accurate, as Spotify offers different loudness targets to its customers: a loud mode (-11 LUFS-I), a normal mode (-14 LUFS-I) and a quiet mode (-19 LUFS-I). Apple has recently moved from -16 LUFS-I to -18 LUFS-I, and before 2022 YouTube normalized loudness at -12 LUFS-I (now -14 LUFS-I). So which one should we choose? The common consensus is around -14 LUFS-I because it covers the biggest user base.
Then, what happens with a file that is above the target? If a file is submitted with a loudness above the platform's recommendation, its volume will simply be dropped by the number of dB necessary to match the recommendation. This attenuation is transparent to what you have mixed and mastered.
If a file is lower than the target, most streaming services do nothing about it; the file will simply play back quieter than the others. YouTube used to be an exception before 2022, and Spotify in loud mode will limit the content to match its -11 LUFS-I target.
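As a loose sketch of the behaviour described above (loud files are turned down by the exact difference, quiet files are usually left alone unless the platform also boosts and limits), with illustrative target values:

```python
def playback_gain_db(measured_lufs: float, target_lufs: float, boosts_quiet_files: bool = False) -> float:
    """Gain a streaming platform would apply at playback, per the behaviour described above."""
    diff = target_lufs - measured_lufs
    if diff < 0:
        return diff          # louder than target: simply turned down by the difference
    return diff if boosts_quiet_files else 0.0   # quieter than target: usually left as-is

print(playback_gain_db(-9.5, -14.0))    # -4.5 dB of attenuation
print(playback_gain_db(-18.0, -14.0))   # 0.0 dB, the file just plays back quieter than the rest
print(playback_gain_db(-18.0, -11.0, boosts_quiet_files=True))   # +7.0 dB (e.g. a "loud" mode, applied with limiting)
```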
So how do we handle this mess? It seems there are three possible solutions.
Actually, the last point is the one defended by the author. Loudness, and more importantly dynamic range, is not only a technical matter, it is also an aesthetic choice. Some genres of music have built their aesthetic on a very compressed and saturated sound, while others want all the available dynamics.
As a general guide, we will simply assume that it is a best practice to never exceed -1 dBTP. Also, it is preferred to have the loudest peak of a program, or a song, to always hit this -1 dBTP target.
TL;DR
Mastering limiters are designed to reduce the crests of a mix and allow its loudness to be increased. Their true-peak detection ensures that no sample ever crosses the threshold.
Limiting should always be used to prevent a mix from clipping the digital-to-analogue converter. However, due to the many different loudness targets found across streaming services, it is difficult, if not impossible, to give a "go-to" recommendation for the loudness of a track. It seems to be more an aesthetic choice than a technical one, at least in the music industry.
dB? dBFS? LUFS? What is it all about?
There are quite a few acronyms and concepts to explain around sound pressure level and how it is measured. Because sound is a mechanical wave, the primary way to measure the sound pressure level is to measure how the pressure evolves in a space.
First, the relation between sound pressure and how we experience sound level is not linear. When the sound pressure is doubled, we do not perceive the sound as twice as loud; roughly speaking, the level has to rise by about 10 dB, which corresponds to ten times more acoustic power, before we perceive a sound as twice as loud. This is why we express the sound pressure level in decibels, a logarithmic scale that is much closer to our perception: doubling the acoustic power gives a gain of +3 dB, doubling the sound pressure gives +6 dB, and multiplying the power by ten gives +10 dB.
Depending on the field of interest, there are many different units built around the decibel scale. The one used in digital audio is the dBFS, or decibel relative to full scale. In the digital domain, sound is represented by samples whose amplitude can take absolute values between 0 and 1. The number of actual values a sample can take between 0 and 1 is defined by the quantization (16 bits, 24 bits, etc.). But this is a linear scale, and thus it does not correspond to our perception of sound; the dBFS scale solves this problem. A linear value of 1 corresponds to 0 dBFS, and a value of 0 corresponds to -inf dBFS (in practice the quantization floor sits around -96 dB at 16 bits, -144 dB at 24 bits, etc.).
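The conversion between the linear sample scale and dBFS is a simple base-10 logarithm (20·log10 for amplitudes). A couple of helper functions make the correspondence explicit:

```python
import math

def linear_to_dbfs(a: float) -> float:
    """Amplitude in [0, 1] to dBFS; 1.0 -> 0 dBFS, 0.0 -> -inf."""
    return 20 * math.log10(a) if a > 0 else float("-inf")

def dbfs_to_linear(db: float) -> float:
    return 10 ** (db / 20)

print(linear_to_dbfs(1.0))    # 0.0
print(linear_to_dbfs(0.5))    # about -6.0 dBFS
print(dbfs_to_linear(-1.0))   # about 0.891
```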
Now that we have a scale that behaves closely to our perception, we need a way to measure loudness. Here, a peak measurement (the value of each actual sample in digital sound) is not a good candidate, because fast variations in volume do not matter that much in how we perceive loudness. Frequency also has a strong impact on how loud a sound seems. This is why engineers have proposed the loudness unit.
There are different time windows for the loudness measurement: momentary, short-term and integrated, which correspond to the following citation from EBU Tech 3341:
1. The momentary loudness uses a sliding rectangular time window of length 0.4 s. The measurement is not gated.
2. The short-term loudness uses a sliding rectangular time window of length 3 s. The measurement is not gated. The update rate for ‘live meters’ shall be at least 10 Hz.
3. The integrated loudness uses gating as described in ITU-R BS.1770-4. The update rate for ‘live meters’ shall be at least 1 Hz.
In the music industry, it is the integrated value that is used as a reference for streaming services.
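If you want to check these values programmatically, third-party libraries implement the ITU-R BS.1770 measurement. Assuming the pyloudnorm and soundfile packages are installed, an integrated-loudness measurement looks roughly like this (the file name is a placeholder):

```python
import soundfile as sf        # assumed available, reads audio files into numpy arrays
import pyloudnorm as pyln     # assumed available, BS.1770-based loudness measurement

data, rate = sf.read("master.wav")          # hypothetical file path
meter = pyln.Meter(rate)                    # K-weighting and gating per BS.1770
loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")
```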
Next article in this series
Part 2 – Limiter Theory – Knowing your tools
The post How to Use a Limiter, Part 1 – True Peak limiting and Loudness processing appeared first on FLUX:: Immersive.
On November 22nd and 23rd, the FLUX:: team is happy to meet you at the JTSE show in Paris. Beyond our booth, where we will present the latest developments of the SPAT Revolution immersive audio engine and our upcoming FLUX:: Analyzer update, daily demonstrations will be available in a studio configuration set up especially for the occasion.
Daily short presentations:
Tuesday 11:30, 14:30, 16:30, 18:30
Wednesday 10:30, 13:30, 15:30
The 26th edition of JTSE will be held on November 22nd (from 9:30 a.m. to 8 p.m.) and November 23rd (from 9:30 a.m. to 6 p.m.) at the Docks de Paris, Porte de la Chapelle, with a vast range of exhibitors across the various fields of the industry: rigging, lighting, audio, scenic, stands, seating, etc.
Your free visitor pass is available here
The post FLUX:: Immersive vous attend pour la 26ème édition du JTSE à Paris appeared first on FLUX:: Immersive.
On November 22nd and 23rd, the FLUX:: Immersive team is happy to meet with you at the JTSE show in Paris. Beyond our booth, where we will present the latest developments of the SPAT Revolution Immersive Audio Engine and our upcoming FLUX:: Analyzer update, daily demonstrations will be available in a nearby studio set up specifically for the occasion.
Daily short presentations:
Tuesday 11:30, 14:30, 16:30, 18:30
Wednesday 10:30, 13:30, 15:30
The 26th edition of JTSE will be held November 22nd (from 9.30 a.m. to 8 p.m.) and November 23rd (from 9.30 a.m. to 6 p.m.) at the Docks of Paris Porte de la Chapelle (Subway “Porte de la Chapelle”) with a vast range of exhibitors in the various fields of the industry including; rigging, lighting, audio, scenic, fabrics, stands, seats, and more.
Free visitor pass available here
The post FLUX:: Immersive welcomes you all to the 26th Edition of JTSE in Paris appeared first on FLUX:: Immersive.
This follows a generic article on the delay and compensation mechanism in SPAT Revolution.
As mentioned in different articles, when using audio devices to route to/from SPAT Revolution, Pro Tools handles the needed delay compensation based on your routing and plug-in usage. That being said, when you are extracting the audio from the SPAT plug-in Local Audio Path (LAP), the delay compensation is not taken into consideration with regard to all the other objects you have in the session. In the case of Pro Tools, the delay compensation happens further down the line, between these tracks and the final bus they feed.
The problem is simple: you are inserting a SPAT Send plug-in in LAP mode on a strip after other plug-ins that may introduce latency. To address this, the SPAT Revolution 22.9 update includes a solution that lets the user report the latency of the strip (in the plug-in) and then have SPAT Revolution do the required latency compensation.
Time for a little operation – Delay compensation mechanism in SPAT Revolution!
In the latest plug-in interface, a new Input Delay field is available in the SPAT Send plug-in. It provides the ability to report the latency of this audio object in samples. Once declared, when these audio sources are connected in SPAT Revolution, a delay compensation mechanism applies the delay needed to each object input in SPAT Revolution, ensuring they are aligned as they would be within a typical DAW routing.
Reporting Input Delay in the SPAT Send plugin; SPAT Revolution delay-compensates all other objects
Although this operation is manual, it simply means reporting the delay information of the track itself in the field dedicated to it in the plug-in. Below are three (3) SPAT object aux tracks, one of which has a delay caused by a plug-in introducing latency. You can simply open the plug-in user interface and report the track delay.
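The compensation itself is just alignment to the slowest path: each object is delayed by the difference between the largest reported latency and its own. A rough illustration (not SPAT Revolution's internal code; the object names are made up):

```python
def compensation_delays(reported_latency_samples: dict[str, int]) -> dict[str, int]:
    """Delay (in samples) to add to each object so that all inputs end up time-aligned."""
    longest = max(reported_latency_samples.values())
    return {name: longest - latency for name, latency in reported_latency_samples.items()}

# Three object aux tracks; only "Object 2" has a latency-inducing plug-in (e.g. 512 samples)
print(compensation_delays({"Object 1": 0, "Object 2": 512, "Object 3": 0}))
# -> {'Object 1': 512, 'Object 2': 0, 'Object 3': 512}
```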
Need more information on Pro Tools integration? It can be found in the Pro Tools section of the SPAT Revolution User Guide.
The post Reporting latency for delay compensation in SPAT Revolution appeared first on FLUX:: Immersive.
In a previous tech article, we covered the basics of integrating SPAT Revolution into the Pro Tools environment. At the base was the use of the SPAT Revolution Send plug-in and the Local Audio Path (LAP) mode to route your audio to your SPAT Revolution rendering engine.
As the session and routing requirements grow, a few things must be kept in mind. Keeping things organized in the session and adopting an object-based workflow becomes ideal.
Many advantages come from adopting an object track-based workflow and taking advantage of routing folders (Pro Tools 2020.3 and above) and the newest routing possibilities if you have the more recent Pro Tools 2022.9 (new AUX I/O). An article on Pro Tools 2022.9 AUX I/O covers that part of the subject.
Below are two examples of routing, the first one using the Local Audio Path mechanism, the second using actual audio devices.
The newest templates in the Pro Tools section of the SPAT Revolution User Guide adopt exactly this workflow, using either external audio bridging solutions or the Local Audio Path (LAP) feature of the SPAT Send plug-in.
Pro Tools routing folders as your SPAT objects, routed to your bridge to SPAT
As mentioned above, a routing folder track is, at its base, an aux track, with the advantage of being a folder that keeps things organized. When creating a routing folder, Pro Tools does a few things for you: it creates the aux patched to a specific bus for input. In the example above, this is the « SPAT Mono Object 1 » audio bus, to which you can route any audio track. The key takeaway is that, once done, the only thing left is to route this audio object routing folder to the desired audio output, or to use the SPAT Send plug-in with Local Audio Path on it.
Moving your audio tracks to the routing folders
If you have audio tracks that you want to send to a routing folder, you can simply right-click on that actual track and choose the Move to function.
You can then move to any previously created routing folders or create one on the fly.
Moving an audio track to an existing routing folder or creating a new one
Rapidly creating a routing folder from 4 audio tracks in Pro Tools
This is of course possible with a group of audio tracks as well, which you may, for example, want to "declare" as a single audio object to SPAT Revolution. The last step is to assign it to the SPAT Send plug-in, and either activate the Local Audio Path (LAP) or patch the output to the desired audio bridge output.
For example, in the animation above, you have four audio tracks to declare as an object. Select them, right-click and choose "Move to new folder" (or simply press Shift+Command+Option+N on a Mac or Shift+Control+Alt+N on Windows) with routing. Give the folder a name (such as your object name) and you are done.
Instantiating the SPAT Send plugin on the routing folder
The last step is to insert the SPAT Send plug-in on the actual routing folder(s). This first exposes all the SPAT Revolution source parameters to Pro Tools for writing your automation. If you want to use the Local Audio Path (LAP) option rather than actual audio I/O devices to route to SPAT Revolution, you can simply enable the mode in the plug-in interface. You can also hold the CTRL key while enabling it to apply it to all instances of the plug-in in your session.
Inserting the SPAT Send plugin and optionally enabling the Local Audio Path (LAP) mode
Routing using Local Audio Path (LAP)
If you plan to use the Local Audio Path (LAP) function to route the audio to SPAT Revolution, you have to make sure to follow the proper routing of those tracks to maintain good sync.
This routing consists of having all objects routed to a SPATSync bus (namely a dummy bus). This is explained in the Pro Tools integration to SPAT Revolution article and throughout the Pro Tools section of the SPAT Revolution User Guide.
Ideally, you use the provided templates as a starting point to understand this important routing well. It involves routing all your SPAT objects to a single SPATSync bus and making sure the SPAT Revolution render return track(s) in Pro Tools, using the SPAT Return plug-in, are patched to this bus as an input. With that, good sync is maintained.
The post Using Pro Tools routing folders with SPAT Revolution appeared first on FLUX:: Immersive.
This article examines the basics of using SPAT Revolution with a Pro Tools workstation, and has been updated to reflect the most recent features of Pro Tools.
Ultimately what we have going is either:
In each case, we are looking at using SPAT Revolution with a Pro Tools environment to render, highlighting the need for audio routing and good practice to maintain synchronization.
Various workflows are possible:
In some workflows, we need to make sure that the latency (produced by some processing plug-ins) on the audio tracks/objects sending to SPAT Revolution is properly compensated. While latency is well handled by Pro Tools when routing to actual audio devices, some use cases using the FLUX:: Local Audio Path (LAP) mechanism may require specific attention to delay compensation. The generic article on the Delay and Compensation mechanism in SPAT Revolution covers the basics, while the specific article Reporting delay for compensation in Pro Tools goes into the details.
Simplicity with the SPAT plugin and Local Audio Path (LAP)
Single computer workstation using SPAT Send & LAP
Integrating Pro Tools with SPAT Revolution can be as simple as adding the SPAT Send plug-in to multiple audio tracks, which become your SPAT source objects. From this point, a multichannel track or aux input is added, where the SPAT Return plug-in handles returning your SPAT Revolution render to Pro Tools. This render can be a 7.1.2 bed, a larger HOA 3rd-order scene, or a binaural mix from SPAT Revolution. You can also consider doing simultaneous renders with the "multi-room" environment.
Single-format basic example of routing when using Local Audio Path
Multiple simultaneous renders using Local Audio Path
One of the key takeaways from the above pictures is that the audio track outputs (where the SPAT plug-in is extracting the actual audio) need to be routed to a common bus. The same applies to the return render(s): they must all have that common bus as an input. More on this below.
The Local Audio Path (LAP) feature of the plug-in provides a simple solution for the audio integration between the SPAT Revolution and Pro Tools applications, as well as for the metadata of the SPAT object, passing all parameters to Pro Tools for automation. Once you enable the audio path, the Pro Tools source simply appears in SPAT Revolution and is ready to be connected in the SPAT Revolution environment.
The FLUX:: Local AudioPipe (LAP) technology, residing inside the SPAT plugin suite, is used to extract and declare your audio elements (audio tracks/buses becoming objects) to/from the object-based mixing/rendering SPAT Revolution application.
This extraction, happening from the plugin insert, can happen:
While someone may be tempted to simply extract on the audio track, this comes with some caveats:
One good way to deal with the pre-fader reality is the use of aux tracks to keep the object-based workflow organized and routed in Pro Tools. While it means an extra layer in a DAW session, the use of aux tracks does the trick to counter the problem of pre-fader inserts. They essentially become the audio objects that are extracted and declared to SPAT Revolution. This also means that an audio object may not only be a single audio track element, but multiple tracks that may play at the same or different times, a sum of multiple audio elements.
Adopting an object-oriented workflow and using aux / routing folders to do so
Thanks to routing folder tracks in Pro Tools (2020.3 and above), which are ultimately aux tracks, we can use a nesting system to keep our large sessions organized. All our audio elements remain under an object folder, and we use the auto-routing capabilities of these folders to reduce the patching/management steps. The following article, Using Pro Tools routing folders with SPAT Revolution, dives into this conversation.
To ensure proper synchronization for this integration, you have to follow the proper routing of those tracks. We often refer to this common bus as the SPATSync bus, something that we referred to in the past as the "Dummy bus."
It involves routing all your SPAT Objects to a single SPATSync bus and making sure the SPAT Revolution return track(s) in Pro Tools, hosting the SPAT Return plugin, are patched to this bus as an input.
This is explained in the Pro Tools section of the SPAT Revolution User Guide. Ideally, you use the provided templates as a starting point to understand this important routing well. With that, good sync is maintained.
SPATSync bus
Audio bridging solution
Single computer inline workflow with audio bridge solution
A second way to deal with single-computer integration is to rely on an audio bridge device between the applications. This is most commonly seen in macOS environments. If you use this audio bridge device to send to SPAT Revolution and a different audio output device in SPAT Revolution (for monitoring, delivering to an audio system or sending the render into another application), this solution can work.
Thanks to the recently added support for separate input and output audio devices in SPAT Revolution, you can now use audio bridging for input while using your audio interface for SPAT Revolution output. This highlights one challenge if you need to return the render to Pro Tools for bouncing/monitoring: when the Pro Tools playback engine is set to this audio bridge device, it provides an I/O solution to SPAT Revolution, but you have no way to actually send your monitoring output to an audio device. To the rescue here is the use of aggregate devices, available in macOS or with some specific drivers on Windows.
From the example above, you can route the SPAT objects on channels 1-64 of your aggregate device (the BlackHole 64-channel device, for example), return from SPAT Revolution on some of these 64 channels, and still use channels 65-80 to route your monitoring buses (in the above case to a Merging Technologies AES67 driver).
Single computer insert workflow with audio bridge or AUX I/O routing
Thanks to the new AUX I/O system in Pro Tools 2022.9, this is simplified and new routing options are possible. Use cases include working with an HDX system as a playback engine, and overall simplifying the handling of many audio devices. The article Pro Tools 2022.9 AUX I/O covers that part of the subject.
The last part of this integration is to configure the OSC connection in SPAT Revolution. This is already pre-configured in the SPAT plugin suite when LAP is not enabled. More on the subject in the OSC Connectivity section below.
Dual computer workstation
Dual computer inline workflow with AoIP routing
Recommended for larger studio session projects (with heavy plug-in processing or with video tracks) or for real-time/live production, the dual-computer scenario simply uses one computer for Pro Tools playback while a second, dedicated computer handles the SPAT Revolution real-time rendering. The routing mechanism in this case relies on a high-channel-count audio interface, such as what is possible with AoIP (AVB, AES67, Dante), MADI, SoundGrid and the like.
With such audio routing, you can also adopt an insert workflow and return the renders from SPAT to Pro Tools for bouncing and monitoring distribution.
Dual computer insert workflow with AoIP routing
OSC Connectivity
SPAT Send and SPAT OSC connection using a network
While the audio is handled by a high-channel-count audio interface, the use of the SPAT plugin suite still applies. This also means that you can move from a single-computer to a dual-computer workflow easily. By default, when the plug-ins are not using the FLUX:: LAP feature, they send/receive the automation over the network using OSC commands to SPAT Revolution.
By default, they use the local loopback address 127.0.0.1, which is what we need for a single-computer configuration and when using an audio bridging solution. In the case of dual computers, you simply have the bidirectional messages transit over the network interfaces of a common network between both computers. In some use cases, when using a virtual sound card (such as DVS) on the network, we recommend having two network interfaces: one for the audio, the second for the control.
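For illustration only, here is what sending an OSC message to the SPAT Revolution machine can look like at the transport level, using the third-party python-osc package; the port and the address pattern below are placeholders, not the documented SPAT Revolution OSC namespace:

```python
from pythonosc.udp_client import SimpleUDPClient   # third-party python-osc package

# 127.0.0.1 for a single-computer setup; use the SPAT computer's IP on a dual-computer network
client = SimpleUDPClient("127.0.0.1", 8000)         # port is a placeholder, match SPAT's OSC input port

# Address pattern and value are purely illustrative, not SPAT Revolution's actual namespace
client.send_message("/source/1/azimuth", 45.0)
```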
Routing audio between applications and tools, such as with external renderers and workstations, can be challenging to users depending on their configuration. Thanks to Pro Tools 2022.9, new I/O routing options are possible with the AUX I/O feature.
macOS users have come to rely on Core Audio aggregate devices to solve this challenge. Creating an aggregate device allows them to combine multiple audio interfaces (physical or virtual) and use them as a single entity. That said, what about the scenario of HDX-based playback hardware that can't be aggregated and that needs to be used as the Pro Tools playback engine? What about which audio bridge solution to use reliably between applications? What about preventing the digital audio loops that are prone to happen on patch errors when using an audio bridge?
The new feature proposed by Avid Technology addresses those questions.
New Pro Tools Audio Bridge
At the heart of AUX I/O is the Pro Tools Audio Bridge. These new virtual Core Audio devices are now installed with Pro Tools 2022.9. Together with AUX I/O, they offer flexible and simultaneous audio routing in and out of Pro Tools.
Audio bridge configurations in 2, 6, 16, 32 and 64 channel versions are available for flexibility and separation when routing to various applications simultaneously on the same computer.
You can use these new devices in the AUX I/O setup, in Audio MIDI Setup, in the Sound preferences, and as inputs and outputs for other applications such as the SPAT Revolution rendering engine.
Routing to SPAT Revolution with AUX I/O
The steps to get this done are pretty simple: access the AUX I/O section of the I/O Setup, select the In device, select the Out device, and you are set. These new inputs will appear in the Input section, and the same goes for the outputs. You can take the time to give them a unique name, such as "Bridge to/from SPAT Revolution".
Creating a bridge for inputs and outputs to SPAT Revolution
You simply need to choose these virtual audio interfaces in the SPAT Revolution Hardware I/O setup to complete this configuration.
Pro Tools Audio Bridge 64 as input, and 32 as the output of SPAT Revolution Hardware I/O
Pro Tools versions and AUX I/O capabilities
Pro Tools Ultimate has unlimited I/O capabilities, whereas Pro Tools Studio gets unlimited inputs and only 32 outputs (which would then be the maximum number of objects you can send to SPAT Revolution to render). Be reminded that the Pro Tools-installed virtual audio bridge devices carry a maximum of 64 channels; other solutions need to be used for a higher channel count.
Structure for your Pro Tools session and your audio objects
Last but not least is your Pro Tools session and how you take advantage of the newly created audio routes in Pro Tools. Thanks to routing folders, introduced in 2020.3, there is a very nice way to keep the object-based workflow organized and routed in Pro Tools. Session templates using this mechanism are available in the Pro Tools section of the SPAT Revolution User Guide. Furthermore, the following article, Using Pro Tools routing folders with SPAT Revolution, dives into this conversation.
Pro Tools routing folders as your SPAT objects, routed to your bridge to SPAT
The post Pro Tools 2022.9 AUX I/O routing with SPAT Revolution appeared first on FLUX:: Immersive.
La Fabuleuse Histoire d'un Royaume, the unique historical theatrical extravaganza on the creation and evolution of the Saguenay Lac-St-Jean region in Quebec, Canada, celebrates its 35 years of the show with an immersive upgrade of its sound system.
The show, seen by more than 1,300,000 spectators, with no fewer than 150 volunteer actors taking the stage in up to 12 roles each, is a spectacular production featuring fire, water, helicopters, cannon shots, horses and other animals in an unprecedented sound and visual performance. It takes place at the Théâtre du Palais Municipal in La Baie, Saguenay, the largest stage in Quebec, with a capacity of up to 2,300 spectators.
Deployed by LSM Ambiocréateurs, one of the most influential creators of scenic atmosphere in eastern Quebec, a new immersive live sound system driven by the SPAT Revolution spatial audio mixing software, using the advanced Wave Field Synthesis (WFS) reproduction technique, was installed. The installation empowers the sound designers with the tools required to create and mix an outstanding, world-class immersive experience, bringing this reputed, legendary theatrical show to the next level.
Théâtre du Palais Municipal in La Baie, Saguenay
In 2019, the entire production team, headed by Jimmy Doucet, the new show director, wondered what could bring La Fabuleuse to another level to celebrate its 35th anniversary. The complete renewal of the soundtrack and of how it was diffused seemed obvious. From then on, a monumental effort of analysis, meetings, collaboration, research, and then recordings, editing in the studio and in the venue, assembly, calibration and programming was put in place over these three years to achieve the result we know today.
“This new immersive system gives an incredible depth to our show. The music, sounds, and lyrics now come from where the action actually is taking place on our very large stage, which gives a much better understanding of our many tableaux. In several parts of the show, the system gives the spectators the impression of being transported on the stage with the actors, and with that becoming a part of the action.”, said Jimmy Doucet.
Serge Lachance, CEO of LSM Ambiocréateurs, Sound Designer, Mixing Engineer, and Operator of the show, took the La Fabuleuse project under his wing: first as a technical equipment supplier (sound, lighting, video, rigging) for 14 years, and as a sound designer and operator for the show during the last 7 years.
The show runs through a Midas Pro 6 console with an extended loudspeaker configuration from CODA Audio installed in the theater. Previously, the show ran on a standard 5.1 system controlled by the surround panning on the very same Midas console, with only 24 channels of playback. The upgrade and the installation of the new immersive sound system, now with a new soundtrack and over 200 audio tracks, introduced significant challenges due to really short deadlines and the supply challenges of the last year (due to covid); still, the installation delivered a surprisingly pleasant end result.
“I am very happy with the result, and my biggest surprise was, and still is, the stability of the system overall”, said Serge Lachance.
“The main thing is intelligibility. La Fabuleuse is a story and you’ve got to be listening to what’s going on, and there is a lot of vocals. There is the spirit voice that delivers the story all the time, and then you’ve got the actors that are coming into play. There are so many actors and voices, so as you can imagine, with the limited resolution of a stereo or L-C-R system, you can rapidly lose that intelligibility when you cram all this content together. If you make the choice of putting in more music elements to drive your audience, you stand the chance of losing intelligibility too. Sometimes there’s a compromise there, because you need the music to drive the dance and drive the audience. So, intelligibility and being able to get that localization, that was definitely the most critical factor”, Lachance added.
La Fabuleuse FOH
The tracks for the vocals, music and sound effects are handled in Logic Pro on Mac Mini M1s. Two Mac Studios (for redundancy) run the two SPAT Revolution immersive processors for the sound mixing environment and the WFS (wave field synthesis) reproduction, with the show control running on QLab with MIDI and OSC commands. Having the ability to use the sound field creatively for localization of voices and other sound objects in the space has created a whole new sound experience, vastly improving the entire performance.
“There are also scenes of the show where there’s a war and people are flying by and they’re falling from the ceiling and there’s pyro, and there’s a chopper flying over, and now the audio is following all of that. So, it’s just not the same show on a 5.1 with all its limitations. The availability of the WFS reproduction technology was key to me in this project. In addition, the technical support, the services and solutions provided by Hugo Larin, director at FLUX:: Immersive, were crucial for my decisions.”, said Lachance.
« The WFS reproduction technology in SPAT Revolution was key to me in this project »
Serge Lachance, Sound Designer
FLUX:: Immersive was contracted to provide the various services for the success of this project. Hugo Larin was deployed to support the production with this redesign, its implementation, from the use of the SPAT Revolution software engine to show control/network integration and system calibration.
“La Fabuleuse presented a great challenge but at the same time the perfect example of how an immersive deployment could work, even with this 35-meter wide stage. The goal was to provide strong localization and a sense of depth in the audio mix, and our Wave Field Synthesis (WFS) implementation in SPAT Revolution did exactly this”, said Hugo Larin.
Scene from La Fabuleuse with real live horses
“The show has now existed for 36 years, and several updates have been made since the beginning, the last one being the addition of video projection on this huge set (more than 35 meters wide). The video completely changed the experience by transporting the viewer to an immersive setting that reproduces the history of the Saguenay over 150 years. With the sound recordings dating back more than 30 years, it was impossible to match the sound experience with the visuals of this show. By respecting the texts of the author Ghislain Bouchard and the music of Dominic Laprise, we have completely redone the soundtrack of the show. With these new 200+ tracks, immersive sound diffusion became possible”, said Serge Lachance.
“With an extremely tight schedule due to supply issues, the time left to do the integration and to actually mix these over 200 tracks on 64 audio objects was quite short. That being said, I was quite impressed with the work of Serge Lachance and how he pulled a mix together quite rapidly. Now, the production has a system that they can keep building on as they develop the immersive audio mix”, Larin added.
“FLUX:: Immersive is a company that listens to users; having tried the product last summer and having made recommendations, some answers came rapidly and some optimizations were implemented rapidly. The precision of this software is impressive and, once again, the stability is there with the qualified hardware that gets recommended”, Lachance added.
Serge Lachance, LSM Ambiocréateurs
After working on the show for many years, it took Lachance some time to get into the expanded universe that opened up with the new immersive system, and in the end he even started to experience the real world around him in a completely new way as well.
“Funny enough, I had times with Serge outside of the show where we were sitting and talking and he was listening to his environment, like the birds. We were talking outside and he was being bothered by birds. Mixing this way opened his ears so much to listening to new things. He became more sensitive to his surroundings. Now, he is mixing with a better sense of the surroundings of the show, such as when there’s birds in the show. He knows how to place those because he relates more to how they are in life.”, said Larin.
“His conclusion is, once you’ve opened your brain to this, when you’re thinking about show production, you’re thinking about immersive and there is just no coming back. You don’t want to be coming back. And he’s now involved with two or three other productions and he’s thinking about WFS for reproduction techniques and about content in 360 degrees – That is a powerful way to transport an audience!”, Larin added.
La Fabuleuse Histoire d’un Royaume / The Fabulous Story of a Kingdom
Mixing, sound design: Serge Lachance
Audio, video and lighting supplier: LSM Ambiocréateurs
Immersive audio technologies and services: FLUX:: Immersive
Loudspeaker manufacturer / distributor: CODA Audio / SC Media
Audio team: Marc-André Gilbert, Félix Perron, Nathan Allard and Sam Plourde St-Pierre
Production director: Marie-Eve Rivard
Stage direction: Jimmy Doucet
Production technical directors: Isabeau Côté and Fabien Duchossoy
Music by composers Dominic and Mathieu Laprise
Sound FX by Nicolas Dallaire and Jean-François B. Sauvé
Producer and promoter of the show: Diffusion Saguenay
Audio System Detail
Remote control: TouchOSC custom control pages on hardwired iPads
Show control & playback: Redundant Logic & QLab software on two (2) Mac Minis with Dante audio
Mixing desk: Midas Pro 6 with Klark Teknik DN9650 AES50 to DANTE bridges
Immersive processing: Redundant SPAT Revolution on two (2) Mac Studio Max machines with Dante audio
Audio / Control Network:
Signal Matrixing / Breakout and Fail over management: Auvitran AVBx7 Audio ToolBox
LTC distribution / routing: CM Labs Sixty Four 64
Master system wordclock: Rosendahl Nanosyncs HD
Playback system generating LTC to video, lighting and pyrotechnics
A standalone QLab Mac mini system is handling rehearsals and serves as a completely analog / bounced show emergency system.
CODA Audio Diffusion/Loudspeaker System
Frontal co-linear system of 5 arrays
4 x APS per array, a hybrid of point source and line array
Each array is flown with an SCV-F sub-module
Main floor surround ring of 12 x G-Series Point Source speakers
Bleacher surround ring of 18 x HOPS-8, High Output Point Source Systems
Front fill line of 12 HOPS-5, High Output Point Source Systems
4 x SCP – Infra floor sub-woofers
56 channels of LINUS14D amplifiers in LINUS RACK
Photo Credits: Journal Le Quotidien (#2, #3), Marc-André Couture (#1, #4), Michel Tremblay (#5)
The post La Fabuleuse celebrates 35 years in show with a FLUX:: Immersive WFS audio upgrade appeared first on FLUX:: Immersive.
Flux:: sound and picture development was founded in the 1990s, during the early days of digital audio software workstations, collaborating with Merging Technologies in the creation of Merging's now well-renowned products.
In 2007 Flux:: started releasing its own exquisite audio software product line tailored for demanding sound engineers, and has since been focused on creating intuitive and technically innovative audio software tools, used by sound engineers and producers in the professional audio, broadcast, post-production and mastering industries all over the world.