8D/Binaural Mixing & Mastering.
8D audio is a recently trending audio technique, specifically designed to be listened to with headphones and aiming to give the listener an immersive music-listening experience.
At Spatial Mastering, we have listened to hundreds of so-called “8D” audio tracks online and found that, fundamentally, they are the result of equalisation techniques, effects and panning combined. The majority of the tracks we heard apply these techniques to the master/stereo file only.
At Spatial Mastering, we generate 3D audio by treating every channel individually instead of working on the master/stereo file alone. The result is a fully immersive 360° experience. There are no shortcuts to the best possible results in the art of spatializing your work, and we push 3D audio to its maximum potential. We have been involved in 3D audio projects for the past 10 years. Below we explain the knowledge we have acquired over the years and how we apply it to your work.
The process delivers the distinctive sonic character of digital mastering. After your work has been digitally mastered at Spatial Mastering, your tracks will sound enhanced on playback systems ranging from laptops and headphones up to club sound systems.
Your tracks will be mastered in our state-of-the-art studios in London, crafted by our engineers with 40 years of combined experience in the field.
We strike the right balance between digital 3D techniques and analogue hardware in 3D digital mastering. 3D digital mastering will increase the perceived loudness of your tracks, and we take loudness, spatialization and LUFS standards into consideration. Alternatively, you may prefer a focus on clarity, headroom and balance, and more punch in your tracks.
Each track has its own characteristics and identity. We will master your track digitally, applying on an individual basis as required: multi-band compression, subtractive equalisation, distortion and harmonic generation, frequency enhancement and equalisation, compression, stereo imaging, limiting and metering.
Digital mastering is a complex process involving many variables. With our experience, we can achieve the results you expect and have your track ready to be streamed, played at live events and on club sound systems, and heard across all the media players available on planet Earth and elsewhere!
The Historic Evolution.
Throughout the development of the gramophone and phonograph, audio systems were mono audio. Subsequently, these developed from mono to stereo and then binaural stereo, cinema stereo, ambiphony, quadraphonic sound, ambisonics, ITU standard surround and 3D audio. 3D audio is not a new concept but was popularised in recent years primarily due to virtual reality, thereby merging the two worlds to create a fully immersive experience.
Head-Related Transfer Functions (HRTF).
An HRTF captures the alterations a sound wave undergoes on its way from the source to our ears, including diffraction and reflections off parts of the body: the head, pinnae, shoulders and torso. These alterations are what produce the illusion of spatially located sound. An HRTF is the Fourier transform of a head-related impulse response (HRIR): a complex function defined for each ear, carrying both magnitude and phase information.
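In practice, HRTF-based spatialization comes down to convolving a dry signal with a left/right HRIR pair. A minimal NumPy sketch follows; the impulse responses here are synthetic toys (real ones come from measured HRIR databases), built purely to illustrate interaural time and level differences:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Render a mono signal to binaural stereo by convolving it
    with a left/right head-related impulse response (HRIR) pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, n_samples)

# Toy HRIRs for a source on the listener's left: the right ear
# receives a delayed, quieter copy (interaural time/level differences).
fs = 48_000
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[20] = 0.5   # ~0.4 ms later, -6 dB

signal = np.random.default_rng(0).standard_normal(fs // 10)
binaural = render_binaural(signal, hrir_l, hrir_r)
```

Real measured HRIRs would replace the toy filters above; the convolution step itself stays the same.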
Binaural Audio.
Binaural recording and reproduction mirror the human two-ear auditory system, reproducing sounds specifically for a listener's two ears, usually over headphones or loudspeakers. Binaural can be seen as an extension of stereo: the sound is captured the way we actually hear it. This can be done with a headset carrying a microphone in each earpiece, with a dummy head, or with a head and torso simulator (HATS). Sound arriving at the two ears is shaped by the geometry of the head, torso and ears, producing the spatial cues on which binaural hearing relies.
Ambisonics.
Ambisonics is not a new format; it was developed in the 1970s. With the rise of AR and VR technology, recent years have seen renewed interest in ambisonics within the spatial audio field. Ambisonic microphones capture audio in four channels, including height information, giving full spherical directionality. The four signals captured are W, X, Y and Z, together known as first-order B-format (4 channels). B-format has two standard conventions, AmbiX and FuMa, which differ in the ordering (and normalisation) of the four channels.
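First-order B-format encoding of a mono source can be written down directly from its azimuth and elevation. A sketch using the FuMa convention (W attenuated by 1/√2; X/Y/Z are the front/back, left/right and up/down figure-of-eight components):

```python
import numpy as np

def encode_fuma(mono, azimuth, elevation):
    """Encode a mono signal into first-order B-format (FuMa W, X, Y, Z)
    for a source at the given azimuth/elevation in radians."""
    w = mono / np.sqrt(2.0)                         # omnidirectional
    x = mono * np.cos(azimuth) * np.cos(elevation)  # front/back
    y = mono * np.sin(azimuth) * np.cos(elevation)  # left/right
    z = mono * np.sin(elevation)                    # up/down (height)
    return np.stack([w, x, y, z])

# A source straight ahead at ear level: all directional energy lands in X.
sig = np.ones(4)
b = encode_fuma(sig, azimuth=0.0, elevation=0.0)
```

The AmbiX convention would reorder the channels and use a different normalisation, but the underlying trigonometric encoding is the same idea.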
Psychoacoustics and Playback Issues.
Psychoacoustics studies the complex interaction between humans and acoustic waves, starting with the peripheral auditory system and ending with the data-handling characteristics of cognition. The more knowledge we have in this field, the better we can recreate virtual simulations of spatial audio. Our job, however, is not always to simulate reality for accuracy's sake: simulating reality is often just the first step, after which we alter the content to make it more fun or entertaining.
Multichannel Sound Field.
Multichannel sound dates back to the 1930s and is today a standard format in the movie industry. A stereo mix presents a spectrum of positional information between two speakers, giving the impression that a sound can be positioned at any point between them. In reality, the sound is tied to the straight line between the speakers. This principle works well with headphones and when the listener sits in the sweet spot between the speakers; the further the listener moves from the speakers, the less effective the stereo spatialization becomes.
Spatialization in Surround.
Surround sound works on the same principle, although the spatial panning moves along multiple axes, giving the listener the sensation of sound coming from multiple sources. 5.1 and 7.1 give a similar impression, but the sound remains locked to the lines between the speakers. In such a channel-based system the listener can hear sound moving backwards and forwards, yet it never leaves the lines between the speakers to come closer to the listener.
Dolby Atmos, Auro-3D and DTS:X are more recent, object-based surround sound technologies that give the listener an immersive 3D spatial audio experience.
Decorrelation of an audio signal is a process that generates two or more incoherent signals from a single input signal. It has many applications in artificial auditory effects, such as broadening the apparent source width (ASW), enhancing subjective envelopment and producing subjective diffusion in multichannel reproduction. Something to take into consideration when working with height (Z-axis) speakers is the audio content delivered to them: it should be decorrelated from the sound coming from the ear-level speakers.
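One common way to build such a decorrelated copy, e.g. for a height-channel feed, is a random all-pass filter: randomise the phase spectrum while leaving the magnitude spectrum untouched, so the timbre stays similar while the waveform (and its correlation with the original) changes. A rough sketch:

```python
import numpy as np

def decorrelate(signal, seed=0):
    """Return a decorrelated copy of `signal` via a random all-pass
    filter: magnitude spectrum preserved, phase randomised."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)
    phase = np.exp(1j * rng.uniform(-np.pi, np.pi, spectrum.size))
    phase[0] = 1.0    # leave DC untouched (must stay real)
    phase[-1] = 1.0   # leave the Nyquist bin untouched likewise
    return np.fft.irfft(spectrum * phase, n=len(signal))

sig = np.random.default_rng(42).standard_normal(4096)
copy = decorrelate(sig)
```

Production decorrelators are usually shaped more carefully (shorter group delay, per-band control) to avoid smearing transients, but the magnitude-preserving principle is the same.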
To avoid ever-higher channel counts, Dolby introduced the object. In theoretical terms, an object represents an audio channel, but instead of being tied to a specific channel the object is free to be placed anywhere within 3D space at precise X, Y and Z coordinates. The audio is transmitted together with its positional data, carried as metadata, from the point of authoring to the Dolby Atmos decoder in the consumer playback environment. The decoder reads the metadata to learn where the audio should be positioned in space, then uses an algorithm to work out how best to route the audio to the available output channels feeding the appropriate speakers. Different objects can represent separate sounds in the mix; this is commonly referred to as object-based audio. One advantage of Dolby's object-based approach is that a mix created in a 5.1.2 environment can be played in 7.1.4 and, conversely, a mix created in a 25.1.8 environment can automatically be played back on a 5.1.2 setup.
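Conceptually, then, an object is just audio plus positional metadata, and the renderer maps it onto whatever speakers actually exist. A deliberately simplified renderer follows; the speaker layout and the proximity-based gain rule are illustrative assumptions, not the actual (proprietary) Dolby Atmos rendering algorithm:

```python
import numpy as np

# Hypothetical speaker layout: name -> (x, y, z) position, with a height layer.
SPEAKERS = {
    "L":   (-1.0,  1.0, 0.0),
    "R":   ( 1.0,  1.0, 0.0),
    "Ls":  (-1.0, -1.0, 0.0),
    "Rs":  ( 1.0, -1.0, 0.0),
    "Ltf": (-1.0,  1.0, 1.0),   # left top front (height layer)
    "Rtf": ( 1.0,  1.0, 1.0),   # right top front
}

def render_object(audio, position):
    """Render one audio object (signal + X/Y/Z metadata) to speaker feeds.
    Each speaker's gain rises with its proximity to the object; gains are
    power-normalised so the overall level is constant wherever the object sits."""
    pos = np.asarray(position, dtype=float)
    gains = {name: 1.0 / (np.linalg.norm(pos - np.asarray(spk)) + 1e-6)
             for name, spk in SPEAKERS.items()}
    norm = np.sqrt(sum(g * g for g in gains.values()))
    return {name: audio * (g / norm) for name, g in gains.items()}

sig = np.ones(4)
feeds = render_object(sig, position=(-1.0, 1.0, 0.0))  # object sitting at "L"
```

The key point the sketch shows is that the same object metadata can be re-rendered against a different `SPEAKERS` table, which is how one mix adapts from 5.1.2 up to 7.1.4 or down again.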
Data from several studies suggest that in some cases a channel-based approach may be preferred, even for Atmos delivery. In that case each speaker position is configured as an object: for instance, in a 5.1 setup the centre channel is configured as an object, and any sound intended for that position is routed to the channel feeding that object and plays at the front centre of the sphere.