[FEATURE] Spatial Audio
Copy-Paste from a conversation with Phil W. of the support team:
I'm working on a project where each of the two talents has a wireless lav mic (better sound quality in the rooms we recorded in than a stereo shotgun mic, which we used as backup). I had the idea to change the audio balance according to their movement in the shot. So if talent 1 is on the right, the balance is biased to the right, and if she moves to the left the bias shifts to the left. Same for talent 2 - trying to create more immersion through sound. Since it is tedious to keyframe it all by hand for long videos, and with a deadline approaching, I thought of motion tracking them and parenting the tracks to the audio balance. But apparently this doesn't work as I hoped.
I had this whole idea for 2 reasons:
1. We used the Rode Wireless Go 2 to record the sound. With this set you are able to split each lav mic to the left and the right channel, so that you can edit the sound of the 2 mics separately in post. While reviewing our footage, I noticed the immersion this creates when the talents are on the correct side, visually. It has a somewhat pleasing effect on the watching experience.
2. After that, I remembered that video games (and VR?) make use of this type of sound effect. There are movies with Dolby Surround etc. that use this effect as well, though rather for environmental sound than for dialog...
Since sound is 50% of a video, do you think a 2D or 3D sound feature could make its way into Hitfilm?
Concept-wise, I can think of 2 possible ways to implement it:
1. For 2D, or simple applications, you should be able to use the X-axis data of a single-point track to automatically adjust the balance. You would also need to be able to choose at which X value you hit the maximum balance on either side. Also, being able to set the maximum shift (like max. 75% to the left) would be nice.
2. For 3D, or generally more complex applications, you would need to be able to place "virtual microphones" into the 3D plane and to place the sound source (or by default have the active virtual camera hooked up to it). Depending on the orientation and distance to the virtual mic, the "perceived" sound would change its properties (volume, balance, maybe even reverb, pitch, distortion or any other sound-based effect). Kind of like Doppler shift on steroids. I wouldn't be surprised if environments or engines for creating video games are equipped with functionality similar to that.
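For what it's worth, idea 1 boils down to a small mapping function. Here's a rough Python sketch of what HitFilm would have to compute per frame (the names `x_left`, `x_right` and `max_shift` are mine, not anything that exists in HitFilm):

```python
import math

def balance_from_track_x(x, x_left, x_right, max_shift=0.75):
    """Map a tracked X coordinate to a stereo balance in
    [-max_shift, +max_shift] (negative = left, positive = right)."""
    t = (x - x_left) / (x_right - x_left)   # normalize between calibration points
    t = min(max(t, 0.0), 1.0)               # clamp outside the calibrated range
    return (2.0 * t - 1.0) * max_shift

def stereo_gains(balance):
    """Constant-power pan: convert balance (-1..1) into (left, right) gains."""
    angle = (balance + 1.0) * math.pi / 4.0  # 0 .. pi/2
    return math.cos(angle), math.sin(angle)
```

So per frame you'd feed the tracker's X value through this and get a balance keyframe out, with the two calibration X values and the max shift as user settings.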
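Game engines do indeed ship something like idea 2 (e.g. OpenAL, or Unity's spatial audio sources). To illustrate what a "virtual microphone" could compute per frame, here's a rough Python sketch under assumed conventions (Y-up scene, mic looking down -Z, inverse-distance volume falloff); none of these names come from HitFilm:

```python
import math

def virtual_mic_gains(src, mic, mic_forward=(0.0, 0.0, -1.0), ref_dist=100.0):
    """Sketch of a 'virtual microphone': volume falls off with distance,
    and balance follows the source's azimuth around the mic's forward
    axis. All names and the falloff law are illustrative assumptions."""
    dx, dy, dz = (s - m for s, m in zip(src, mic))
    dist = math.sqrt(dx*dx + dy*dy + dz*dz)
    volume = min(1.0, ref_dist / max(dist, 1e-6))  # inverse-distance falloff

    # Horizontal-plane azimuth: 0 = straight ahead, +pi/2 = hard right
    fx, _, fz = mic_forward
    azimuth = math.atan2(dx * -fz + dz * fx,   # component along mic's right
                         dx * fx + dz * fz)    # component along mic's forward
    balance = math.sin(azimuth)                # -1 (left) .. +1 (right)

    angle = (balance + 1.0) * math.pi / 4.0    # constant-power pan
    return volume * math.cos(angle), volume * math.sin(angle)
```

The other effects I mentioned (reverb, pitch/Doppler, distortion) would just be more functions of the same distance and orientation inputs.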
Thanks Phil, for your support and for encouraging me to post this idea to the forum!