Developed as a submission for the 2020 AES/MATLAB Plug-in Competition.
Update (17SEP20): This plugin was accepted as a finalist in the competition.
This describes a VST plugin that takes a mono input and encodes it with the time and level cues of a user-defined stereo microphone array.
Find the releases of the plugin on GitHub.
The plugin will continue to be developed.
Excerpt from WIP documentation:
Stereophonic recording controls localization by balancing the pressure-level and time-difference cues that the auditory system interprets as interaural time differences (ITD) and interaural level differences (ILD). Within stereophonic recording praxis, it is understood that the relative weighting of ITD and ILD cues contributes to different desirable qualities in the resultant soundfield. Stereophonic microphone arrays are also chosen to achieve a desired balance of direct and reflected sound, and to match the recording angle of the perceptual "soundstage" so that it translates to audio reproduction systems with minimal angular distortion.
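As a rough illustration (not code from the plugin itself), the time and level cues produced by two common array types can be estimated directly from array geometry, assuming a distant source modelled as a plane wave and an idealized cardioid polar equation:

```python
import math

C = 343.0  # assumed speed of sound in air, m/s


def spaced_pair_itd(spacing_m: float, source_angle_deg: float) -> float:
    """Inter-channel time difference (seconds) for a spaced omni (AB) pair,
    assuming a plane wave arriving from source_angle_deg (0 = centre,
    positive = toward the left microphone)."""
    theta = math.radians(source_angle_deg)
    return spacing_m * math.sin(theta) / C


def coincident_pair_ild(mic_angle_deg: float, source_angle_deg: float) -> float:
    """Inter-channel level difference (dB) for a coincident cardioid (XY)
    pair angled +/- mic_angle_deg, using the ideal cardioid polar
    equation g(phi) = 0.5 * (1 + cos(phi)). Positive result = left louder."""
    def cardioid(phi_deg: float) -> float:
        return 0.5 * (1.0 + math.cos(math.radians(phi_deg)))

    g_left = cardioid(source_angle_deg - mic_angle_deg)
    g_right = cardioid(source_angle_deg + mic_angle_deg)
    return 20.0 * math.log10(g_left / g_right)


# A source 30 degrees left of centre on an AB pair spaced 40 cm apart
# arrives at the left microphone first:
itd = spaced_pair_itd(0.40, 30.0)
# The same source on an XY pair of cardioids angled +/- 45 degrees is
# louder in the left channel:
ild = coincident_pair_ild(45.0, 30.0)
```

In a spaced pair the cue is almost entirely time-of-arrival; in a coincident pair it is entirely level, which is why the choice of array weights ITD against ILD as described above.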
This is, however, at odds with typical stereophonic localization practice in audio signal processors, which tend to prioritize localization solely through ILD cues. In certain contexts this means that recorded signals being mixed together during post-production are localized using different perceptual locative cues, even though the intent is to produce a coherent, qualitatively consistent perceptual soundstage.
By modelling the time and level relationships between a sound source and the microphones in a stereophonic recording array, a monophonically recorded sound can be localized within the perceptual soundstage using the time and level cues appropriate to the modelled array. This paper will outline an algorithm, discuss an implementation for abstracting these relationships into a CPU-efficient model, and describe how the appropriate parameters are exposed to the user.
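A minimal sketch of the core idea under simplifying assumptions (the function name and parameters are hypothetical, the inter-channel delay is rounded to whole samples, and both cues are assumed to point to the same side; a real implementation would use fractional-delay interpolation):

```python
def encode_mono_to_stereo(mono, sample_rate, itd_s, ild_db):
    """Encode a mono signal into stereo using modelled array cues.

    itd_s:  inter-channel time difference in seconds; positive means the
            source is left of centre, so the right channel is delayed.
    ild_db: inter-channel level difference in dB; its magnitude is
            applied as attenuation to the far (delayed) channel.
    """
    delay = round(abs(itd_s) * sample_rate)        # delay in whole samples
    gain = 10.0 ** (-abs(ild_db) / 20.0)           # linear attenuation < 1
    near = list(mono) + [0.0] * delay              # zero-padded near channel
    far = [0.0] * delay + [gain * x for x in mono] # delayed, attenuated copy
    if itd_s >= 0:                                 # source left of centre
        return near, far                           # (left, right)
    return far, near
```

Because both the delay and the gain are derived from one modelled array, the resulting stereo image carries matched ITD and ILD cues rather than the level-only panning of a conventional pan pot.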