See all posts

Comparing Panorama 7 virtualizer to Logic Pro's Atmos renderers

After releasing Panorama 7 (PN7), I've thought about creating other variations of Panorama. One possibility would be a surround virtualizer. A virtualizer takes a discrete surround input and outputs binaural audio for playback over headphones, allowing you to mix surround formats without the need for a multichannel speaker setup.
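The core of a virtualizer like this is simple: convolve each discrete channel with the pair of head-related impulse responses (HRIRs) for that speaker's position, and sum the results into a two-channel binaural output. Here is a minimal sketch (my own illustration, not Panorama's implementation; the `virtualize` helper and its argument shapes are hypothetical):

```python
import numpy as np

def virtualize(channels, hrirs):
    """Sum of per-channel HRIR convolutions -> (2, N) binaural output.

    channels: dict mapping channel name to a 1-D signal array
    hrirs:    dict mapping channel name to an (hrir_left, hrir_right) pair
    """
    # Output length: longest convolution result across all channels.
    n = max(len(sig) + max(len(hrirs[name][0]), len(hrirs[name][1])) - 1
            for name, sig in channels.items())
    out = np.zeros((2, n))
    for name, sig in channels.items():
        for ear, h in enumerate(hrirs[name]):
            y = np.convolve(sig, h)   # filter the channel with this ear's HRIR
            out[ear, :len(y)] += y    # mix into the binaural bus
    return out
```

A real virtualizer would add room reflections and late reverb on top of the direct-path HRIR filtering, which is what the room settings discussed below control.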

I decided to mock up a 7.1.4 virtualizer by placing PN7 on individual tracks and panning each track to the corresponding speaker position. I compared this to the binaural monitoring formats available in Logic Pro's Atmos renderer, specifically the Dolby renderer and the Apple renderer (Standard profile).

I first recorded channel names that would serve as the audio content, just like a channel test. For the Atmos renderers I created an Atmos session with 12 mono tracks, imported the channel name clips, and set the panner on each channel to typical 7.1.4 locations: fronts at +/- 30 deg, center at 0 deg, surrounds at +/- 90 deg, rear surrounds at +/- 135 deg, top fronts at +/- 45 deg and 45 deg elev, top rears at +/- 135 deg and 45 deg elev. In this demo the LFE was set up just like the center. I rendered this session with different options for the Atmos renderer format.
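The panner placements above can be collected into a small table of (azimuth, elevation) pairs in degrees. The channel labels here are my own shorthand, not names from the session:

```python
# 7.1.4 speaker positions used in the test session.
# (azimuth_deg, elevation_deg); negative azimuth = left of center.
SPEAKERS_714 = {
    "L":   (-30, 0),   "R":   (30, 0),
    "C":   (0, 0),     "LFE": (0, 0),    # LFE panned like center in this demo
    "Ls":  (-90, 0),   "Rs":  (90, 0),
    "Lrs": (-135, 0),  "Rrs": (135, 0),
    "Ltf": (-45, 45),  "Rtf": (45, 45),  # top fronts
    "Ltr": (-135, 45), "Rtr": (135, 45), # top rears
}
```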

The Dolby renderer has near, mid, and far options for each surround bed speaker. These differ in the amount of simulated room reverb: the near option sounds anechoic, the mid option has some reverb, and the far option has the most reverb. I'm not sure why these options are settable on a per-channel basis, but I used the same setting for all channels. The Apple renderer has only one setting, which has about the same amount of reverb as the Dolby mid option.

For the Panorama mock virtualizer, I created a stereo session with 12 mono tracks and inserted PN7 on each track, using the Human HRIR and setting the spatial locations to the speaker locations using the built-in presets. For room reverb I started with the "Medium Room" setting for both reflections and reverb, which resulted in even more reverb than the Dolby far setting. Setting the same reverb parameters on each instance was tedious; having a way to lock multiple instances together would be helpful.

I decided to roughly match the amount of reverb in the Apple and Dolby mid renderers. I ended up using the medium room reflections, mixed at -12 dB, and instead of the PN7 late reverb, I used Convology XT as a send effect, set to the "Small Meeting Room" IR from the Real Spaces Rooms Library, mixed at -20 dB. I thought this sounded a little brighter than damping the PN7 late reverb to this small size.
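For reference, those mix levels translate to modest linear gains. A quick sketch of the dB-to-amplitude conversion (the helper name is my own):

```python
def db_to_lin(db):
    """Convert a dB gain to a linear amplitude factor."""
    return 10 ** (db / 20)

reflections_gain = db_to_lin(-12)  # reflections mixed at roughly 0.25
late_reverb_gain = db_to_lin(-20)  # late reverb send at exactly 0.10
```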

Here are the resulting MP3 files, which should be listened to over headphones.

Apple Renderer

Dolby Mid Renderer

Panorama

For completeness, I've also included the near and far Dolby results:

Dolby Near Renderer

Dolby Far Renderer

Spatially, they all sound pretty similar to me; obviously the location realism will depend on how well the HRTFs used in the rendering match your ears. These renderers all use a generic set of HRTFs. Apple offers a personalized spatial audio profile, which will tailor a set of HRTFs to you based on images of your ears, and Apple also supports dynamic head tracking with certain headphone devices.

The Logic Atmos renderer uses a 7.1.2 surround bed. I think the top front and rear channels are mixed to the top mid location, hence there is no spatial difference between the top front and rear channels; at least I don't hear any. The Panorama rendering is better in this regard, depending on how well the listener can localize the top front and rear locations, which in turn depends on the HRTFs.

The Dolby renderer is noticeably brighter than Panorama, particularly in the front channels. The PN7 HRTFs are diffuse-field equalized, so they are flat in a diffuse-field sense, but most of the high frequencies come from lateral directions where there is no shadowing (one can hear that the Panorama side surround channels are brighter than the fronts or rears). And the Dolby and Apple renderers may be applying additional EQ for headphones.

So what's the takeaway? Basically, the limiting factor is the non-individualized HRTFs. If you start with a well-chosen non-individualized set of HRTFs, equalize and implement them properly, and combine them with a good-sounding reverb, you will get a pretty decent sounding virtualizer. Results will differ somewhat based on the choices of HRTFs, equalization, and reverb parameters. Providing individualized HRTFs and low-latency head tracking can further enhance the sense of realism.

Here's a zip of the 7.1.4 channel name clips I used:

Channel Name Clips

If you're interested in commenting, send a message to Bill Gardner via our support email or contact page.