Using Reverb As A Composer With Minimal Engineering Expertise

One of the trickiest parts of working with virtual instruments is using reverb to create a sense of space. Fortunately for me, a lot of my work is orchestral, so even though it is complicated, once I set it, I can more or less forget it.

There are several parts to this process - please note that the following is not exhaustive.

What is Reverb?

Reverb is a general term for effects that process an incoming sound and output reflections (sound bouncing off the walls and objects in a room), helping the listener discern the nature of the space in which the sound was generated.

Whether you use algorithmic or convolution reverb (see https://www.izotope.com/en/learn/what-digital-reverb-actually-does.html), the general goal is the same: generate early reflections and late reflections, then mix those in with the original dry signal to simulate real-life acoustics.
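In code terms, convolution reverb is exactly that mixing process: each sample of an impulse response turns the dry signal into a scaled, delayed copy of itself, and the copies are summed. A minimal sketch in plain Python (function names are my own illustration; a real plugin would use FFT-based convolution for speed):

```python
def convolve(dry, ir):
    """Discrete convolution: each impulse-response sample produces a
    scaled, delayed copy of the dry signal; the copies are summed."""
    wet = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            wet[i + j] += d * h
    return wet

def mix(dry, wet, wet_level=0.3):
    """Blend dry and wet signals; output length matches the wet signal."""
    out = [w * wet_level for w in wet]
    for i, d in enumerate(dry):
        out[i] += d * (1.0 - wet_level)
    return out

# a single click through a two-sample impulse response
reverbed = mix([1.0, 0.0, 0.0], convolve([1.0, 0.0, 0.0], [1.0, 0.5]), 0.5)
```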

  • Early reflections (ER) are the first reflections to return to our ears after a sound is made: the sound travels to the surfaces of the space and is reflected once.
  • Late reflections (Tail) are all the reflections that occur after the ER. Depending on the size of the room and how absorptive its surfaces are, a sound may be reflected many times, losing intensity with each bounce, until it finally dies away.

If you've ever stood in an empty church or a cavernous room, you will be familiar with this Tail phenomenon: essentially many echoes stacked on top of each other.
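To make the ER/Tail split concrete, here is a toy impulse response built from both parts: a direct spike, a few single bounces, then an exponentially decaying noise tail. The reflection times, gains, and RT60 value are arbitrary illustrations, not measurements of any real room:

```python
import random

def make_impulse_response(sr=44100, er_times_ms=(12, 19, 27), er_gain=0.5,
                          tail_seconds=1.5, rt60=1.2, seed=0):
    """Toy impulse response: direct sound, a few discrete early reflections,
    then an exponentially decaying noise tail (the stacked-echo 'Tail')."""
    rng = random.Random(seed)
    n = int(sr * tail_seconds)
    ir = [0.0] * n
    ir[0] = 1.0                                   # direct sound
    for t_ms in er_times_ms:                      # single bounces off surfaces
        ir[int(sr * t_ms / 1000)] += er_gain
    # tail envelope drops 60 dB over rt60 seconds
    decay_per_sample = 10 ** (-3.0 / (rt60 * sr))
    env = er_gain
    start = int(sr * max(er_times_ms) / 1000) + 1
    for i in range(start, n):
        env *= decay_per_sample
        ir[i] += rng.uniform(-1.0, 1.0) * env
    return ir
```

Convolving a dry signal with an IR like this is the whole trick; commercial convolution reverbs simply use IRs recorded in real spaces.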

I will now share how I achieve this in my own setup. There are three steps:


1. Virtual Instrument (top right)

Sometimes, the virtual instrument will come with reverb or reverb-like settings. In this example, Samplemodeling's Violin Ensemble has a page that allows me to edit a number of parameters intended to influence the spatial sound of the instrument.

As you can see, there are early reflections, distance, and so on. The intensity of these and other similar parameters differ from library to library and it is best to treat every patch as an individual.

I used to leave the early reflections and distance on the low side because I planned to add those in with the next two steps, but I eventually realized that adding a little of each early in the process made the later steps more effective and reduced the need for heavy-handed settings.

I left the predelay and width untouched since I was already happy with those settings. You will see that the violins are already pre-panned to the left. Since that is where I have them in my setup, I also left that setting alone.


2. Virtual Soundstage (Left)

Next, on the bus that all my violin 1 instruments output to, I have Virtual Soundstage from Parallax Audio as an insert.

This plugin has a great GUI with different acoustic environments that you can actually see on the screen; you can drag your loaded instrument anywhere on that screen to influence distance, panning, and early reflections, which makes it very user-friendly.

A useful parameter it has is Air Absorption, which aims to mimic the attenuation of various frequencies as a sound travels through air. High frequencies dissipate more rapidly than low frequencies, so a distant instrument is not uniformly quieter across all frequencies: it also sounds darker, with the highs attenuated.

Failing to account for this makes distance much harder to simulate, which in turn makes it tempting to add excessive reverb to compensate.
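As a rough sketch of that idea (my own toy model, not Virtual Soundstage's actual algorithm), distance can be simulated as a level drop plus a low-pass filter whose cutoff falls as the source moves away, so a far instrument gets both quieter and darker:

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sr=44100):
    """Simple one-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for x in signal:
        y += a * (x - y)
        out.append(y)
    return out

def apply_distance(signal, distance_m, sr=44100):
    """Toy distance model: level falls with 1/distance, and the low-pass
    cutoff falls with distance to mimic air absorbing highs faster than lows."""
    gain = 1.0 / max(distance_m, 1.0)
    cutoff = 20000.0 / max(distance_m, 1.0)   # closer = brighter
    return [s * gain for s in one_pole_lowpass(signal, cutoff, sr)]
```

With this model an 8 kHz component loses more than the plain 1/distance level drop would predict, which is exactly the darkening effect described above.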


3. Altiverb (Bottom Right)

I have been using Altiverb from Audio Ease as my primary reverb plugin for the better part of the last decade. It is a convolution reverb, which I prefer for simulating spaces that exist in the real world. I have it as a Send effect, to which I send all my major section buses (Woodwinds, Strings, etc.).

As you can see in the bottom right, I unchecked the "early" box to only use the tail end of the reverb as I have already applied sufficient early reflections either from the instrument or Virtual Soundstage.

The simulated space I've used is the Todd-AO scoring stage (https://ultraverse.fandom.com/wiki/Todd-AO_Scoring_Stage) which is a historic space that has been used to record scores for many of the most famous Hollywood movies.

Now, because Tails are made up of many reflections stacked on top of each other, things can get sonically out of hand if the music is intense or contains a lot of notes.

For this reason, I pre-emptively EQ certain frequency bands that I know tend to be overrepresented in my pieces when mixing (500-600 Hz, 1300 Hz, a high-pass filter, etc.).
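A send can be tamed like this with a standard peaking cut before it reaches the reverb. This sketch uses the well-known biquad formulas from the Audio EQ Cookbook; the band, gain, and Q values here are placeholders, not specific mix settings:

```python
import math

def peaking_eq(signal, f0, gain_db, q=1.0, sr=44100):
    """Biquad peaking EQ (Audio EQ Cookbook coefficients). A negative
    gain_db cuts a band around f0, e.g. taming a 600 Hz build-up
    before the signal hits the reverb."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

A -6 dB cut at 600 Hz roughly halves the amplitude at the center frequency while leaving distant bands essentially untouched, which is the "surgical" behavior you want on a reverb send.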

In addition, for dynamic pieces whose character varies across the track, I automate the level of the wet signal to suit each moment. In a quiet moment with few notes, I increase the reverb; in a rhythmically dense moment where I'd like the articulation to be crisp, I reduce it.
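That automation amounts to a simple inverse mapping from note density to wet level. A toy curve (the thresholds and levels are purely illustrative):

```python
def wet_level(notes_per_second, lo=0.15, hi=0.45):
    """Toy automation curve: sparse passages get more reverb, dense
    passages less, so articulations stay crisp. Treats 10 notes/s
    (an arbitrary threshold) as 'maximally dense'."""
    density = min(notes_per_second / 10.0, 1.0)
    return hi - (hi - lo) * density

# e.g. one wet-level value per bar, driven by that bar's note density
levels = [wet_level(d) for d in (0.5, 2.0, 8.0)]
```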

Here is Itzhak Perlman to explain why (embedded video):

Conclusion

That's all. Thanks for reading!


Dan Benner

Bridging AI/ML complexity with human needs

1y

Thanks for sharing, and nice to learn more about reverb on LinkedIn! (4 of my 8 pedals are reverb, so safe to say I'm a reverb junkie.)

Mark Templin

Product Manager @ Game-Based Innovations Lab (GBIL)

1y

The design of the sound space is so important. Also, as a side tangent, I would like to know what you think of preverb?

Super cool info. Thanks!

Cliff Pia

Creative Technologist | AI Integrated Storytelling & Event Direction | Content Creation | Exec Presentation Coach | Voice with Vox

1y

I love that you took the time to explain your very cool approach to creating realistic reverbs for your virtual instruments. In my early days as a sound designer, I had one crazy week where I was asked to record Gregorian chants in a huge stone church, where I placed a plethora of mics in so many different places it made my head spin (and it was unnecessary, since I ended up primarily using one main stereo mic setup and a touch of the other mics for flavor). Then a few days later we recorded didgeridoos for Michael Harner, author of "The Way of the Shaman," for a guided shamanic journeys record. That session was a trip. Then I produced a Phil Collins-esque gated song for a band where we used a huge EMT plate reverb and a brand new Lexicon 224 digital reverb (just for the snare and tom-toms ;) and I ended the week producing a film soundtrack with a 40-piece orchestra for a film that ended up getting nominated for an Academy Award. One of the instruments in the orchestra was a 400-year-old Stradivarius cello - he only made 80 - and in each session it was ALL about capturing or creating the exact right reverb. Verb is such a science and an art. Sorry to go on and on but I love this shit, as do you, Xiao.an :)
