Extension thread from "An Open List of Console Feature Requests" thread

David Luscombe

Features I would use pretty much every time I was in front of a mix console that wasn't a cold, "one-off" mix:
  1. Delay panning: I've got a single guitar amp with one mic in front of it, and I'm supposed to make it "sound like the record," where it was double-tracked, then reamped, and then all of that hard panned L and R. Give me the option to place an input in both the L and R channels, and have the pan knob give me a range of delay (0-15 ms would be enough; a rough sketch of this and the faux double follows the list).
  2. Faux Double: Give me an ability similar to number one, but retain the level-panning position and give me a double of the channel with an adjustable delay time. Throw in a little chorus or other delay modulation to make it feel more like a real double. Use liberally on vocals, acoustic instruments, etc.
  3. Fader level remapping: Once a mix is up and running, with the compression doing its thing, I don't need 20 dB fader rides. I need to be able to precisely and quickly make a 1-3 dB fader ride. Give me a way to quickly change the range of a fader from "coarse" to "fine." A range of +5 to -10 dB would probably be enough, as long as pulling the fader fully down still turned the channel volume to zero. Put the fader in coarse mode, get the basic mix, then switch to fine, and the "coarse" level would become the new nominal zero; the fader would then snap to 0 dB. Each channel would be hot-switchable between modes with a single press.
  4. Dynamic high pass / low pass: Allow me to set a high pass or low pass on a channel that would change its corner frequency in response to the level of the input. Both raising and lowering the frequency would be allowed, and there would be peak, envelope-following, and RMS detector circuits.
  5. Multiband expander: A pretty common studio tool, it would be helpful to have as a counterpart to a multiband compressor. As an example, you could aggressively downward expand high-frequency cymbal bleed on drums while only slightly downward expanding the body of the drum sound between hits. Upward expansion of parts of a reverb tail is also an effective tool for shaping ambience. You can also use the sidechain to key pieces of the mix so they pop out as necessary.
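To make items 1 and 2 concrete, here is a minimal offline sketch in Python/NumPy of what that processing could look like. The function names, delay ranges, and modulation settings are my own assumptions for illustration, not a description of any existing console feature:

```python
# Sketch of "delay panning" (item 1): the same input goes to both channels at
# equal level, and the pan control sets an inter-channel delay of up to 15 ms.
import numpy as np

def delay_pan(mono, pan, sample_rate=48000, max_delay_ms=15.0):
    """mono: 1-D float array; pan: -1 (hard left) .. +1 (hard right). Returns (N, 2) stereo."""
    delay_samples = int(round(abs(pan) * max_delay_ms * sample_rate / 1000.0))
    delayed = np.concatenate([np.zeros(delay_samples), mono])[: len(mono)]
    if pan >= 0:                      # pan right: right side arrives first, left is delayed
        left, right = delayed, mono
    else:                             # pan left: left side arrives first, right is delayed
        left, right = mono, delayed
    return np.stack([left, right], axis=1)

# Sketch of a "faux double" (item 2): a delayed copy whose delay time wanders a
# little (slow sine modulation) so it feels like a loose second performance.
def faux_double(mono, sample_rate=48000, base_delay_ms=12.0, mod_depth_ms=1.5, mod_hz=0.3):
    n = np.arange(len(mono))
    delay_ms = base_delay_ms + mod_depth_ms * np.sin(2 * np.pi * mod_hz * n / sample_rate)
    idx = np.clip(n - delay_ms * sample_rate / 1000.0, 0, len(mono) - 1)
    doubled = np.interp(idx, n, mono)          # fractional-delay read via linear interpolation
    return np.stack([mono, doubled], axis=1)   # dry on the left, "double" on the right
```

One attraction of delay panning, in the spirit of Jack's complaint below, is that the level stays equal in both channels; a listener who mostly hears one hang still gets the source at full level, and only the image moves.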

I would combine the first two, for faux panning. I am sick and tired of going to concerts and only hearing half the band, or half the drum kit, or half the lead instruments. Last year I was in row ten, four seats off center, and I got none of the cellist and too much rhythm guitar. The show before that it was too much keys and not enough guitar. At both shows the backup vocals favored the person on my side of the stage. It is ridiculous for the person mixing to be masturbating at the board while only 5% of the audience can participate in his fantasy world.


I found these two posts particularly interesting, in particular the delay panning and faux double, and then Jack's comments.

I would be interested to know what techniques people use regarding these issues and what they do to combat what Jack is talking about when they mix.

Cheers Dave
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

I would be interested to know what techniques people use regarding these issues and what they do to combat what Jack is talking about when they mix.

Cheers Dave

He is saying that if you are going to mix in stereo, then make sure your PA actually covers the whole place in stereo. If you can only hear the left hang (for example), you should still be able to hear the stuff panned to the right side of the stage. Having stereo coverage of only the handful of seats down the middle is not acceptable. If this is what you actually have, then mix the show MONO, NOT STEREO.

However, mixing in stereo can create awesome "board tapes"... if you need "awesome board tapes" to keep your job, you need to find a different job or work for someone else because you are shafting the audience... you know, those people who are actually footing the bill for this event to happen... you should be working for them. If the show sounds awesome, they will let the band know.
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

Yes, I realise what he was saying. What I was hoping people would be willing to share is what techniques they might use to "stereoize" and add depth and separation in a mono mix.

I'm always keen to see what other people are doing, as I have definitely been responsible in the past for "masturbating at the board, while only 5% of the audience can participate in his fantasy world". I have really been thinking about this and have started mixing in mono, but was curious about what other people have done.

Cheers
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

Stereo is basically standard, and stupid, in the live music world. It's not about having full audience coverage from both "stacks". It's the arrival times from those two stacks being tens of milliseconds different for just about everybody in the audience. Bob McCarthy has a great paper on the problem.
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

Yes, I realise what he was saying. What I was hoping people would be willing to share is what techniques they might use to "stereoize" and add depth and separation in a mono mix.

Cheers

David,

There was a season when LCR installation PAs were in vogue. A younger, more naive version of me was all into them on principle. Later, though, as I became more active out in the world re-tuning systems, I found that these systems didn't work very well, even when they covered the audience sufficiently from each cluster in terms of level. I've come to appreciate that this comes down to the limits on how much time offset your brain will tolerate and still interpret imaging, assuming equivalent levels from each source.

I've come to believe that this time window is around 20 ms, and that levels from each system element need to be really similar, within 3 dB at most, and ideally within 1.5 dB and 15 ms in all areas of each coverage zone. These days I tend to implement alternating L/R/L/R systems across the audience area, and try to break the audience into <20 ms "blocks" from the available aim points. Stereo is a huge tool to help inexperienced sound mixers find more space in the mix, and I find that most people will tolerate a reversed stereo image pretty well. L/R/L/R systems are also more affordable to implement than LCR.
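As a rough illustration of those windows, here is a small Python sketch that computes the arrival-time and level offsets between two hangs at a given seat. The hang spacing, the seat positions, and the simple point-source level estimate are all assumptions for illustration:

```python
# Back-of-the-envelope check against the windows above (roughly 20 ms / 3 dB,
# tighter 1.5 dB / 15 ms). Positions are (x, y) in metres.
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

def offsets(seat, src_a, src_b):
    """Return (arrival-time offset in ms, level offset in dB) between two point sources."""
    da, db = math.dist(seat, src_a), math.dist(seat, src_b)
    dt_ms = abs(da - db) / SPEED_OF_SOUND * 1000.0
    dl_db = abs(20.0 * math.log10(da / db))   # inverse-square (point source) estimate
    return dt_ms, dl_db

left, right = (-6.0, 0.0), (6.0, 0.0)         # hangs 12 m apart at the downstage edge
for label, seat in [("near centre, row 10", (-1.5, 10.0)),
                    ("far house left, row 8", (-15.0, 8.0))]:
    dt, dl = offsets(seat, left, right)
    print(f"{label}: {dt:.1f} ms, {dl:.1f} dB")
```

With these made-up positions, the near-centre seat stays comfortably inside both windows while the far-side seat blows well past them, which is consistent with the experiences described elsewhere in this thread.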

---

As for mixing examples, I'll put my own hand out there. The Dropbox link below is to a 320 kbps MP3 of a song I mixed at church back in January. Our room is currently configured as a mono, single-point PA. The file is built from the "printed" channels from the console, including the equalization and dynamics processing on the PreSonus.

I put the channels back into the DAW and then turned the mix into a faux stereo one using delay panning. For instance, the acoustic guitar and electric guitar both use "delay stereo," which is to say the same input at the same level in both channels but with different delay times; I believe the acoustic is an 11 ms delay and the electric is 13 ms. I then applied reverbs as close as possible to the ones in the console and exported to MP3.

There is no additional equalization or fader automation on the file; any changes in level are the band playing together (or not), and the tones are the same ones that came out of the sound system. There is a tiny bit of limiting that grabs about 2 dB of gain reduction on the loudest snare hit. The idea is to give the band something they can listen to that feels like a "normal" recording while they work on playing as a unit.

https://www.dropbox.com/s/sh9arlta0m9d4qh/01-17-16 In Christ Alone.mp3?dl=0
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

What I was hoping people would be willing to share is what techniques they might use to "stereoize" and add depth and separation in a mono mix.
Cheers

Sometimes separation is not needed; what is needed is blending. Take the example of the background singers.
(Oh, by the way, before I go further, I have masturbated with the best of them. Not only with stereo. One time I took every single piece of processing/FX I had to a gig, and made a point to use every single channel. Sometimes it has been in an empty house knowingly for my only benefit. So when I talk about it, I know about it first hand. I move forward.)

A technique for balancing the backup vocals is to use stereo headphones and AFL them. Especially if you are not intimately familiar with who is singing what part, this can be very instructive on how to set the levels. Then it should be presented to the audience in mono.

Note that balance and separation are very different. There is no need for separation. Separation is when one vocal is 12 dB louder than the other, and the other is heard vaguely from somewhere on the other side of the room.

When musicians hear three-part harmony, they can instinctively pick out the parts. When I hear it on the radio, I don't analyze it; I just take the overall effect and roll with it. Groove with it. But when I mix it, I consciously listen to spectral tone, not notes. This can be misleading, as the notes will "separate" themselves. What is needed is balance.

I think two things can happen here. One, when people go to "see" someone they know, they want to hear their friend, even if they are the fourth-chair violin. Many times they are hearing their friend and just don't know it, unless they are completely front and center.

(Possibly boring personal anecdote warning: one time I was mixing a 12-piece band in a 2k-seat theater. This was a solo performer debuting his most recent CD. He played almost exclusively solo, but had all the musicians from the album for a small series of shows. There are a couple of ways this can go. Sometimes the soloist's acoustic guitar is as much a prop as anything, and just gets blended in with the band. In this case, it was not. It was his voice, then his guitar, then the rest of the band. After the show the electric guitarist's grandmother came over and read me the riot act because she came to see her grandson and couldn't hear every note he played. I suffered her politely, and got a nod and a wink from one of the management types who was nearby. I of course was never hired by that guitarist again, but that was OK. That night my bread was buttered by the solo artist, and the most important separation I needed was between him and the band. /end possibly boring personal anecdote)

Another thing that happens is that when one tries to get too much separation, volume escalation happens. Note: this has as much or more to do with the band than with the person mixing the sound. To my mind, when things are good, all that needs to be done is to turn the channels on, with minimal futzing.

In my thoughts, an example of not needing separation is some of the Scorpions songs with three guitars. I like it when the third guitar just goes chuga chuga chuga on the same note for an extended period, while the other guitars do some meedly meedly, or crunch and stop, and the bass boings around on notes complementing the other guitars, all interspersed with vocals. I can't always hear the chuga chuga chuga; sometimes it's only uga uga uga, and sometimes it's eclipsed, but it is always there, and when it is covered I still know it's there, and when it comes back I know that it was never gone.
Separation is not needed; layers are what is important.
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

He is saying that if you are going to mix in stereo, then make sure your PA actually covers the whole place in stereo.

In my mind, it goes beyond this. Just getting coverage to the whole audience from both hangs is not enough. The whole audience needs to be within a certain difference in distance from both hangs. There are very, very few instances where this happens.

Stereo is basically standard, and stupid, in the live music world. It's not about having full audience coverage from both "stacks". It's the arrival times from those two stacks being tens of milliseconds different for just about everybody in the audience. Bob McCarthy has a great paper on the problem.

Is this the one titled "Debunking Stereo"? This was linked on the old LAB and was the piece that turned my mind around about stereo.

One bit was about the spacing of speakers. (I paraphrase liberally; it would be better to post the link, but I can't find it anymore and wish I had saved it. I also don't know as much as the author and don't remember exactly how it was put.) In a small space, such as a studio, say the speakers are 8'/2.5 m apart....

(As I am writing this post, I am thinking about how our brain hears stereo, so now is a good time for a refresher on the brain and placement. When we hear something, we hear two sounds, one in each ear, and the brain does some fantastic computation and works out where the sound comes from by comparing the spectral and time differences between the ears. The spectral differences come from what is in the way, or not, of each ear: the head, hair, the back of the ear. The time difference can be very minute, just the difference in the time it takes the sound to reach one side of the head versus the other. (Then there are reflections and the room, but that is a Raimonds type of discussion.) When we listen to two speakers, unless the signal is hard panned, we now hear four signals: both ears, from both speakers. Even if we listen on headphones, where we have isolated it back to two signals, it is very different from what we are used to IRL (in real life); with a pan pot the only difference between the two sides is volume, not tone or time. It does move the sound from side to side, but it is very artificial.)

....even if we are "outside" the speakers, say three feet to the right of the right speaker, we are still close enough to both sources in volume, time, and tone to be able to hear both speakers distinctly and coherently. I am maybe only 6'/2 m farther from one than the other.

But take the example of my sitting in the tenth row, four feet from center. In this case, I was not only much closer to "my" speaker, but also more in its pattern, and more out of the pattern of the other speaker. I have tens of ms of time difference, lots of volume difference, and a huge tone difference.

Even if the speaker system were covering a full 90 degrees and only addressing those "inside" the speakers, there would still be two of the three factors against me.

So who does benefit from stereo? In this case there are front fills, so not those people; from about the fourth row, those in the very center will hear stereo, and as one moves further back in the hall, the stereo portion widens. This is because the further away from the stage you get, the smaller the relative difference in distance from the two speakers to your ears. So in this hall (about 2,500 people, with most seats on the floor, about 45'/15 m wide, and a symphony-style balcony three rows high on the left, right, and back), I'm guessing that most of the people in the back row (75%?) are hearing full stereo, with a wedge that starts in the middle of the fourth row and widens as it goes back. So less than 1/3 of the audience is able to hear the correct balance.
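For what it's worth, that widening wedge is easy to approximate with a little geometry. The Python sketch below estimates, at a few depths into the house, how far off centre you can sit before the arrival-time difference between hangs spaced roughly 45' (about 13.7 m) apart exceeds a chosen window; the 10 ms window, the spacing, and the depths are all assumptions for illustration:

```python
# Rough "stereo wedge" estimate: at each depth into the house, find the widest
# off-centre offset whose left/right arrival-time skew stays under a chosen window.
import math

C = 343.0            # speed of sound, m/s
HALF_SPACING = 6.85  # half of a ~13.7 m (45') left-right hang spacing, metres

def wedge_half_width(depth_m, window_ms=10.0, step=0.1, max_offset=30.0):
    """Largest off-centre offset (m) at this depth whose arrival skew stays under window_ms."""
    best, x = 0.0, 0.0
    while x <= max_offset:
        skew_ms = (math.hypot(HALF_SPACING + x, depth_m)
                   - math.hypot(HALF_SPACING - x, depth_m)) / C * 1000.0
        if skew_ms <= window_ms:
            best = x
        x += step
    return best

for depth in (4, 10, 20, 30):   # metres back from the hangs
    print(f"{depth:2d} m back: wedge roughly +/- {wedge_half_width(depth):.1f} m")
```

With a 10 ms window the wedge comes out to only a couple of metres either side of centre near the front and several times that at the back of the hall, which lines up with the widening-wedge picture described above.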

When is stereo applicable? Picture an outdoor community event, with several people down front in the middle, a few more scattered through the middle, most of the listeners seated around the sound board, and most of the other people in attendance doing something else associated with the event, like looking at the arts and crafts booths. That is an example of when stereo will work.
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

Stereo is basically standard, and stupid, in the live music world. It's not about having full audience coverage from both "stacks". It's the arrival times from those two stacks being tens of milliseconds different for just about everybody in the audience. Bob McCarthy has a great paper on the problem.

Getting back to this: if I am in the tenth row, in a seat on one extreme side or the other, even if both speakers are pointed right at me, and the far one is adjusted so that it is perceived to be the same volume as the close one, since the far one is 40 ms behind, anything panned to that side will not sound "as loud", and indeed things panned to the center will be perceived as lower in volume, just because of the confusion from the other side. (My take on the subject, without knowing what is in Bob's paper.)
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

Stereo is basically standard, and stupid, in the live music world. It's not about having full audience coverage from both "stacks". It's the arrival times from those two stacks being tens of milliseconds different for just about everybody in the audience. Bob McCarthy has a great paper on the problem.

Any chance of a link or the name of the paper??

Thanks Jack and Phil, this is exactly what I was after; time now to go and do some more study.

Cheers
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

Getting back to this: if I am in the tenth row, in a seat on one extreme side or the other, even if both speakers are pointed right at me, and the far one is adjusted so that it is perceived to be the same volume as the close one, since the far one is 40 ms behind, anything panned to that side will not sound "as loud", and indeed things panned to the center will be perceived as lower in volume, just because of the confusion from the other side. (My take on the subject, without knowing what is in Bob's paper.)

If you are sitting in row A on the extreme left side of the audience, how do the left and right sides of an unamplified orchestra sound? I'm thinking the stage left section of the orchestra is going to be down in level and at least 40ms behind. Is this a big issue in acoustic concert halls?

The point is not that every seat in the house sounds exactly the same; it's that everyone in the audience has a good experience.

Mac
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

If you are sitting in row A on the extreme left side of the audience, how do the left and right sides of an unamplified orchestra sound? I'm thinking the stage left section of the orchestra is going to be down in level and at least 40ms behind. Is this a big issue in acoustic concert halls?

The point is not that every seat in the house sounds exactly the same; it's that everyone in the audience has a good experience.

Mac

This is a false equivalency. The very exercise of this board is to present music or events that are amplified through a sound system, which we, the participants here, set up and control. I have to go play basketball, but I have many thoughts on this. More later on why I disagree, and one point on where I think you are right, kind of, if no one else has spoken up by the time I am back.

(edit) Have to? Ha, get to!
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

This is a false equivalency. The very exercise of this board is to present music or events that are amplified through a sound system, which we, the participants here, set up and control.

I think the false equivalency is that everything has to be the same for every seat. That condition is never met unless there is a sound system involved, and I don't think it has to be met if there is one. Even with a sound system, if it is not a big one in a big room, the sound is not going to be the same everywhere, because the live sound off the stage isn't going to be the same everywhere.

For speech reinforcement a better case can be made for the same sound everywhere, from an SPL and intelligibility standpoint, but amplifying a band should be about presenting the best, most exciting show, and I don't think that requires a mono mix. If I want mono I can listen to AM radio. At a live show I want as much excitement as I can get in the presentation, and that often involves moving effects and panned sources, and those both require stereo mixing.

Mac
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

Is this the one titled "Debunking Stereo"? This was linked on the old LAB and was the piece that turned my mind around about stereo.

Oh Boy, you'd better not tell that to the guy who started this thread ;)

https://www.gearslutz.com/board/liv...stop-insanity.html?highlight=mono+pa+insanity

The thread goes on for 11 pages, but you can pretty much get the tone of it from the first one.

GTD
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

The FOH operator is typically entrusted with making it sound "best for the most" in attendance - arguably a pretty malleable concept. There is rarely enough time or money to properly weigh all the options and their compromises.

Where things get really out of whack is when FOH is given the task of making it sound great in the donor seats, or on the tape, etc., at the expense of the larger audience. Better yet: "Make it sound like the record" in a hall with a broadband 6-second RT60.
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

I would like to be able to recall-scope individual aux sends on a per-channel basis - that is, recall an aux for a particular player without changing aux sends for the other players.
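As a sketch of how that could behave, here is a tiny Python illustration of a scoped recall: a stored scene is applied only to selected (channel, aux) sends and everything else is left alone. The scene layout and function names are invented for illustration and are not any console's actual API:

```python
# Recall a stored scene, but scope the recall to specific (channel, aux) sends
# so other players' monitor mixes are untouched.
from typing import Dict, Set, Tuple

Scene = Dict[Tuple[str, str], float]   # (channel, aux) -> send level in dB

def recall_aux_sends(live: Scene, stored: Scene, scope: Set[Tuple[str, str]]) -> Scene:
    """Return the live state with only the scoped sends recalled from the stored scene."""
    updated = dict(live)
    for key in scope:
        if key in stored:
            updated[key] = stored[key]
    return updated

# Example: recall only the kick's send to monitor mix "Aux 3"; the vocal send stays put.
live   = {("Kick", "Aux 3"): -12.0, ("Vox", "Aux 3"): -6.0}
stored = {("Kick", "Aux 3"): -3.0,  ("Vox", "Aux 3"): 0.0}
print(recall_aux_sends(live, stored, {("Kick", "Aux 3")}))
# -> {('Kick', 'Aux 3'): -3.0, ('Vox', 'Aux 3'): -6.0}
```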

I found these two posts particularly interesting, in particular the delay panning and faux double, and then Jack's comments.

I would be interested to know what techniques people use regarding these issues and what they do to combat what Jack is talking about when they mix.

Cheers Dave
 
Re: Extension thread from "An Open List of Console Feature Requests" thread

A feature far simpler than most of those requested, and one I have missed on every console I have used since mixing on a DM1000, is the option of setting a channel strip's compressor to operate post-fader. A simple and powerful tool. The new CLs can achieve this, but only via the inserts and not the normal Dynamics 1 or 2. I don't believe the DiGiCos I have used are capable of it.

Addressing the OP a little more directly: I have used the Dynamic EQ functionality on the DiGiCo boards to do something similar to (but not the same as) point 4, specifically using DynEQ with shelving filters. This is useful, for example, with actors (on radio mics) who get woolly when quiet but sound fine when louder.
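For anyone who wants to experiment with that idea offline, here is a minimal Python sketch of the behaviour: lift the highs while the envelope is low and back the lift off as the level comes up. This is not DiGiCo's DynEQ; the corner frequency, threshold, time constants, and the dry-plus-filtered-copy structure are all assumptions for illustration:

```python
# Dynamic high-shelf-style brightening for a voice that gets woolly when quiet:
# blend in a high-passed copy of the signal, scaled by how far the envelope sits
# below a threshold. Requires numpy and scipy.
import numpy as np
from scipy.signal import butter, lfilter

def dynamic_high_lift(x, fs, corner_hz=3000.0, max_boost_db=6.0,
                      threshold_db=-30.0, attack_ms=10.0, release_ms=200.0):
    """x: mono float array, fs: sample rate in Hz. Returns the processed signal."""
    b, a = butter(1, corner_hz / (fs / 2.0), btype="high")
    highs = lfilter(b, a, x)                  # copy of the signal above the corner

    # One-pole envelope follower on the rectified input.
    atk = np.exp(-1.0 / (attack_ms * 1e-3 * fs))
    rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))
    env = np.empty_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coef = atk if s > level else rel
        level = coef * level + (1.0 - coef) * s
        env[i] = level

    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    # Full lift below the threshold, fading to no lift 12 dB above it.
    amount = np.clip((threshold_db + 12.0 - env_db) / 12.0, 0.0, 1.0)
    gain = 10.0 ** (max_boost_db / 20.0) - 1.0
    return x + gain * amount * highs
```

A real dynamic EQ does this with proper shelving filters and per-band detection; the sketch is just the "envelope controls tone" idea in its simplest form.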