I read approximately once a month about some weekend warrior who has just acquired their first DSP (Digital Signal Processor) system controller and seeks to understand how its auto RTA (Real Time Analyzer) function can help them get better performance out of their portable sound system. Invariably the DSP unit is pressed into service for the first gig, a measurement microphone is attached, and some signal (usually pink noise) is run through the rig while the auto RTA is allowed to mess around with the 31-band input graphic EQ.
The resulting EQ curve is often alien and arbitrary, with huge boosts in the low end of the spectrum and alternating smaller boosts and cuts in the mid and high frequencies. The system operator then does what they should have done in the first place and uses their ears, and their brain, to manually set the DSP’s graphic EQ back to some semblance of normalcy. The next show is in a different room, with the measurement mic in a different spot, and the process repeats itself with completely different, and equally wrong, results.
The inability of this process to arrive at a meaningful, consistent, and sonically pleasing result is what sends the enterprising sound tech to the LAB with questions. While they often get plenty of answers, the information they receive rarely tells the whole story, and there is always some small chorus of participants who insist that their Auto RTA helps them all the time, which makes judging the quality of the information they’re getting all the more difficult. Hopefully this article will help to set the record straight.
Auto RTA is not the answer to anyone’s problems; the very concept has serious flaws. Its continued use by engineers worldwide reflects a fundamental misunderstanding of what a measurement system is and how it should be used. Unfortunately there’s nothing simple about proper system measurement, but hopefully with a little education I can help engineers who are having trouble getting good results to understand what kinds of tools and techniques will help them make better sound. With a little research they’ll be well on their way down the road to making real measurements and, most importantly, to knowing the difference between what can be corrected with EQ and what can’t.
The first aspect that must be considered when using any PA is the acoustic environment in which the sound system will be operating. No matter how much processing is available, or how adept the engineer is at using it, there is no electronic way to correct for either improper speaker positioning or for a room that has lousy acoustics from the start. In the real world we all have to deal with some of both, but nasty rooms are the bane of any system engineer’s existence.
It is possible to lessen the effects of a room that is extremely reverberant at, say, 500 Hz by notching that frequency on an equalizer, but that is a double-edged sword. Removing that frequency from the PA will make it less likely to excite the room’s natural reverberance, but 500 Hz is also where a lot of guitar and drum tone lives, so axing it can also make the system sound weak. There may be an increase in sound quality, but the underlying problem hasn’t been addressed, and any additional clarity is simply the result of going from terrible to not quite as bad. Arraying speakers so they keep sound off the walls and ceiling, reducing stage volume, reducing mains volume – these kinds of changes and compromises can make a big difference where all the EQ in the world cannot, and they all address the real issue: too much sound energy in the wrong places.
I find too many bands and sound companies creating problems for themselves through speaker placement because “that’s the way they’ve always done it”, failing to take into account that no sound system is a one-size-fits-all solution. Of course, most of us are working with limited amounts of equipment – if all you’ve got is a pair of speakers on stands your options are limited, but turning your mains a few degrees in one direction or another (or using a block of wood to angle them down, thereby avoiding the ceiling) can still make a big difference!
The point here is that, before you consider making any measurement of any kind, look and listen to your system setup to make sure you’re not fighting unnecessary battles with the wrong tools. I’ve been hired by more than one company to bring myself and my measurement rig to their venue in order to solve some problem or other, only to leave my equipment in its case the entire time because what they really needed was someone with good troubleshooting skills and an in-depth knowledge of how loudspeakers interact with each other and a room.
Now that I’ve laid out why you probably don’t need measurement as much as you thought in the first place, let’s return to the subject at hand: why an RTA won’t give you a useful measurement and why an auto RTA will only turn that bad measurement into bad system equalization decisions.
The first problem with the majority of RTAs is limited resolution. Commonly available 1/3rd octave units are far too coarse to be of much use in the first place – like trying to drive a car while looking through Venetian blinds. The “standard” resolution seems like enough because most of us have 31-band equalizers on our mains and monitors, but useful analysis requires at least 1/12th octave measurement, and preferably 1/24th or 1/48th octave! This may seem like overkill, but the reality is that sound contains much finer detail than most of our equalization or measurement tools can resolve, and the finer the detail one can see, the better one’s decisions will be.
Here’s an example… I took two measurements of a sound system, one at low 1/3rd octave resolution, and one at 1/48th octave resolution. The first graph shows some trends, but doesn’t really help me see anything. In the second graph, it becomes immediately obvious that I have a serious comb-filtering problem that’s affecting my system’s high frequency response and making it sound tinny. Had I tried to change system equalization based on the first graph, I would have adjusted the overall tonality (which may help) but solved none of the real problem. The second graph shows me that the problem I have can’t be solved with equalization anyway, but I may be able to solve it by re-thinking my speaker placement.
The problem in this contrived case happens to be that I have more than one speaker reproducing the same signal covering the same listening position… the speakers interfere with each other and cause comb filtering. The solution is either to replace the two speakers with one speaker that’s louder but offers the same coverage, or to splay the two speakers further apart so their coverage patterns are no longer overlapping.
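For the numerically inclined, the comb-filtering story above can be sketched in a few lines of Python with NumPy. The 0.5 ms path-length difference and the band edges below are illustrative numbers I’ve picked for the sketch, not measurements from any real system. Summing a signal with a delayed copy of itself carves notches at odd multiples of 1/(2 × delay), and a 1/3rd octave average blurs right over them:

```python
import numpy as np

# Two speakers carrying the same signal, with ~0.5 ms of arrival-time
# difference at the listening position (roughly 17 cm of extra path).
delay_s = 0.0005

freqs = np.linspace(20, 20000, 20000)    # analysis frequencies in Hz
# Summing a signal with a delayed copy of itself has the magnitude
# response |1 + e^(-j*2*pi*f*delay)| -- the classic comb filter.
mag = np.abs(1 + np.exp(-2j * np.pi * freqs * delay_s))

# Deep notches land at odd multiples of 1/(2*delay): 1 kHz, 3 kHz, 5 kHz...
notches = [(2 * k + 1) / (2 * delay_s) for k in range(3)]
print("predicted notches (Hz):", [round(f) for f in notches])   # [1000, 3000, 5000]

# A 1/3rd octave analyzer averages each wide band into a single number,
# so the deep notch near 3.15 kHz all but vanishes from the display.
band = (freqs > 2806) & (freqs < 3536)   # ~1/3 octave band around 3.15 kHz
print("band average: %.1f dB" % (20 * np.log10(np.mean(mag[band]))))
```

The same arithmetic also explains why moving one speaker even a few centimeters shifts every notch: the delay term changes, and the whole comb slides with it.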
The resolution problem relates to how the RTA presents its information to the user, and isn’t so much a fundamental flaw with the tool as it is a flaw with the packaging. The more serious flaw with an RTA measurement is that the device doing the measuring has no idea what kind of signal went into the system, or at what time, in order to see the difference between what the PA should be doing and what it is doing.
To understand why this is so critical, let me introduce the concept of a transfer function (often abbreviated TF) analyzer, which is the real measurement tool we’ve been questing after in the first place. The transfer function measures what’s coming out of your mixing console on its way to the PA (let’s call this the source) and then uses a microphone (let’s call this the reference) to measure what that signal looks like after it’s come out of the speakers. The measurement software can then display the difference between these two signals, and the operator can see in an instant how much of the information they put into the speaker system transfers back out. In a perfectly linear system, the display would show only a horizontal line. In the real world it looks more like Figure 3.
Of course, this is a simplification, because the transfer function can be used to see the effects of any system on any signal. By measuring what gets put in and comparing it to what comes out it is possible to precisely determine the actions of equalizers, crossovers, effects processors, microphones… almost anything audio. This tool is in no way limited to analyzing speaker systems, it just happens to be used for that a lot.
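To make the idea concrete, here is a minimal sketch of the transfer function arithmetic in Python with NumPy. The “system” is a toy of my own invention – a 6 dB pad followed by a two-tap averaging filter – and doesn’t represent any real PA or any particular analyzer’s algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(48000)          # one second of noise: the console feed

# A toy stand-in for the PA: a 6 dB pad followed by a two-tap averaging
# filter, which behaves like a gentle low-pass (the highs roll off).
# np.roll makes the filtering circular so the FFT math below is exact.
gain = 10 ** (-6 / 20)
y = gain * 0.5 * (x + np.roll(x, 1))    # what the reference mic "hears"

# The transfer function: spectrum of what came out over what went in.
H = np.fft.rfft(y) / np.fft.rfft(x)
mag_db = 20 * np.log10(np.abs(H) + 1e-12)

print("response at DC:      %.1f dB" % mag_db[0])    # -6.0 dB: the pad alone
print("response at Nyquist: %.1f dB" % mag_db[-1])   # averaging kills the top
```

Real analyzers do a great deal more – averaging many measurements over time and rejecting noisy data – but the core operation really is this division of output spectrum by input spectrum.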
There are a handful of products on the market that have transfer function capability. You may have heard of Smaart, TEF, SIM, EASERA or some other tool that has fallen into such common use as to become an industry buzzword. For the scope of this article, all you need to know is that at their core all these tools do the same thing, albeit in slightly different ways. An experienced engineer seeking to accurately measure a speaker system could use any of these tools and, while some are more suited to live work than others, get usable results.
The most obvious benefit of the transfer function is this ability to show the changes a system makes to a signal, but the advantages don’t stop there. Because the transfer function knows what’s going into the system and what’s coming out, it can measure the difference in time between the two measurements and align them… this time difference can be due to anything from signal processing delay to the distance between the speaker and the reference microphone. Once this delay is compensated for and the two measurements aligned, advanced transfer function measurement tools then apply what are called “time windows” in order to separate the system’s response from the room’s interactions with it.

To go back to my original example of the room that is excessively reverberant at 500 Hz, a properly set up transfer function would see the 500 Hz segment of the signal going to the mains and know, based on this time alignment, when to expect it at the microphone. Once that time has passed, it ignores unrelated 500 Hz content that may arrive at the microphone, “windowing out” a large number of room reflections and much of the reverberance. The measurement engineer can then properly equalize the system without having their measurement affected by the room or other noise sources unrelated to the PA. Once they are satisfied with their system setup, other tactics can be used to deal with the effect of the speaker system on the room. If the only tool available had been an RTA, the engineer would have been unable to distinguish between a system response bump and a reverberant room, or between air handling system noise and what’s really coming out of the speakers.
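The delay-finding and time-windowing steps can be sketched in NumPy as well. Here I invent a mic signal containing a direct arrival at 5 ms plus a quieter “reflection” at 30 ms (circular shifts keep the math exact for this toy case), recover the delay from the impulse response, and window the reflection out:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 48000
x = rng.standard_normal(fs)             # one second of noise from the console

# Invent a mic signal: the direct sound arrives after 5 ms (240 samples),
# plus a quieter room reflection 30 ms (1440 samples) after the source.
# Circular shifts (np.roll) keep the FFT math exact for this toy case.
direct, reflect = 240, 1440
y = np.roll(x, direct) + 0.5 * np.roll(x, reflect)

# Impulse response of the measured transfer function.
h = np.fft.irfft(np.fft.rfft(y) / np.fft.rfft(x), n=fs)

# The biggest spike marks the source-to-mic delay.
found = np.argmax(np.abs(h))
print("measured delay: %.1f ms" % (1000 * found / fs))   # 5.0 ms

# A time window starting at that arrival keeps the speaker's response and
# discards the reflection, which lands well outside the window.
window = np.zeros(fs)
window[found:found + 480] = 1.0          # keep 10 ms after first arrival
H_clean = np.fft.rfft(h * window)
print("windowed response: flat at %.2f" % np.abs(H_clean).mean())   # 1.00
```

In real rooms the impulse response is never two clean spikes, and choosing the window length is a judgment call – too short and you lose low-frequency detail, too long and the room creeps back in – which is exactly why these tools still need a human at the controls.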
There are a number of other technical things you can see with a transfer function measurement, like phase, that are beyond the scope of this article. Suffice it to say that there’s a reason advanced measurement engineers almost exclusively use some form of transfer function, with RTA-style measurements relegated to quick spot checks and analysis of room issues.
It should now be clear why, if the RTA is a tool with limited application, the auto RTA has no value whatsoever. An auto RTA computer doesn’t know the difference between the room and the sound system, doesn’t know what you’re putting into the system, and doesn’t know what the speaker system’s response should look like in the first place. All it knows is that the microphone is seeing less of one frequency and more of another, so it adjusts them back to some arbitrary baseline.
A measured deficiency at, say, 40 Hz could be because the PA simply isn’t capable of full output down there, or because your mid/high speakers are interfering with each other. Boosts could be anything from real system problems, to a room with long reverb decay, to having placed the measurement microphone too close to a wall. Unlike the system user, with a human brain and human ears, the auto RTA sees only numbers. It makes decisions based on far too limited information and often tries to solve un-equalizable problems with the only tool it has: 31-band EQ.
All measurements require human interpretation, because in live sound every part of the sound system, from the microphone through to the speakers, affects the final result. From the genre of music to the size of the venue, from weather conditions to the number of audience members, even a very well designed and exceptionally powerful computer doesn’t have enough pieces of the puzzle to do a good job. Most importantly, no computer hears like we do, so no computer can accurately judge the effects of changes it makes upon the quality of the sound.
If computers could make accurate equalization decisions I’d be out of a lot of work, and if a 31-band EQ were all one needed to correct sound system problems, I’d be living in a gutter. Fortunately, making a sound system sound good requires a diverse skill set that no piece of electronics can master.
Any measurement tool is just one small portion of an experienced system engineer’s arsenal. Only by carefully considering a situation based on their knowledge of acoustics, skills with a good measurement tool, and past experience can any engineer make good decisions. I have been very careful to make it clear throughout this article that all these tools only aid the measurer in deciding how to solve a problem they face. No squiggly line on an analyzer display shows anyone how to optimize a system, but it does give them more information to apply to an issue they can hear. Equalization starts and ends with the tools attached to the sides of your head.
If you’d like to start measuring systems, or are already measuring and would like to learn more, there are a number of excellent resources available to help. I highly suggest any of the SynAudCon ([url]http://www.synaudcon.com/[/url]) classes available throughout the year; they’re worth their weight in gold for any sound person at any level of expertise. The makers of one of the most popular measurement tools on the planet, Smaart ([url]http://www.rationalacoustics.com/[/url]), also have a wealth of information on their website, as well as frequent training courses. As in professional sports, the only way to really get better at measuring is to measure a lot, so break out those microphones and get to work!