When you are using a dual channel FFT analyzer, SMAART, EASERA, etc... What does the loop-back channel do to improve the measurement over just a single channel measurement?
Mark said:
Let's say I am generating pink noise from my laptop and trying to measure the transfer function of a speaker. What is the advantage of the loop-back cable then? Why don't I just measure the transfer function from the actual output I'm using direct to the input I'm actually using, then use that to correct for any timing, AA filtering, and other problems? I already have the data for the pink noise being generated inside the computer.

Dual channel FFTs make some assumptions regarding the data being fed into them: 1) both the measurement and reference inputs of whatever interface is being used must treat the individual inputs the same in terms of frequency/phase response; 2) the timing between the two channels must be stable, or synchronous. Get those two assumptions right to begin with, and you're starting down the road toward making valid measurements with a dual channel FFT.
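For anyone following along, here's a minimal numpy sketch of the kind of H1 transfer function estimate a dual channel FFT analyzer computes: the averaged cross-spectrum between reference and measurement divided by the reference auto-spectrum. The one-pole "system" and all numbers are made up for illustration; this is not Smaart's or EASERA's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16

# Stimulus: white noise driving a toy "system" (a one-pole lowpass).
x = rng.standard_normal(n)
a = 0.9
b = 1 - a
y = np.zeros(n)
for i in range(1, n):
    y[i] = b * x[i] + a * y[i - 1]   # y[n] = (1-a)*x[n] + a*y[n-1]

def h1_estimate(x, y, nfft=1024):
    """H1 estimate: averaged cross-spectrum over reference auto-spectrum."""
    win = np.hanning(nfft)
    gxx = np.zeros(nfft // 2 + 1)
    gxy = np.zeros(nfft // 2 + 1, dtype=complex)
    for start in range(0, len(x) - nfft, nfft // 2):   # 50% overlap
        X = np.fft.rfft(win * x[start:start + nfft])
        Y = np.fft.rfft(win * y[start:start + nfft])
        gxx += (X * np.conj(X)).real
        gxy += Y * np.conj(X)
    return gxy / gxx

h = h1_estimate(x, y)
# Analytic response of the one-pole filter for comparison:
w = np.linspace(0, np.pi, len(h))
h_true = b / (1 - a * np.exp(-1j * w))
print(np.max(np.abs(np.abs(h[1:-1]) - np.abs(h_true[1:-1]))))  # small
```

The point of the division is that the source spectrum cancels out, which is exactly why the two assumptions above (matched channels, synchronous timing) have to hold: any mismatch between the two input paths ends up baked into the estimate.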
Smaart 7.3 kinda lets you do what I think you're describing with their internal "loopback" feature; however, the clocks *must* be synchronous between output and input, which limits which audio interfaces will work with this feature, and it also lengthens the propagation time, adding approximately 40-50 ms of delay before the signal is eventually routed back as "reference" into a measurement engine.
This also limits you to using ONE interface to handle ALL inputs and outputs (for the sake of clock synchronization).
Nevertheless, Langston raises very valid points about differences between what the dual channel FFT thinks is "real" and what is "reality."
Depending on the OS and sound card, every time you start your measurement software the roundtrip latency may change slightly. This is the most important reason COTS measurement systems are implemented as dual channel FFTs. This value can be as high as 100 ms on low-end hardware, which would be bad for any type of transfer function measurement (even a std dev < 1 ms is not good). For ASIO I have seen histograms which are close enough to real-time (i.e., hardware designed with a reference clock, sync pulse, and trigger pulse) for audio work.
Mark, even with matched pairs of channels, you still need clock synchronization between input and output, and between multiple interfaces, for your dual channel measurements to be valid. Read that sentence carefully again and again and again until it sticks in your brain. Put together any digital audio system using multiple devices, and you MUST have a word clock master and a bunch of word clock slaves in order for things to cooperate with each other in the digital audio realm. With dual channel measurements it's no different.

At present, using only one computer audio interface is a constraint due to differing word clocks between multiple interfaces, which are typically designed for a project studio where only one interface is used at a time. If you use multiple interfaces, one needs to be set as the master clock and the rest slaved to it, a feature that is not available on most common computer audio interfaces. Yeah, maybe you can mess around with aggregate device managers, or there are exceptions: the new Smaart I/O box has an external clock bus that allows multiple interface boxes to be synchronized, same with the SysTune Aubion interface box, and maybe the high-end RME stuff and the Roland Octa-Capture, but everything else is a crap shoot for synchronization.
You are also making a lot of ASSumptions about what goes on behind the scenes; the majority of the delay with an internal loopback signal is due to the ASIO or Core Audio drivers and how the computer audio interface interacts with those drivers, not Smaart itself. I'm not a programmer, nor did I stay at a Holiday Inn Express last night, but you can probably glean some insight in this thread, or maybe converse with an experienced programmer like Adam about this:
new to both USBPre2 and SMAART7
With regards to calibration curves, how would you go about making these without a reference to refer to? In this situation it's kind of a chicken-and-egg scenario. Why make things more complicated than they need to be? You can get a decent interface for cheap without having to go through that nonsense!
Not to get completely off my own topic and into synchronization issues: I went looking for any papers published on this and I can't find any. There are a lot of publications on clock jitter effects, long-haul digital audio, broadcast audio/video sync, etc. Does anyone know of a study on input-output clock synchronization effects on noise measurements? I would be really interested to know, if you take the computer OS timing issues out of the equation, what the effect on the measurements would be.
AES E-Library » Time Delay Spectrometry Processing Using Standard Hardware Platforms
Pretty recent, and it talks a little bit about all the issues I would imagine to be a problem, but it doesn't have any actual data to show how much effect those problems have.
Not really sure how SMAART works. In my "theoretical" system it works like this:
Microphone->Cable->ADC->Buffer->Low level DMA->ASIO Wrapper->Main Memory.
While the reference is already in Main Memory.
No extra moves required. Pretty sure this is how EASERA works in single channel mode.
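Under that architecture, computing a transfer function against the in-memory reference is just a spectral division, since the generated stimulus is known exactly. A toy numpy sketch (the 3-sample "device" and all numbers are fake, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
nfft = 4096

# The stimulus is generated in software, so its spectrum X is known
# exactly; no hardware input needs to be burned capturing it.
x = rng.standard_normal(nfft)
X = np.fft.rfft(x)

# Hypothetical "measured" signal: the stimulus after the device under
# test, here faked as a 3-sample (circular) delay plus 6 dB attenuation.
y = 0.5 * np.roll(x, 3)
Y = np.fft.rfft(y)

# Transfer function against the in-memory reference: one division,
# no second capture path involved.
H = Y / X
# Magnitude is flat at 0.5; the phase slope encodes the 3-sample delay.
print(np.allclose(np.abs(H), 0.5))   # True
```

The catch, as discussed above, is that this only works if the path from generator to converter is deterministic, which is exactly where the OS/driver latency jitter comes in.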
I'm confused. Whether you use a loop-back cable or not, if you want to reference your system to something real it should be calibrated. If you're making relative measurements, aren't calibration curves from the output to the input sufficient?
I think you are missing your own statements. If you are measuring from output to input, then you are making a TWO channel measurement: the input and the output. If all you measure is the output, then there is no easy way to know what the difference between them is.
The loopback provides that reference.
Mark DeArman said:
Why don't I just measure the transfer function from the actual output I'm using direct to the input I'm actually using? Then use that to correct for any timing, AA filtering, and other problems. I already have the data for the pink noise being generated inside the computer.
Above was my original question. Assuming there are no timing issues for the moment, let's say I measure OUT->AMP->IN1 and OUT->AMP->IN2. I now store these curves and use them as the reference.
Ivan, perhaps you could elaborate a little more on your reply, because I am confused about what I am missing.
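The correction being described here can be sketched in a few lines of numpy: measure the OUT->IN path once, store that curve, and divide it out of later measurements. The "interface coloration" and the fake 6 dB pad DUT are invented numbers for the example, not real hardware data.

```python
import numpy as np

rng = np.random.default_rng(2)
nfft = 4096
w = np.linspace(0, np.pi, nfft // 2 + 1)

# Hypothetical interface coloration that every OUT->IN pass picks up:
# a little gain error plus a 5-sample converter delay (made-up numbers).
interface = 0.9 * np.exp(-1j * w * 5)

x = rng.standard_normal(nfft)
X = np.fft.rfft(x)                      # known: generated in memory

# Calibration pass: OUT -> IN with nothing in between.
cal_capture = X * interface             # what the ADC hands back
C = cal_capture / X                     # stored calibration curve

# Measurement pass: OUT -> DUT -> IN.  Fake DUT: a 6 dB pad.
dut_true = 0.5
meas_capture = X * interface * dut_true
H = (meas_capture / X) / C              # correct with the stored curve
print(np.allclose(H, dut_true))         # True: interface divided out
```

This works on paper because the interface term cancels exactly; the thread's objection is that in practice the OUT->IN timing is not repeatable run to run, so the stored curve's phase goes stale.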
Perhaps I'm missing something, but my understanding is as follows:
A single channel FFT does not in general measure a transfer function.
However, a single channel FFT using a pink noise source (flat frequency response) as the reference will more or less measure the transfer function.
I'm not sure if what you are proposing is a dual channel FFT where one channel comes internally from the computer, or if you are proposing a single channel FFT where you correct for the shape of the original signal(?).
Yes I would totally agree with that.
Off topic, but pink noise doesn't have a flat spectrum. Normally an RTA has an internal reference filter for pink noise that makes the spectrum flat after filtering it.
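To make that concrete: pink noise has power falling off at roughly 3 dB per octave, i.e. equal energy per octave, not a flat spectrum, which is why the RTA's fractional-octave banding (or an internal pink-weighting filter) displays it as flat. A quick numpy demonstration, synthesizing pink noise by shaping white noise to a 1/f power spectrum:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1 << 18

# Shape white noise to a 1/sqrt(f) amplitude spectrum (1/f power):
# equal energy per octave, NOT a flat narrowband spectrum.
W = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, d=1.0)          # normalized frequency, 0..0.5
shape = np.zeros_like(f)
shape[1:] = 1.0 / np.sqrt(f[1:])
pink = np.fft.irfft(W * shape)

P = np.abs(np.fft.rfft(pink)) ** 2

def band_energy(lo, hi):
    return P[(f >= lo) & (f < hi)].sum()

# Two adjacent octaves (arbitrary normalized-frequency bands):
e1 = band_energy(0.01, 0.02)
e2 = band_energy(0.02, 0.04)
print(e1 / e2)   # close to 1: equal energy per octave
```

Per narrowband FFT bin, though, the higher octave's energy is spread over twice as many bins, so an unweighted FFT display of pink noise slopes downward.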
Yes, I think a lot of the terminology is the communication issue on my part. The hypothetical system I'm talking about has an "internal" (in-memory) loopback, reference, calibration curve, whatever you want to call it. Like I said, I am pretty sure this is the way it is implemented in EASERA, SoundCheck, CLIO, and a number of others. My question was whether there were advantages/disadvantages of the dual-channel (burn a hardware input) method over the latter.
I have gleaned a few from this discussion so far:
1) Differential measurements, Point A to Point B.
2) Music source or any source external to the measurement system.
3) Timing Issues in Windows or other OS.
4) Possibly poor COTS DAQ/soundcard hardware designs. I would still like to see some hard data on this one for a typical two channel (single stereo ADC) setup.
Thanks,
Mark DeArman
Why don't I just measure the transfer function from the actual output I'm using direct to the input I'm actually using? Then use that to correct for any timing, AA filtering, and other problems.
Strange; the extra propagation delay must be a behavior inherent to Smaart. In fact, there should be more usable bandwidth available, since the reference is already stored in memory. I don't see how a system designed to work as I described would suffer from extra propagation delay or clock synchronization issues after the calibration measurements had been made.
The added delay isn't propagation delay, but rather the lack of propagation delay for the reference signal. With a software loop back, the reference signal has virtually no propagation.
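In other words, the reference arrives "instantly" while the measurement signal has to travel out through the converters and back, so the analyzer sees a larger offset to find. The delay finder itself is typically a cross-correlation peak search; a small numpy sketch (the 2048-sample buffering delay is an invented stand-in for the real driver/converter path):

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 48000
n = 1 << 15
ref = rng.standard_normal(n)            # software loopback: zero delay

# Hypothetical measurement path: 2048 samples of buffering/propagation
# delay, the kind of offset the analyzer's delay finder must locate.
delay = 2048
meas = np.concatenate([np.zeros(delay), ref])[:n]

# Cross-correlate via FFT and take the peak as the delay estimate.
nfft = 2 * n
R = np.fft.rfft(ref, nfft)
M = np.fft.rfft(meas, nfft)
xcorr = np.fft.irfft(M * np.conj(R), nfft)
est = int(np.argmax(xcorr))
print(est, est / fs * 1000.0)   # 2048 samples, ~42.7 ms at 48 kHz
```

With a hardware loopback the reference goes through the same converter path, so most of that offset cancels; with a software reference it all lands in the delay finder.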
Do you have any comments or additions to my list of applications for the actual dual channel measurement over the in-memory based system?
The advantage would be if you were using some source other than the program's generator to measure the difference.

Thanks Adam,
Do you have any comments or additions to my list of applications for the actual dual channel measurement over the in-memory based system?
Mark DeArman
My question was whether there were advantages/disadvantages of the dual-channel (burn a hardware input) method over the latter (the "internal" (in-memory) loopback, reference, calibration curve, whatever you want to call it).
I have gleaned a few from this discussion so far:
1) Differential measurements, Point A to Point B.
2) Music source or any source external to the measurement system.
3) Timing Issues in Windows or other OS. (Of the random variety.)
4) Possibly poor COTS DAQ/soundcard hardware designs. I would still like to see some hard data on this one for a typical two channel (single stereo ADC) setup. (Of the random variety.)