First post for me

Re: First post for me

I was thinking that if you set a delay based on the distance between one sub driver and the other, there would still be a different distance to the driver in the top. So out front, in the crossover region, you would end up with three sources: the 360°-delayed sub, the undelayed sub, and the top.

It sounds like, in practice, if I were to set the sub array first and then use the total response of the array to set the crossover to the tops, it would work out in the total response.

In practice the sub array behaves like the front facing box.
 
Re: First post for me

Brandon, you use a wavelet transform.

http://en.wikipedia.org/wiki/Discrete_wavelet_transform
http://en.wikipedia.org/wiki/Wavelet

That's what you see in these sorts of images: http://fulcrum-acoustic.com/wordpre...y-for-loudspeaker-transient-response-2005.pdf

P.S. You meant the Hilbert transform, a method for translating between domains. It is commonly used in our industry to calculate what the phase should be from the frequency response and vice versa. Handy for figuring out which effects are minimum phase and which aren't.
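To make that phase-from-magnitude idea concrete, here is a minimal sketch in Python with NumPy/SciPy (the toy filter and all names are my own assumptions, not anyone's measurement tool): for a minimum-phase system, the Hilbert transform of the log-magnitude response recovers the phase.

```python
import numpy as np
from scipy.signal import hilbert

N = 512
h = np.array([1.0, 0.5])        # toy minimum-phase FIR (zero at z = -0.5, inside the unit circle)
H = np.fft.fft(h, N)            # frequency response on N points around the unit circle

log_mag = np.log(np.abs(H))
# minimum phase from magnitude alone, via the (FFT-based) Hilbert transform
phase_min = -np.imag(hilbert(log_mag))

print(np.allclose(phase_min, np.angle(H)))  # True: recovered phase matches the actual phase
```

For a non-minimum-phase system (excess delay, all-pass behavior) the two traces would differ, which is exactly how one tells the minimum-phase part of a response from the rest.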


I appreciate the reply; it's giving me a lot to consider. Let me see if I understand this at all:
A wavelet is combined (convolved) with the unknown signal (e.g. loudspeaker output), then deconstructed (Hilbert Decomposition), and then somehow graphically represented... This is different from the Fourier transform in that a wavelet transform includes temporal resolution? So a Fourier transform does not contain temporal resolution? What is the difference between the phase trace and temporal resolution? Am I way off base here?
 
Re: First post for me

I appreciate the reply; it's giving me a lot to consider. Let me see if I understand this at all:
A wavelet is combined (convolved) with the unknown signal (e.g. loudspeaker output), then deconstructed (Hilbert Decomposition), and then somehow graphically represented... This is different from the Fourier transform in that a wavelet transform includes temporal resolution? So a Fourier transform does not contain temporal resolution? What is the difference between the phase trace and temporal resolution? Am I way off base here?

Hilbert Value Decomposition is very similar to how Time Delay Spectrometry-based systems work, and it really has nothing to do with the ordinary convolution used in Fourier-based analysis. The "Hilbert" in its name simply refers to the quadrature sine wave used in this method to find the instantaneous frequency and envelope curve.

There is a rather inexpensive book by Cohen which is an excellent introduction to 3-D time-frequency representations. It is really quite readable if you just skip over the heavy mathematics.
http://www.amazon.com/Time-Frequenc...5321/ref=sr_1_1?ie=UTF8&qid=1323896062&sr=8-1

As far as applications that show off the 3-D methods go, the only one that comes to mind is Listen Inc.'s SoundMap.

The wavelet TFR is really a poor choice for subwoofers because the resolution at low frequency favors frequency over time.
 
Re: First post for me

I appreciate the reply; it's giving me a lot to consider. Let me see if I understand this at all:
A wavelet is combined (convolved) with the unknown signal (e.g. loudspeaker output), then deconstructed (Hilbert Decomposition), and then somehow graphically represented...

There's rather a lot of tweaky stuff to be discussed in this sentence, but let's start with the basics:
http://en.wikipedia.org/wiki/Convolution and following on with:
http://en.wikipedia.org/wiki/Convolution_theorem

The nature of convolution, and how it relates to the FT through the convolution theorem, gets at the very heart of discrete-time systems. The piecewise multiplication of weighted filter taps in the time domain gives the desired frequency-domain behavior. This is one of the biggest AHA! moments one can have about the entire world of science and engineering, in my opinion. Realizing that pointwise multiplication of different sample values spaced at specific times (i.e. "taps") can then have this effect in the other domain (i.e. frequency response) is at the very core of essentially all we do in audio.
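A quick numerical sketch of the convolution theorem (Python/NumPy, with toy values of my own choosing): convolving a signal with a couple of filter taps in the time domain gives the same result as multiplying their zero-padded spectra and transforming back.

```python
import numpy as np

x = np.array([1., 2., 3., 4.])   # toy signal
h = np.array([0.5, 0.25])        # two filter "taps"

direct = np.convolve(x, h)       # time-domain (linear) convolution, length 4 + 2 - 1 = 5

# same result via the convolution theorem: pointwise-multiply zero-padded spectra
n = len(x) + len(h) - 1
spectral = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

print(np.allclose(direct, spectral))  # True
```

The zero-padding matters: without it the FFT product gives circular convolution, which wraps the tail of the response back onto the start.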

This is different from the Fourier transform in that a wavelet transform includes temporal resolution? So a Fourier transform does not contain temporal resolution? What is the difference between the phase trace and temporal resolution? Am I way off base here?

All transfer function analysis between two domains involves a "kernel" function. In FT analysis, the kernel function is a trigonometric one (i.e. sine or cosine). Sine and cosine are perfectly localized in frequency but infinite in time; that is to say, they have no starting or stopping point when graphed out. Thus you trade certainty in one dimension for uncertainty in the other.

The tradeoff between the two domains is described by the famous Heisenberg uncertainty principle. To know the momentum of a particle precisely is, essentially, to decompose it into a single wave, where the frequency of the sine wave represents the momentum. But if you want to fix the particle precisely in time (i.e. space), you have to represent it as the sum of many different sine waves that combine to form an envelope function representing the location of the particle. In the limit, the number of sine tones is infinite and the momentum information is traded for the precise envelope function. Thus physicists deal with a limitation familiar to anyone who makes acoustic measurements.

The wavelet, by contrast, is a tradeoff kernel function. You trade resolution in one domain to learn more about the other. Heisenberg dictates you cannot know both exactly at the same time, but you can get an idea of both simultaneously. This is the essence of using a wavelet-type kernel rather than the trig kernel function of classic Fourier analysis.

Consider the FT a complete trade of one domain for the other, all time information for all frequency information, while the wavelet tells you something about both domains, up to the Heisenberg limit.
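As a rough illustration of that wavelet tradeoff (Python/NumPy; the 6-cycle complex-exponential-times-Gaussian kernel and all parameter values are assumptions of mine, not any product's algorithm): correlating a signal against frequency-scaled wavelets gives a picture that localizes in both time and frequency at once.

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1/fs)
# test signal: 50 Hz tone in the first half second, 120 Hz in the second
x = np.where(t < 0.5, np.sin(2*np.pi*50*t), np.sin(2*np.pi*120*t))

n_cycles = 6                              # assumed wavelet length in cycles
freqs = np.arange(20, 201, 5.0)
scalogram = np.empty((len(freqs), len(x)))

for i, f in enumerate(freqs):
    sigma = n_cycles / (2*np.pi*f)        # envelope shrinks as frequency rises
    tk = np.arange(-3*sigma, 3*sigma, 1/fs)
    wavelet = np.exp(2j*np.pi*f*tk) * np.exp(-tk**2 / (2*sigma**2))
    wavelet /= np.abs(wavelet).sum()      # normalize so rows are comparable
    scalogram[i] = np.abs(np.convolve(x, wavelet, mode='same'))

# dominant frequency at t = 0.25 s and at t = 0.75 s
print(freqs[scalogram[:, 250].argmax()], freqs[scalogram[:, 750].argmax()])
```

The key point is the `sigma = n_cycles / (2*np.pi*f)` line: the analysis window is long at low frequency (good frequency resolution, poor time resolution) and short at high frequency, which is exactly the tradeoff described above.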
 
Re: First post for me

Really more questions than answers, but I appreciate the time you guys have taken to reply. This kind of thing should come up more often on this forum IMHO. I'll quit hijacking this thread for now.
 
Re: First post for me

The nature of convolution, and how it relates to the FT through the convolution theorem, gets at the very heart of discrete-time systems. The piecewise multiplication of weighted filter taps in the time domain gives the desired frequency-domain behavior. This is one of the biggest AHA! moments one can have about the entire world of science and engineering, in my opinion. Realizing that pointwise multiplication of different sample values spaced at specific times (i.e. "taps") can then have this effect in the other domain (i.e. frequency response) is at the very core of essentially all we do in audio.

Sorry if my reply was confusing. All of these methods of analysis are related at a fundamental level; their algorithmic application to actual signals just differs significantly. Hilbert Decomposition and TDS don't determine the response of the system from H(z) = Y(z)/X(z) the way FFT/convolution-based methods do.
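For contrast, here is a minimal sketch of the FFT-based H(z) = Y(z)/X(z) approach mentioned above (Python/NumPy; the toy impulse response and the use of circular convolution, which keeps the FFT relation exact, are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
h = np.zeros(N)
h[:4] = [1.0, 0.5, 0.25, 0.125]    # toy system impulse response
x = rng.standard_normal(N)         # excitation signal (e.g. noise)

# system output; circular convolution so the bin-by-bin FFT relation holds exactly
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

H = np.fft.fft(y) / np.fft.fft(x)  # H(z) = Y(z)/X(z) evaluated on the unit circle
h_est = np.fft.ifft(H).real        # back to an impulse response

print(np.allclose(h_est, h))       # True: the system is recovered
```

Real dual-channel FFT analyzers add averaging, windowing, and coherence checks on top of this division, but the core H = Y/X step is the same.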

Time-frequency representations are not really measuring the response of a system in their typical application. There is, though, a paper in the AES library showing operations using the Wigner-Ville distribution that replicate TDS, and many more in IEEE Signal Processing proving the equivalence.

More commonly in the audio industry, the TFR is applied to an impulse response measured by another method, or directly to the recorded time waveform. A real-time spectrogram is commonly applied to the recorded time waveform. The wavelet-distribution TFR images Bennett posted were from measured impulse responses.

Sine and cosine are finite in frequency, but infinite in time. That is to say they have no start or stopping point when graphed out. Thus you trade certainty in one dimension for uncertainty in the other.

Just to add to what you were saying:
The uncertainty principle for sinusoidal signal processing is very simply stated:
T = standard deviation of the signal's duration
B = standard deviation of the signal's bandwidth
T·B >= 0.5

Note there is nothing in this statement regarding the resolution of an FFT; this is a common misconception. It just says you can't have a signal which is arbitrarily small in both time and frequency at once. Or, more exactly, you can't have a Fourier pair x(t) and X(ω) whose supports are both arbitrarily small. The Gaussian pulse has optimal support in both time and frequency.
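That statement is easy to check numerically. A sketch (Python/NumPy, parameter values assumed): for a sampled Gaussian pulse, the standard deviation of |g(t)|² in time multiplied by the standard deviation of |G(ω)|² in angular frequency lands right at the 0.5 limit.

```python
import numpy as np

fs = 1000.0
t = np.arange(-4, 4, 1/fs)
sigma = 0.05
g = np.exp(-t**2 / (2*sigma**2))      # Gaussian pulse

# T: std dev of the energy density |g(t)|^2 in time (pulse centered at t = 0)
E = np.sum(g**2)
T = np.sqrt(np.sum(t**2 * g**2) / E)

# B: std dev of |G(w)|^2 in angular frequency (|G| is shift-invariant, so the
# FFT's implicit time origin doesn't matter)
G = np.fft.fft(g)
w = 2*np.pi*np.fft.fftfreq(len(t), 1/fs)
P = np.abs(G)**2
B = np.sqrt(np.sum(w**2 * P) / np.sum(P))

print(T * B)   # ~0.5, the uncertainty limit
```

Swapping the Gaussian for any other pulse shape (rectangular, raised cosine, etc.) pushes the product above 0.5, which is the sense in which the Gaussian is optimal.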

Since our whole idea of time and frequency is based on Fourier mathematics, all TFRs try to achieve this same level of concentration in the t-f plane at the expense of other properties. The properties given up might be really important ones, like positivity, real values, working on multi-component signals (i.e. music), or, as with the WVD, freedom from interference terms on nonlinear instantaneous frequency.

The wavelet, by contrast, is a tradeoff kernel function. You trade resolution in one domain to learn more about the other. Heisenberg dictates you cannot know both at the same time, but you can get an idea of both simultaneously. This is the essence a wavelet-type kernel rather than trig kernel function in classic Fourier analysis.

The reason the STFT has poor T-F performance is not the uncertainty statement above. It is that when you take a very small time window of a signal, you get a whole new uncertainty principle based on the size of that window. The STFT works by slicing the signal into small time segments and taking the FFT of each; i.e., applying the above uncertainty statement to the newly windowed signal.

The wavelet TFR works in a similar way, except that the window is also a function of frequency. So a wavelet TFR is like a sliding STFT where the frequency concentration changes with frequency.
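A minimal STFT sketch (Python/NumPy; the window and hop sizes are assumed values): every time slice gets the FFT of the same fixed-length window, so the frequency resolution is identical at all frequencies, unlike the frequency-dependent window of the wavelet TFR.

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1/fs)
# 50 Hz tone in the first half second, 120 Hz in the second
x = np.where(t < 0.5, np.sin(2*np.pi*50*t), np.sin(2*np.pi*120*t))

win = 128    # fixed window: one T-F tradeoff for the whole picture
hop = 64
frames = np.array([x[i:i+win] * np.hanning(win)
                   for i in range(0, len(x) - win, hop)])
stft = np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per time slice
freqs = np.fft.rfftfreq(win, 1/fs)           # bin spacing fs/win ≈ 7.8 Hz everywhere

# dominant frequency early vs late in the signal (roughly 50 and 120 Hz)
print(freqs[stft[1].argmax()], freqs[stft[-2].argmax()])
```

Note the ≈7.8 Hz bin spacing applies equally at 50 Hz (coarse, relatively speaking) and at 5 kHz (very fine); the wavelet approach instead spends its resolution proportionally at each frequency.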

I put Octave/MATLAB source code for the WVD in the DIY audio section.
http://www.soundforums.net/live/threads/3087-A-couple-of-Time-Frequency-Analysis-MATLAB-programs
 
Re: First post for me

Just to add to what you were saying:
The uncertainty principle for sinusoidal signal processing is very simply stated:
T = standard deviation of the signal's duration
B = standard deviation of the signal's bandwidth
T·B >= 0.5

Note there is nothing in this statement regarding the resolution of an FFT; this is a common misconception. It just says you can't have a signal which is arbitrarily small in both time and frequency at once. Or, more exactly, you can't have a Fourier pair x(t) and X(ω) whose supports are both arbitrarily small. The Gaussian pulse has optimal support in both time and frequency.

Mark,

Excellent point. I should have clarified that this is a signal property, not an FT or FFT property.

The reason the STFT has poor T-F performance is not the uncertainty statement above. It is that when you take a very small time window of a signal, you get a whole new uncertainty principle based on the size of that window. The STFT works by slicing the signal into small time segments and taking the FFT of each; i.e., applying the above uncertainty statement to the newly windowed signal.

Mark, to be clear, I wasn't discussing the STFT, but rather the classic continuous FT. The STFT, while very important in practice, seemed like a conceptual "bridge too far" for the core concept that a given resolution in one domain requires a tradeoff with the other. The mathematical idea of support is also probably too deep for here.

The wavelet TFR works in a similar way, except that the window is also a function of frequency. So a wavelet TFR is like a sliding STFT where the frequency concentration changes with frequency.

Right, generated "automatically" from the mother wavelet.
 