Re: First post for me
The nature of convolution, and how it relates to the FT through the convolution theorem, gets at the very heart of discrete-time systems. Multiplying delayed samples by the filter tap weights and summing them in the time domain gives the desired frequency-domain behavior. This is one of the biggest AHA! moments one can have about the entire world of science and engineering, in my opinion. Realizing that weighting sample values spaced at specific times (i.e. "taps") can then have this effect in the other domain (i.e. the frequency response) is at the very core of essentially all we do in audio.
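To make that concrete, here is a minimal Octave/MATLAB sketch (the tap values and test signal are arbitrary illustrations, not from any real design): convolving with the taps in the time domain gives the same result as multiplying the spectra and transforming back.

% Convolution theorem demo: time-domain convolution with the filter taps
% equals pointwise multiplication of spectra in the frequency domain.
h = [0.25 0.5 0.25];            % example FIR taps (a crude low-pass)
x = randn(1, 64);               % arbitrary test signal

y_time = conv(x, h);            % time-domain convolution, length 64 + 3 - 1

N = length(x) + length(h) - 1;  % zero-pad both to the full linear-convolution length
y_freq = ifft(fft(x, N) .* fft(h, N));

max(abs(y_time - real(y_freq))) % on the order of 1e-15: identical up to round-off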
Sorry if my reply was confusing. All of these methods of analysis are related at a fundamental level. Their algorithmic application to actual signals just differs significantly. Hilbert Decomposition and TDS don't determine the response of the system from H(z) = Y(z)/X(z) the way FFT/Convolution based methods do.
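For contrast, here is a hedged sketch of the FFT-based H(z) = Y(z)/X(z) idea, with a known FIR filter standing in for the device under test (the excitation and the filter are made-up examples):

% FFT-based transfer-function estimate: excite the system, record the output,
% and divide spectra bin by bin. The "system" here is just a known FIR filter.
h_true = [1 0.6 -0.3 0.1];       % pretend this is the unknown system
x = randn(1, 4096);              % broadband excitation keeps X(k) away from zero
y = conv(x, h_true);             % what the measurement captures

N = length(y);                   % >= length(x) + length(h_true) - 1, so Y = X.*H exactly
H = fft(y) ./ fft(x, N);         % H at each FFT bin
h_est = real(ifft(H));           % impulse response estimate

h_est(1:4)                       % approximately [1 0.6 -0.3 0.1]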
Time-frequency representations (TFRs) are not really measuring the response of a system in their typical application, though there is an AES paper showing how operations on the Wigner-Ville distribution can replicate TDS, and many more in the IEEE signal processing literature proving the equivalence.
More commonly in the audio industry, the TFR is applied either to an impulse response measured by some other method or directly to the recorded time waveform; a real-time spectrogram, for example, operates on the recorded waveform itself. The wavelet-distribution TFR images Bennett posted were computed from measured impulse responses.
Sine and cosine are finite in frequency but infinite in time; that is to say, they have no starting or stopping point when graphed out. Thus you trade certainty in one dimension for uncertainty in the other.
Just to add to what you were saying:
The uncertainty principle for Fourier (sinusoidal) signal processing is very simply stated:
T = duration of the signal (std dev of its energy density in time)
B = bandwidth of the signal (std dev of its energy density in frequency)
TB >= 0.5
Note that there is nothing in this statement about the resolution of an FFT; that is a common misconception. It just says you can't have a signal which is arbitrarily concentrated in both time and frequency at once. Or more exactly, you can't have a Fourier pair x(t) and X(w) whose supports are both arbitrarily small. The Gaussian pulse has optimal concentration in both time and frequency, meeting the bound with equality.
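Here is a quick numerical check of that bound for a Gaussian pulse, with T and B taken as the std devs of the normalized energy densities in time and in angular frequency (the sample rate and the width sigma are arbitrary choices):

% Numerical check of TB >= 0.5 for a Gaussian pulse. B is measured in
% angular frequency (rad/s), which is the convention that gives the 0.5 bound.
fs = 1000; dt = 1/fs; N = 4000;
t  = ((0:N-1) - N/2) * dt;                     % time axis centered on zero
sigma = 0.05;
x  = exp(-t.^2 / (2*sigma^2));                 % Gaussian pulse

p_t = abs(x).^2;  p_t = p_t / (sum(p_t)*dt);   % normalized energy density in time
T   = sqrt(sum(t.^2 .* p_t) * dt);             % std dev of duration (pulse centered at 0)

X   = fftshift(fft(x)) * dt;                   % approximate continuous-time spectrum
w   = 2*pi * ((0:N-1) - N/2) / (N*dt);         % angular-frequency axis (rad/s)
dw  = w(2) - w(1);
p_w = abs(X).^2;  p_w = p_w / (sum(p_w)*dw);   % normalized energy density in frequency
B   = sqrt(sum(w.^2 .* p_w) * dw);             % std dev of bandwidth

T * B                                          % ~0.5, i.e. the Gaussian meets the bound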
Since our whole idea of time and frequency is based on Fourier mathematics, all TFRs try to achieve this same level of concentration in the t-f plane at the expense of other properties. Those other properties might be really important: staying positive, not being complex valued, handling multi-component signals (i.e. music), or, in the case of the WVD, not producing interference terms from nonlinear instantaneous frequency.
The wavelet, by contrast, is a trade-off kernel function. You trade resolution in one domain to learn more about the other. Heisenberg dictates you cannot know both exactly at the same time, but you can get an idea of both simultaneously. That is the essence of using a wavelet-type kernel rather than the trig kernel of classic Fourier analysis.
The reason the STFT has poor T-F performance is not the uncertainty statement above. It is that when you cut a very small time window out of a signal, you create a whole new uncertainty principle based on the size of that window. The STFT works by slicing the signal into short time segments and taking the FFT of each; i.e., the uncertainty statement above now applies to the windowed segment rather than the original signal.
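As a rough illustration of that slicing (the test sweep, window length, and hop size are all made-up values; the Hann window is built by hand so no toolbox is needed):

% Minimal STFT sketch: slice the signal into short windowed segments and FFT
% each one. The window length Nwin sets the time/frequency trade-off.
fs   = 8000;
t    = (0:fs-1)/fs;
x    = sin(2*pi*(200*t + 900*t.^2));              % sweep, ~200 Hz up to ~2 kHz
Nwin = 256;  hop = 128;
win  = 0.5 - 0.5*cos(2*pi*(0:Nwin-1)/(Nwin-1));   % Hann window

nframes = floor((length(x) - Nwin)/hop) + 1;
S = zeros(Nwin/2 + 1, nframes);
for m = 1:nframes
  seg     = x((m-1)*hop + (1:Nwin)) .* win;       % windowed slice
  spec    = fft(seg);
  S(:, m) = abs(spec(1:Nwin/2 + 1)).';            % keep non-negative frequencies
end

f_axis = (0:Nwin/2) * fs/Nwin;                    % Hz per bin
t_axis = ((0:nframes-1)*hop + Nwin/2) / fs;       % frame centers in seconds
imagesc(t_axis, f_axis, 20*log10(S + eps)); axis xy;  % spectrogram in dB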
The wavelet TFR works in a similar way, except that the window is also a function of frequency. So a wavelet TFR is like a sliding STFT in which the frequency concentration changes with the frequency being analyzed.
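And a correspondingly rough sketch of the wavelet/constant-Q idea, reusing the same sweep; the Morlet-style kernel and the Q value are assumptions for illustration only:

% Wavelet-style analysis: same spirit as the STFT above, but the effective
% window shrinks as the analysis frequency rises (constant Q).
fs = 8000;  t = (0:fs-1)/fs;
x  = sin(2*pi*(200*t + 900*t.^2));          % same sweep as before
freqs = 100 * 2.^(0:1/6:4);                 % 100 Hz to 1.6 kHz, 6 bins per octave
Q  = 8;                                     % cycles per envelope width (assumed)
W  = zeros(length(freqs), length(x));
for k = 1:length(freqs)
  f0     = freqs(k);
  s      = Q / (2*pi*f0);                              % envelope std dev ~ 1/f0
  tk     = -4*s : 1/fs : 4*s;                          % kernel support
  psi    = exp(-tk.^2/(2*s^2)) .* exp(1i*2*pi*f0*tk);  % Morlet-style kernel
  psi    = psi / sum(abs(psi));                        % crude normalization
  W(k,:) = abs(conv(x, psi, 'same'));                  % magnitude at this frequency
end
imagesc((0:length(x)-1)/fs, 1:length(freqs), 20*log10(W + eps)); axis xy;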
I put Octave/MATLAB source code for the WVD in the DIY audio section:
http://www.soundforums.net/live/threads/3087-A-couple-of-Time-Frequency-Analysis-MATLAB-programs