New presonus mixer

Re: New presonus mixer

Actually one of the earliest digital studio delay lines (DeltaLab) used one-bit digital encoding (delta modulation). A simple above/below comparator drives a one-pole integrator up or down until the comparator changes the direction of the fixed slope. If the clock rate, which is the same thing as the data rate, is high enough these could sound very good (say 1 megabit per second). They were notorious for sounding very bad when pushed to too low a clock rate in an attempt to get longer delays (digital memory was very expensive in the old days). A distortion called slope overload occurred at the margins, but this is too much information about an obsolete technology.
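
For anyone curious, here is a minimal sketch of that delta-modulation idea in Python (the step size and clock rate below are made-up numbers for illustration, not anything DeltaLab actually used): the comparator decides above/below, and the fixed-slope integrator chases the input one step per clock.

[code]
import numpy as np

def delta_mod_encode(signal, step):
    """Encode as a 1-bit stream: each bit says 'step up' or 'step down'."""
    bits = np.zeros(len(signal), dtype=np.uint8)
    estimate = 0.0
    for i, x in enumerate(signal):
        bits[i] = 1 if x > estimate else 0       # above/below comparator
        estimate += step if bits[i] else -step   # fixed-slope integrator
    return bits

def delta_mod_decode(bits, step):
    """Rebuild the waveform by running the same integrator from the bit stream."""
    out = np.zeros(len(bits))
    estimate = 0.0
    for i, b in enumerate(bits):
        estimate += step if b else -step
        out[i] = estimate
    return out

# A 1 kHz sine clocked at 1 Mbit/s, as in the example above
clock = 1_000_000
t = np.arange(0, 0.002, 1.0 / clock)
x = np.sin(2 * np.pi * 1000 * t)
y = delta_mod_decode(delta_mod_encode(x, step=0.01), step=0.01)
# If the clock is too slow (or the step too small) the integrator can't keep up
# with the input's slope -- that is the "slope overload" distortion.
[/code]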

JR

The "Defectron". I went with Lexicon PCM after hearing the "affordable" Delta Lab delay.
 
Re: New presonus mixer

Actually one of the earliest digital studio delay lines (DeltaLab) used one-bit digital encoding (delta modulation). A simple above/below comparator drives a one-pole integrator up or down until the comparator changes the direction of the fixed slope. If the clock rate, which is the same thing as the data rate, is high enough these could sound very good (say 1 megabit per second). They were notorious for sounding very bad when pushed to too low a clock rate in an attempt to get longer delays (digital memory was very expensive in the old days). A distortion called slope overload occurred at the margins, but this is too much information about an obsolete technology.

JR
I still have one and it works fine. Effectron. It sounds good but runs hot. Yep, Delta mod.
I can't bring myself to get rid of it. Baby shoes, daddy's first digital delay.
 
Re: New presonus mixer

http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

Going to a 96 kHz sampling rate from a 48 kHz sampling rate only helps if your signals have content above 24 kHz. It is left as an exercise to the reader to determine if audio for humans does indeed have content between 24 kHz and 48 kHz.
Yes, but it also moves the anti-aliasing filters out and away from the near-audible range, where they can have some harmonic impact in the audible range. See Rupert Neve.
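
To put a number on that: a quick sketch (Python; the 30 kHz tone is just a hypothetical example) of how content above Nyquist folds back down into the audible band at 48 kHz, while at 96 kHz it stays ultrasonic where a gentler filter can deal with it.

[code]
def alias_frequency(f_in, fs):
    """Apparent frequency of a tone at f_in after sampling at fs (spectral folding)."""
    f = f_in % fs
    return fs - f if f > fs / 2 else f

print(alias_frequency(30_000, 48_000))   # 18000 -> lands right in the audible band
print(alias_frequency(30_000, 96_000))   # 30000 -> stays ultrasonic, easy to filter out
[/code]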
 
Re: New presonus mixer

http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

Going to a 96 kHz sampling rate from a 48 kHz sampling rate only helps if your signals have content above 24 kHz. It is left as an exercise to the reader to determine if audio for humans does indeed have content between 24 kHz and 48 kHz.

I'll be honest - I'm probably still missing something, but my question is still unanswered. I've made it through most of the article and haven't found my answer. My question is simply: if a wave starts between a sample and the immediately following sample, how is that handled? I can't remember the particular way of avoiding that. I'd like to say it's aliasing-related, but I simply don't know.
 
Re: New presonus mixer

I'll be honest - I'm probably still missing something, but my question is still unanswered. I've made it through most of the article and haven't found my answer. My question is simply: if a wave starts between a sample and the immediately following sample, how is that handled? I can't remember the particular way of avoiding that. I'd like to say it's aliasing-related, but I simply don't know.

I am not sure about this, but I think there is compensation for the leading edge to start the cycle. I know it is clocked, but it was something Sony was teaching us when CD players were just coming out.
 
Re: New presonus mixer

I'll be honest - I'm probably still missing something, but my question is still unanswered. I've made it through most of the article and haven't found my answer. My question is simply: if a wave starts between a sample and the immediately following sample, how is that handled? I can't remember the particular way of avoiding that. I'd like to say it's aliasing-related, but I simply don't know.

The word for today is "oversampling".
 
Re: New presonus mixer

I'll be honest - I'm probably still missing something, but my question is still unanswered. I've made it through most of the article and haven't found my answer. My question is simply: if a wave starts between a sample and the immediately following sample, how is that handled? I can't remember the particular way of avoiding that. I'd like to say it's aliasing-related, but I simply don't know.

It's like looking at a pretty girl through a screen door; you can still see where the curves start and stop.

JR

PS: There are some good videos floating around about sampling and digital conversion.
 
Re: Basic principles of sampling

I'll be honest - I'm probably still missing something, but my question is still unanswered. I've made it through most of the article and haven't found my answer. My question is simply: if a wave starts between a sample and the immediately following sample, how is that handled? I can't remember the particular way of avoiding that. I'd like to say it's aliasing-related, but I simply don't know.

Max,

This is a good question, and if you can get your head around how this works now it will really help you down the road if you decide to pursue any field that involves signals and systems (e.g. physics, electrical engineering, computer science, image processing, machine learning, etc.).

For the below, I'm not going to include any math, so you can either take the following on faith, or we can help you dig a little deeper on the math side.

Joseph Fourier (pronounced roughly "Foor-yay"), while working on the behavior of heat flow in the early 19th century, had a revelation that many mathematical functions could be decomposed into weighted sums of sine functions of different frequencies. Fourier got some of the minor details wrong about where you can apply this, but the general principle was, and is, a revelation.

Today we tweak Fourier's insight to say that certain classes of signal, like an audio waveform, can be decomposed into a weighted sum of sine functions of different frequencies:

[Attached image: decomposition.gif - a waveform shown as amplitude versus time alongside the sine components it decomposes into]

Each of these sine waves that we decompose the signal into has an amplitude, a frequency, and a phase relationship. As long as we retain these details for the sine waves, we can add them back together and reconstruct the original signal. Note that we are not actually doing this decomposition during the analog-to-digital conversion stage; it simply provides a key insight about the nature of the input signal with respect to the frequencies it contains.
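
(A quick numerical aside for anyone who wants to see the decomposition happen: the Python sketch below builds a made-up signal out of two components with arbitrary amplitudes and phases, and the FFT hands back each one's amplitude, frequency, and phase. Cosines are used here, i.e. sines with a 90-degree head start, so the FFT phase reads off directly.)

[code]
import numpy as np

fs = 48_000                       # one second of samples, so bins land on whole Hz
t = np.arange(fs) / fs
# Made-up signal: two components with different amplitudes and phases
x = 1.0 * np.cos(2 * np.pi * 440 * t + 0.3) + 0.5 * np.cos(2 * np.pi * 1000 * t - 1.2)

X = np.fft.rfft(x) / (len(x) / 2)          # scale so peaks read directly as amplitudes
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

for f in (440, 1000):
    k = np.argmin(np.abs(freqs - f))       # bin closest to each component
    print(f, "Hz  amplitude", round(abs(X[k]), 2), " phase", round(np.angle(X[k]), 2), "rad")
# -> 440 Hz  amplitude 1.0  phase 0.3 rad
# -> 1000 Hz  amplitude 0.5  phase -1.2 rad
[/code]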

Given that our input signal can be decomposed into and then reconstructed out of sine waves, this allows a key conceptual leap: namely, that there is a maximum frequency of sine wave needed to decompose the original signal. Now, this frequency would be arbitrarily high for a signal of arbitrarily wide bandwidth, but real signals don't have arbitrary bandwidth! Real objects have a frequency response over a specific range of interest.

We can use this fact and arbitrarily place an upper frequency limit on the signal of interest, using a filter to remove all the frequency components above this point. There's lots of detail to get lost in on how these filters are implemented, but conceptually the important thing is to include frequencies below the desired limit, and exclude those above the desired limit. This border frequency is known as the Nyquist frequency.
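
In case a concrete example of "include below, exclude above" helps, here is a windowed-sinc low-pass sketch in Python. (In a real converter the filter in front of the sampler is analog, or a simple analog filter plus digital decimation in an oversampling design, but the concept is the same; the 20 kHz / 48 kHz numbers are just for illustration.)

[code]
import numpy as np

def lowpass_fir(cutoff_hz, fs, num_taps=101):
    """Windowed-sinc FIR low-pass: pass content below cutoff, reject content above."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = (2 * cutoff_hz / fs) * np.sinc(2 * cutoff_hz / fs * n)  # ideal low-pass, truncated
    h *= np.hamming(num_taps)                                    # window tames the truncation
    return h / h.sum()                                           # unity gain at DC

h = lowpass_fir(20_000, fs=48_000)
# filtered = np.convolve(x, h, mode="same")   # x would be the incoming waveform
[/code]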

Now, returning to our decomposition principle, if we remove all of the frequencies above the Nyquist frequency for the input signal, then the Nyquist frequency represents the highest frequency sine wave left in the signal, if we were to decompose the signal into its constituents.

We already know the frequency of the Nyquist sine wave because we chose it! All that remains is to characterize the amplitude and phase of the signal's component at the Nyquist frequency. It is fairly straightforward to show that we can determine the amplitude and phase of a known frequency by "checking in" on the wave at two defined points during the wave's period. This is why the sampling frequency is double the Nyquist frequency. The "checking in" process is what we call sampling.
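
To make the "checking in" idea concrete, here is a toy Python sketch (the amplitude and phase below are arbitrary numbers I picked): two looks at a wave of known frequency, a quarter period apart, are enough to pin down its amplitude and phase.

[code]
import numpy as np

f = 1000.0                               # the known frequency
A_true, phi_true = 0.7, 1.1              # arbitrary "unknowns" to recover
wave = lambda t: A_true * np.sin(2 * np.pi * f * t + phi_true)

T = 1.0 / f
x0 = wave(0.0)      # = A * sin(phi)
x1 = wave(T / 4)    # = A * cos(phi)

print(np.hypot(x0, x1))     # ~0.7  -> amplitude recovered
print(np.arctan2(x0, x1))   # ~1.1  -> phase recovered
[/code]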

Once we know the phase and amplitude at the Nyquist frequency, we have completely nailed down the signal's behavior at the upper end of the bandwidth. Were we to decompose the signal into sine waves, all of the lower frequency components would have more than two sampling events per period, and therefore also have known amplitude and phase.

That's the operating principle of discrete time sampling of a band limited signal (i.e. one with no frequencies above Nyquist).

---

Now, once you've digested the above, let us approach your original question about the behavior of a signal that "starts" and "stops" in between the sampling intervals of the system. The visual impression of where the wave is doing its thing is of no consequence. After all, the underlying sine waves used when decomposing the signal are mathematically defined for all times, positive and negative.

What matters is that the system has a defined bandwidth and at least two samples per period of the highest frequency component (i.e. the Nyquist frequency). As long as you meet that criterion, you've captured all the information about the system, regardless of the visual appearance of the waveform between sampling intervals. You can completely reconstruct the original signal from this data with no losses, because you have captured all the details of the underlying frequencies that make up the signal.
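
If you want to see the "fill in between the samples" step done explicitly, the textbook recipe is sinc (Whittaker-Shannon) interpolation: each sample contributes one sinc function, and their sum gives the band-limited waveform at every instant, including instants between samples. A rough Python sketch (short record and arbitrary numbers, so the match is close rather than mathematically exact):

[code]
import numpy as np

fs = 8.0                                   # sample rate, arbitrary for this sketch
n = np.arange(32)
x_n = np.sin(2 * np.pi * 1.3 * n / fs)     # a 1.3 Hz tone, safely below Nyquist (4 Hz)

def reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation: a sinc centered on every sample, summed."""
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t - k))

t_between = 10.5 / fs                              # an instant between two samples
print(reconstruct(t_between, x_n, fs))             # reconstructed value
print(np.sin(2 * np.pi * 1.3 * t_between))         # what the original wave does there
[/code]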

You are getting tripped up, like most people trying to figure this out, because you are looking at the waveform behavior as amplitude versus time (i.e. the left hand side of the image above), which of course has values at instants in time between each sampling interval.

What people don't grasp is that if you sum up all the frequencies (i.e. sine waves), their sum gives back the entire waveform versus time (like you're visualizing). Sine waves are defined for all times, and thus "fill in" the spaces in between the sampling periods. The key aspect of the sampling process is to fix the amplitude and phase relationship of each sine wave component so that they sum back together in the same relative relationship that existed in the original signal. Sampling isn't trying to peek at every infinitesimal instant of the signal versus time, but instead to nail down the relationships between the underlying sine components at each frequency.

As a P.S., the process of taking information about the (sine) frequencies in a signal and summing them together to give a reconstruction of the waveform versus time is done using a mathematical operation called a Fourier transform (strictly, its inverse when going from frequency back to time). This is an operation you have surely seen bandied about on the forums, and it is a bit of math that lets you move between the frequency "domain" and the time "domain." It is an integral (i.e. a sum) of the weighted sine frequency components in the signal.

(Math heads don't ping me for butchering the choice of kernel function, see Dan Lavry's paper for a bit more rigor)
 
Re: New presonus mixer

I'll be honest - I'm probably still missing something, but my question is still unanswered. I've made it through most of the article and haven't found my answer. My question is simply: if a wave starts between a sample and the immediately following sample, how is that handled? I can't remember the particular way of avoiding that. I'd like to say it's aliasing-related, but I simply don't know.
Check out this really good video:

http://m.youtube.com/watch?v=cIQ9IXSUzuM
 
Re: New presonus mixer

What people don't grasp is that if you sum up all the frequencies (i.e. sine waves), their sum gives back the entire waveform versus time (like you're visualizing). Sine waves are defined for all times, and thus "fill in" the spaces in between the sampling periods. The key aspect of the sampling process is to fix the amplitude and phase relationship of each sine wave component so that they sum back together in the same relative relationship that existed in the original signal. Sampling isn't trying to peek at every infinitesimal instant of the signal versus time, but instead to nail down the relationships between the underlying sine components at each frequency.

Very nice

I have struggled to find these same words when trying to explain the summation of waves.



 
Re: New presonus mixer

Well, ignoring the math, I got it, Phil! Thank you very much for your post and to Robert for his video link. Glad that I finally have a clue as to what really happens. The paper Phil linked to is also quite helpful. I'm about 3/4s of the way through. Once again, thank you very much.

In retrospect, I'm quite glad the individuals who worked this out did, since it makes much more sense their way than my way, and much more importantly, works their way.
 
Re: New presonus mixer

I don't know what it is about audio that makes people think it is all new and still under development (well, maybe some is), but Joseph Fourier lived from 1768 to 1830, so he was not exactly designing digital converter ICs. He is, however, behind the math that only now, with modern computer power, becomes really effective. Actually it probably wasn't until about the 1960s, when a "fast" Fourier transform (Cooley-Tukey) was developed with shortcuts compatible with how computers crunch numbers, that this really became accessible.

Now I can do an FFT (fast Fourier transform) on a few dollars of silicon DSP in less than a second, but I still can't explain how it works like Phil can. :)

JR
 
Re: New presonus mixer

I don't know what it is about audio that makes people think it is all new and still under development (well, maybe some is), but Joseph Fourier lived from 1768 to 1830, so he was not exactly designing digital converter ICs. He is, however, behind the math that only now, with modern computer power, becomes really effective. Actually it probably wasn't until about the 1960s, when a "fast" Fourier transform (Cooley-Tukey) was developed with shortcuts compatible with how computers crunch numbers, that this really became accessible.

Now I can do an FFT (fast Fourier transform) on a few dollars of silicon DSP in less than a second, but I still can't explain how it works like Phil can. :)

JR

Beat me to it JR ;)

Today, you can find DSP blocks in graphical editors for FPGAs or DSPs that simply show up as an icon with the now-infamous "FFT" symbol on it. You plunk it down and there you go ;)

The part of the FT that really twisted my mind is that you can take a series of discrete sample points from a continuous, time-based signal, store the information, and then re-create the EXACT waveform without any loss at all (a perfect replica of the original). This works up to the Nyquist frequency (half the sample rate).

Pretty amazing piece of math work really.

As JR points out, the FFT (fast FT) was the digital implementation of the calculus-based FT (which I hated passionately in college). Many a college professor has tortured many a student by making them code up the FFT algorithm (including me). My personal opinion is that it is a total waste of time (I thought that when I had to do it too).

Understanding the usage of the tool on the other hand is invaluable if you are an EE.

This is why I am questioning the need for 96 kHz. If latencies can be handled at 48 kHz and phase alignment can be maintained, it is hard to see how it would make any difference (unless, as suggested above, you consider the frequencies > 24 kHz meaningful... which I do not, since most microphones and speakers cannot operate above this frequency anyway).

Great posts guys. Very cool ;)
 
Re: New presonus mixer

This is why I am questioning the need for 96 kHz. If latencies can be handled at 48 kHz and phase alignment can be maintained, it is hard to see how it would make any difference (unless, as suggested above, you consider the frequencies > 24 kHz meaningful... which I do not, since most microphones and speakers cannot operate above this frequency anyway).
One thing that comes to my mind: noise shaping! When sampling at 96 kHz, the noise added to reduce quantization artifacts can be placed in the non-audible frequency band (20-43 kHz). This should improve the (perceived) signal-to-noise ratio.
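
A rough first-order sketch of that idea in Python (made-up step size and test tone; real converters use higher-order shaping and proper dither), just to show where the quantization error ends up:

[code]
import numpy as np

def quantize_flat(x, step):
    """Plain rounding quantizer."""
    return step * np.round(x / step)

def quantize_noise_shaped(x, step):
    """First-order error feedback: each sample's quantization error is subtracted
    from the next input, which pushes the error spectrum toward high frequencies."""
    y = np.zeros_like(x)
    err = 0.0
    for i, s in enumerate(x):
        v = s - err
        y[i] = step * np.round(v / step)
        err = y[i] - v
    return y

fs, step = 96_000, 0.05                   # deliberately coarse step to make it visible
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)

freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
for label, y in (("flat  ", quantize_flat(x, step)),
                 ("shaped", quantize_noise_shaped(x, step))):
    e = np.abs(np.fft.rfft(y - x)) ** 2   # error energy spectrum
    print(label, "audible:", e[freqs < 20_000].sum(), "ultrasonic:", e[freqs >= 20_000].sum())
# The shaped version shifts part of the error energy out of the audible band and into
# the ultrasonic region that a 96 kHz system has room for; higher-order shaping
# pushes much more of it up there.
[/code]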