John Curl's Blowtorch preamplifier part II

Well, many people care about phase, including me. I really only objected to your claim that down-sampling can be 'perfect' ... if you relax the requirements, that hardly counts.


I didn't say there was energy "between" the bins. I'm fully aware of the tradeoffs of windowing and how it works - I merely stated that windowing will not allow a discontinuity in the frequency domain.

You attached a graph that shows pass-band frequencies as high as 24.74 kHz... are you sampling at 50 kHz? You'll have aliasing with the low-pass response in your graph if you're downsampling to 48 kHz, and certainly to 44.1 kHz.


Sorry, that was a random choice of frequency, merely to illustrate a point. After asking our DSP guys, it turns out polyphase filtering is now used to eliminate redundant computations. The brute-force approach is mathematically equivalent but (much) slower. As a mathematical problem it converges to a unique solution given the information supplied, rather than having built-in error that gets pushed around.
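For anyone who wants to see the equivalence rather than take my word for it, here is a quick Python sketch (scipy assumed; the filter design and signal are my own illustrative picks, and scipy.signal.resample_poly packages the same idea). It runs the brute-force zero-stuff/filter/decimate chain and the polyphase routine on the same signal; the outputs match to rounding, the polyphase version just skips the multiplies by stuffed zeros:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(200)              # arbitrary test signal at 48 kHz
up, down = 147, 160                       # 48 kHz -> 44.1 kHz
h = signal.firwin(2047, 1.0 / max(up, down), window=('kaiser', 12.0))

# Brute force: zero-stuff by 'up', filter at the high rate, keep every 'down'-th sample
zs = np.zeros(len(x) * up)
zs[::up] = x
brute = signal.lfilter(h * up, [1.0], zs)[::down]

# Polyphase: identical arithmetic, minus the multiplies by the stuffed zeros
poly = signal.upfirdn(h * up, x, up=up, down=down)[:len(brute)]

print(np.max(np.abs(brute - poly)))       # ~1e-16, i.e. identical to rounding
```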
 
Your attached graph makes no sense. I don't see that anything is 'gone' in that graph. Again, the pass band is too high for 48 kHz sampling, so I really don't see what you're trying to say here.


There was a 27777 Hz tone at 0 dB, totally unrelated to any sampling frequency. I just grabbed the graphical tool to pick a frequency for the brickwall; SORRY, it looks like it was around 24700 Hz. There is nothing but numerical noise below this frequency. Please read the vertical axis: -156 dB from 0 dB, noise at -180 dB.

I ran that site's frequency sweep but at full scale rather than -6 dB; same story, there is nothing but noise below the cutoff.
 
* EDIT: I recall that it was someone else in this thread who suggested zeroing FFT frequency bins to implement a brick wall.

The sidelobes go down with FFT length. At 192 kHz and a 30-minute piece I admit this is not practical, but other than creating pathological signals to test (huge amounts of out-of-band signal), the errors get very small.
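To put a number on "the sidelobes go down with FFT length", here is a little Python sketch (numpy assumed; the off-bin 1000.5 Hz tone and 21 kHz cutoff are arbitrary picks of mine). It zeroes every bin above the cutoff and measures how badly that perturbs a purely in-band tone through leakage; the error keeps falling as the FFT grows, which is why you need pathological out-of-band content to make the method look bad:

```python
import numpy as np

fs, f0, f_cut = 44100.0, 1000.5, 21000.0      # off-bin tone so it actually leaks

for N in (2**12, 2**16, 2**20):
    x = np.sin(2 * np.pi * f0 * np.arange(N) / fs)
    X = np.fft.rfft(x)
    X[int(f_cut * N / fs):] = 0.0             # "brickwall" by zeroing bins
    err = np.fft.irfft(X, n=N) - x
    # relative RMS error in dB: drops steadily as N grows
    print(N, 20 * np.log10(np.sqrt(np.mean(err**2) / np.mean(x**2))))
```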

Sorry folks for carrying on.
 
What I find most interesting is that many people consider digital to be 'bad' and analog to be 'good.' The FFT is a powerful tool, but not a blunt instrument to be wielded indiscriminately. The kinds of sweeping claims made in this thread about what can be done in the frequency domain are most likely the cause of the harsh sounds that naysayers hear in digital productions. It therefore pays to know the limitations of the FFT or SRC and to apply such operations carefully.

The only reason I got into this repetitious debate is that claims were made about down-sampling being a mathematical ideal that can be implemented 'perfectly' - or, if processing power is lacking, that the ideal can be approached within a certain number of bits. The point I have been trying to make all along is that only up-sampling can be performed without aesthetic choices, and thus up-sampling is the only process of the SRC pair that can be done 'perfectly' (like a D/A to A/D process without the added analog stage).

Down-sampling, by its very nature, requires some amount of arbitrary or aesthetic choice as to the size of the transition band, the performance within the pass band, and the attenuation of the stop band. Not only is there no ideal for that set of choices, but convolution with sin(x)/x does not implement the low pass at all, and polyphase filtering is merely one possible implementation for an aesthetic choice.
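To make the "aesthetic choice" point concrete, here is a hedged Python sketch (scipy assumed; every number in it is my own illustrative pick, not anything from InfiniteWave). Both filters below are perfectly legitimate 96 kHz to 44.1 kHz anti-alias designs; they simply trade pass-band reach and stop-band depth against length, and nothing in the mathematics says which trade is "correct":

```python
import numpy as np
from scipy import signal

fs = 96000.0
nyq_out = 44100.0 / 2                     # 22.05 kHz, the Nyquist of the target rate

def antialias(f_pass, atten_db):
    # Kaiser-window design: the transition band runs from f_pass up to nyq_out
    width = (nyq_out - f_pass) / (fs / 2)
    numtaps, beta = signal.kaiserord(atten_db, width)
    return signal.firwin(numtaps | 1, (f_pass + nyq_out) / 2,
                         window=('kaiser', beta), fs=fs)

h_gentle = antialias(20000.0, 100.0)      # pass band to 20 kHz, -100 dB stop band
h_steep = antialias(21500.0, 160.0)       # pass band to 21.5 kHz, -160 dB stop band
print(len(h_gentle), len(h_steep))        # ~300 vs ~1900 taps: same job, different taste
```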

I'm not trying to say that digital SRC cannot approach or even beat D/A-to-A/D performance - I'd be putting myself out of one facet of my work. I'm just trying to make it clear that there is a very good reason why the results on InfiniteWave vary so much. For some reason I cannot seem to get any traction with that statement - just more jargon. There is no ideal down-sampling - an engineer makes a choice about the desired response and then chooses a given implementation. It's as simple as that, even if many SRC tools do not expose all of the available parameters.
 
I don't think that anyone disagrees that digital is here to stay. Our phones, TV, radio, etc have been converted, as well as dedicated audio reproduction. However, much has been 'fudged' over the years, cramming a potentially annoying medium into where we once had a good deal of listening pleasure. Not everyone is sensitive to digital compromises equally. Some people hate it, others put up with it (I'm one of those) and still others think it is 'the greatest thing since canned milk'. If we keep pushing at the problems and compromises, they will someday be completely 'under the rug' and virtually everybody will be happy.
 
Not only is there no ideal for that set of choices, but convolution with sin(x)/x does not implement the low pass at all, and polyphase filtering is merely one possible implementation for an aesthetic choice.

The impulse response of an ideal lowpass filter has a sin(x)/x shape; the choices are length and windowing. Compare some of the not-so-good results to the best ones: the impulse responses are truncated on the not-so-good ones, i.e. less computational load.
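A quick windowed-sinc sketch in Python makes the "length and windowing" point (numpy/scipy assumed; the 511 taps, cutoff and beta are arbitrary). The same sin(x)/x prototype is either plainly truncated or Kaiser-tapered; the truncated one is stuck at -20-odd dB sidelobes no matter how you stretch it, while the window buys stop-band depth at the price of transition width:

```python
import numpy as np
from scipy import signal

N, fc = 511, 0.25                         # taps, cutoff (1.0 = Nyquist)
n = np.arange(N) - (N - 1) / 2
h_sinc = fc * np.sinc(fc * n)             # truncated ideal sin(x)/x lowpass
h_kais = h_sinc * np.kaiser(N, 12.0)      # same prototype, Kaiser-windowed

for h in (h_sinc, h_kais):
    w, H = signal.freqz(h / np.sum(h), worN=1 << 15)
    stop = w / np.pi > 1.2 * fc           # look past the transition region
    print(20 * np.log10(np.max(np.abs(H[stop]))))   # worst stop-band leakage, dB
```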

The polyphase filter has a continuous integral form that converges to a unique solution. There are no free parameters.

Also, please explain the "aesthetic" limitations of the iZotope 64-bit steep setting. Essentially -180 dB artifacts on all tests, and zero pass-band phase error too. The endorsement of "Nine Inch Nails" notwithstanding. :D
 
We are hung up on "perfect". The perfect answer to the ratio of the circumference of a circle to its diameter is Pi. It doesn't matter which formula you use to compute it; you get the same answer. If you use too few digits you get an error.
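Here's a throwaway Python illustration of that convergence point, using two textbook routes to Pi (the term counts are arbitrary). Machin's formula and the plain Leibniz series are wildly different computations, yet they converge on the same number; the only "error" available is stopping too soon:

```python
from decimal import Decimal, getcontext
getcontext().prec = 30

def atan_inv(x, terms):
    # arctan(1/x) via its power series, truncated after 'terms' terms
    s = Decimal(0)
    for k in range(terms):
        s += Decimal((-1) ** k) / ((2 * k + 1) * Decimal(x) ** (2 * k + 1))
    return s

pi_machin = 4 * (4 * atan_inv(5, 25) - atan_inv(239, 25))   # Machin's formula
pi_leibniz = 4 * atan_inv(1, 1000)                          # same series at x = 1
print(pi_machin)    # good to well past 20 digits with a handful of terms
print(pi_leibniz)   # same limit, but only ~3 digits after 1000 terms
```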

Take the formulation of an FIR digital low-pass filter, plug in that you want -180 dB sidelobes and a 1 Hz transition band, and see how many terms you need. Of course, if you plug in infinite attenuation and 0 Hz you get a singularity; that's what convergence to a limit is about. If you need 500,000 terms, maybe your software barfs. BruteFIR looks interesting, I like the name.
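If anyone wants to check that term count, the standard Kaiser-window estimate does it in two lines of Python (scipy assumed; I picked 44.1 kHz as the sample rate, which the exercise above doesn't actually specify):

```python
from scipy import signal

fs = 44100.0
# -180 dB stop band, 1 Hz transition band, width normalized so 1.0 = Nyquist
numtaps, beta = signal.kaiserord(180.0, 1.0 / (fs / 2))
print(numtaps)   # ~528,000 taps; push either spec toward the limit and it diverges
```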

I just think it's worth noting how easily we drift into "you can't make a perfect copy of a CD, they sound different."
 
One last comment, then I'll quit. It might be fair to consider the pre-ringing an aesthetic issue. I personally don't buy it; I think folks look at pictures and go yuk. I have no interest in descending into that debate; I have seen no compelling evidence that the "pre-ringing" of linear-phase reconstruction is audible. They even said it affects the bass, perfect. :D
 
No, the idea was to standardize the anti-imaging filter, then phase pre-correct for it at the recording/mastering end. It hung together as a system, and given the time (1985), if successful, the required anti-imaging filter could have ended up a de facto standard. All moot, of course - our management killed the project. It did give me a fun thing to do for a month, and I did end up having a delightful day with Dick Greiner (who was brought in to judge the feasibility and practicality - he liked it quite a bit, though was dubious about audibility), so it wasn't a total loss.

I like what I think the concept is - essentially correcting the incoming signal to remove the known errors (those that can be removed) so the recorded information is "correct", free of the errors caused by the capture/encoding process. Conceptually similar to the magnetic recording standards, making the field on the tape as close to the ideal as possible so the reproduce process can be standardized.

It's interesting to note that a few analog recorders included phase correction, and some felt it to be a significantly audible enhancement, even though it was not ideal compensation. Leading to a "catty" comment: how many revered analog recordings made on typical Ampex/Studer/3M etc. recorders, even with phase correctors, have substantial phase errors, producing music "before its time"?

Does anyone have measurements of analog recorders passing the 1-sample pulse we agonize over? Or of any of the other issues (relating bias to sample rate, etc.)? I would like to see the effect of "sample rate conversion" by an analog recorder with a 100 kHz bias on musically generated ultrasonics, especially after they have been low-pass filtered by a 1" condenser microphone.
 
Demian, it is almost impossible to produce music BEFORE ITS TIME without some sort of tapped delay line in the filter system. It is true that PRINT-THROUGH could cause a problem, but it is more like pre or post ECHO, rather than pre-shoot.
MY analog tape recorders could record 10 kHz square waves with remarkable linearity. About the same as SACD, today. They were phase compensated, very much like the later Ampex machines were.
Peter Craven has written an entire paper showing how to eliminate pre-shoot, and it is available on the internet for free. You should look at it, and then we might have something further to discuss.
 
Peter Craven has written an entire paper showing how to eliminate pre-shoot, and it is available on the internet for free. You should look at it, and then we might have something further to discuss.
Is this titled "Controlled pre-response antialias filters for use at 96kHz and 192kHz" from the AES 114th Convention, 2003? I actually found quite a few articles from Peter Craven, a few with shared credit.
 
As far as I know the PDF is not free, but I found an Ayre white paper:
http://www.ayre.com/pdf/Ayre_MP_White_Paper.pdf

I would like you to produce a musical recording that has a 0-to-full-scale-and-back-again signal over three sample periods at 192 kHz. The ringing does not equal pre-echo of the music, only of the impulses and steps in the pictures.
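For the curious, here is a small Python sketch of why that test signal is pathological (numpy/scipy assumed; the 20 kHz cutoff and tap count are arbitrary picks of mine). A full-scale excursion over three sample periods at 192 kHz is almost entirely ultrasonic energy, so any linear-phase brickwall necessarily rings around it and knocks its peak down; real music at 192 kHz simply never hands the filter that:

```python
import numpy as np
from scipy import signal

fs = 192000
x = np.zeros(4096)
x[2047:2050] = [0.0, 1.0, 0.0]            # 0 -> full scale -> 0 in three samples

h = signal.firwin(1023, 20000, fs=fs)     # illustrative linear-phase 20 kHz brickwall
y = np.convolve(x, h, mode='same')        # symmetric pre- and post-ringing appears

print(np.max(np.abs(y)))                  # ~0.2: most of the pulse energy was ultrasonic
print(np.max(np.abs(y[:2040])))           # nonzero pre-ringing ahead of the pulse
```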

To quote Robert Orban http://www.261.gr/Digital Audio by R Orban.html

MYTH 3 – RECONSTRUCTION FILTERS SMEAR AUDIO
A very pervasive myth is that long reconstruction filters smear the transient response of digital audio and that there is therefore an advantage to using a reconstruction filter with a short impulse response, even if this means rolling off frequencies above 10 kHz. Several commercial high-end D-to-A converters operate on exactly this mistaken assumption. This is one area of digital audio where intuition is particularly deceptive.
 
Hi,

Hearing is the key word. Which will put us back into the eternal DBT debate.

Why should it?

On one side we are talking about filters that redistribute, reduce or eliminate time domain distortion effects.

Audibility research in this context is desirable, but not necessary for formulating the design goal (e.g. no pre-ringing, or no pre-ringing and minimal post-ringing, or no ringing whatsoever). It may be necessary to produce such research to the standards required in academia if I want my latest and greatest filter accepted by all as the "gold standard", have it duly worshipped at the AES convention and be hailed "He who conquers all filters", but not if I don't.

Equally, the other side keeps talking about lowering measured THD.

Nothing exists in audibility research to suggest that such levels are necessary, desirable or can even be perceived, but that does not stop the people who, armed with LTspice, oversimplified models and a sketchy knowledge of amplifier design, keep trying to make amplifiers with "parts per billion" distortion levels at 1 kHz, usually using the obvious and subideal methods.

So let's not overdo this. Just defining an engineering goal does not require us to invoke the ABX Mafia, nor are we required to demonstrate that it will make an audible difference; in fact, in most cases such an a priori demonstration is simply impossible, unless you are Baron von Muenchhausen.

Incidentally, IIRC (my memory is shot, especially today), John Crabbe did some fairly extensive research on the audibility of ringing, pre and post, and concluded that with music in his tests there was no audible difference between pre-ringing or not; artificial test signals, however, made it very audible.

It was in an issue of Hi-Fi News and Record Review, back when it was still worth reading...

Ciao T
 