John Curl's Blowtorch preamplifier part II

Status
Not open for further replies.
jacco, I think it is important to realize that I have contacted and spent time with a number of noted audio designers, even before Jan interviewed them. You might say that I also 'interviewed' them in my time, as I was talking with them in a technical way, and the feeling was mutual. It is important to note this when someone thinks that I have 'gone out on a limb' talking about some esoteric aspect of audio design. Perhaps I have gained insight from others as well. People often think that I am an 'island unto myself', but I am generally speaking for a number of designers, and I will not ignore them by taking all the credit for myself.
 
Sounds a lot like the HDCD process. Both the capture side and the replay side are standardized and precisely specified. It seems to work well. Real time too.


Quite a few years before, but the process was also quite a bit different. The idea was to take a data point from the original file, Fourier transform around it (on the order of 2048 points on either side), multiply the function by the inverse of the phase response of anti-aliasing and anti-imaging filters, then back transform it and place the "new" datapoint in a new file. This had to be performed on each point of the original file, so it was NOT something that could be done real-time. However, we didn't have to lose the bottom bit and we didn't have to do multiple digital filters; additionally, the user's CD player didn't need any fancy hardware, just the standard anti-imaging filter.
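A minimal sketch of the per-sample process described above, assuming the combined phase response of the anti-aliasing and anti-imaging filters is already known on the FFT bin frequencies (the `phase` array below is a placeholder for that data, not the real filter response):

```python
import numpy as np

def phase_correct(x, phase, N=2048):
    """Offline all-pass phase correction, one output sample at a time.

    x     : original file (1-D array of samples)
    phase : assumed phase response of the anti-alias/anti-image filters,
            sampled on the 2*N FFT bin frequencies (placeholder data)
    N     : points taken on either side of each sample, per the post
    """
    y = np.empty_like(x, dtype=float)
    # Unit-magnitude correction: rotate each bin by -phase (pure all-pass,
    # so no amplitude is touched and no bottom bit is lost).
    correction = np.exp(-1j * phase)
    xp = np.pad(x, N, mode="constant")        # zero-pad the file edges
    for n in range(len(x)):
        block = xp[n:n + 2 * N]               # N points on either side
        X = np.fft.fft(block)
        corrected = np.fft.ifft(X * correction).real
        y[n] = corrected[N]                   # keep only the centre point
    return y                                  # the "new" file
```

Because the transform is repeated around every single sample, the cost is O(file length x N log N), which is why it was nowhere near real-time.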
 
The reason, if you don't remember, Nelson, that I turned Harold Beverage down was that he answered my question the wrong way. I had looked at the schematic of one of his products a week or so before, and I saw a CERAMIC cap used as a high-frequency rolloff. I DEMONSTRATED the measured distortion of the CERAMIC cap used in that way, with my ST analyzer, to his VP at my office on Telegraph Ave in Berkeley at the time. I sent him back to Harold Beverage with that info. Harold decided that he didn't care about the cap, and he told me so at the restaurant, so I refused to work for him. I still can't work with someone who won't accept new information; it becomes too hard to do it right. Where is the Beverage preamp today? Somebody probably went forward with the project.
 
I never said anything about time smearing. This has nothing to do with this problem. If you don't like the "ringing" of sin(x)/x you are in Wadia land.
I'm not talking about the ringing of sin(x)/x ... I'm talking about the fact that it is not a low-pass filter, and thus it does nothing to remove frequencies in the source material that are too high for the target sample rate. sin(x)/x can only be used alone for upsampling. Downsampling requires that you first pass the source material through a low-pass filter (just as you would if feeding the same signal in the analog domain to an A/D converter), and this is why the results on InfiniteWave vary so much. If you skip this step, you will have aliasing unless the original material was already band-limited correctly.
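A quick numerical illustration of the point (the 30 kHz tone and the 192k-to-48k conversion are my own example figures): decimating without a prior low-pass folds the tone straight into the audio band, and no amount of sinc interpolation afterwards can undo it.

```python
import numpy as np

fs_in, fs_out = 192_000, 48_000
t = np.arange(fs_in) / fs_in            # one second at 192 kHz
x = np.sin(2 * np.pi * 30_000 * t)      # 30 kHz: legal at 192k, not at 48k

# Naive downsample: keep every 4th sample, with NO low-pass first.
y = x[::4]

# Where did the energy land? Inspect the spectrum of the decimated signal.
Y = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), d=1 / fs_out)
peak = freqs[np.argmax(Y)]
# The tone folds about fs_out/2 = 24 kHz and lands at 48 - 30 = 18 kHz:
# an in-band alias, indistinguishable from real 18 kHz programme material.
```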
 
So I'll ask again - what's the matter with a bit of delay? That's phase shift which is linear with frequency. Do you see it as a problem?
If the delay is identical for all frequencies then you just have latency. If the delay varies with frequency then you have phase alterations, and there are certainly those who complain about this - I'm merely saying that you cannot call something perfectly ideal if some folks will complain about the results.
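A small sketch of the distinction, using an idealized D-sample delay (the sample rate and delay value are arbitrary, chosen only for illustration): a pure delay has phase falling linearly with frequency, so its group delay is the same constant at every frequency, i.e. latency only, with no relative phase alteration between frequencies.

```python
import numpy as np

fs = 48_000
D = 5                                         # a pure delay of 5 samples
f = np.fft.rfftfreq(256, d=1 / fs)
# Frequency response of a pure D-sample delay: H(f) = exp(-j*2*pi*f*D/fs)
H = np.exp(-2j * np.pi * f * D / fs)
phase = np.unwrap(np.angle(H))                # linear in f
# Group delay = -dphase/domega; constant iff the delay is frequency-flat.
group_delay = -np.diff(phase) / np.diff(2 * np.pi * f)
```

If `group_delay` were not flat, different frequencies would arrive at different times, which is the "phase alteration" case some listeners complain about.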
 
If anyone would like to see the sort of distortion levels that I DID find with my ST analyzer, just look up my 1978 IEEE paper, 'Omitted Factors in Audio Design', and see:
'SMPTE IM distortion in a low pass filter using a ceramic cap'. The cap was a .01 uF 50 V ceramic, and the source was 600 ohms (supplied by the analyzer). Try it yourself.
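For reference, treating the 600 ohm source and the .01 uF cap as a simple first-order RC low-pass (an assumption on my part; the published circuit may differ) puts the corner right at the top of the audio band:

```python
import math

R = 600.0       # source impedance supplied by the analyzer, ohms
C = 0.01e-6     # the ceramic cap, farads
f_c = 1 / (2 * math.pi * R * C)   # first-order corner frequency, Hz
# f_c comes out near 26.5 kHz, so the cap sees significant signal
# voltage across it throughout the audio band.
```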
 
To state it another way: your source is 192K, and you have no a priori choice but to assume that this was done correctly. You then have, for instance, a 23 kHz tone. You can reproduce the value of that tone at ANY time exactly, within the resolution of your math. (Quantization uncertainty is a separate issue, not fundamental to this problem.) You can then, using standard digital filter theory, specify a low-pass at 22.1 kHz that is -260 dB at 23 kHz and resample at 44.1 kHz. My very first statement was that this is not computationally practical.
This example is beside the point. I have no worries about 23 kHz tones. I'm talking about a 192K recording that has 60 kHz and/or 30 kHz tones in it. Your sin(x)/x convolution will not remove those tones.

I think that part of the issue here is that InfiniteWave treats SRC as a single process from the point of view of the user, whereas it's really a two-step process for the mathematician. It seems that you're talking only about step two, which is indeed an ideally perfect process. But the reason that you see such wildly varying SRC results is that the low pass in step one cannot be specified to please everyone, and thus you end up with plenty of options, none of them perfect.

EDIT - That's 22.1 kHz; yes, you can specify a filter that's flat to 20 kHz and -260 dB at 22.1 kHz. You can reduce the aliasing to some arbitrary level, and whether that level is -260 dB or -200 dB, I don't think it matters.
There is an assumption here that you don't care if your filter stops performing perfectly above 20 kHz, so that it has room for a transition band. Many CDs are mastered with content well above 20 kHz, so maybe they have aliasing, and maybe the mastering engineer used fancy low-pass filters. My point is that there is no perfect filter, because different people have different goals. This is the reason the results on InfiniteWave vary so much. Basically, it's a matter of taste.

You are correct that the math after the low pass can be perfect. But since the entire process must include both low pass and sample rate conversion, there can be no perfect math.
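A sketch of the full two-step process, with the step-one design choices (tap count, window, cutoff) deliberately arbitrary, since those are exactly the 'matter of taste' at issue; only step two is exact.

```python
import numpy as np

def downsample(x, M, taps=511):
    """Two-step SRC sketch: windowed-sinc low-pass, then keep every Mth sample.

    The cutoff, the Blackman window, and the tap count are all arbitrary
    design choices -- another designer would pick differently, and the
    results on InfiniteWave would differ accordingly.
    """
    n = np.arange(taps) - (taps - 1) / 2
    fc = 0.5 / M                          # cutoff as fraction of input rate
    h = 2 * fc * np.sinc(2 * fc * n)      # ideal low-pass, truncated...
    h *= np.blackman(taps)                # ...and shaped by a window (taste)
    h /= h.sum()                          # unity gain at DC
    lp = np.convolve(x, h, mode="same")   # step 1: band-limit (imperfect)
    return lp[::M]                        # step 2: resample (this part is exact)
```

With this in place, a 30 kHz tone in a 192K source is attenuated into the filter's stopband before the 4:1 decimation, instead of aliasing to 18 kHz as it would with step two alone.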
 