What do you think makes NOS sound different?

ADC devices can easily provide accurate samples at a required sample rate, e.g. 192kHz.

What does that mean? It means that possibly a main reason for upsampling is noise shaping. There is a lot of ultrasonic noise in nature unrelated to music, for example from fluorescent or LED lights. This is my answer.

I think it's not. Building a "brick wall" low-pass filter for a NOS ADC at audio frequencies is hard; very hard indeed. It would need very steep stopband attenuation along with very little passband attenuation and phase shift.

That's how I see it. If given the task of building a low pass filter for an OS ADC with stopband starting at 50kHz and achieving a stopband attenuation of 60dB at 500kHz, versus designing a low pass filter for a NOS ADC with stopband starting at 18kHz and stopband attenuation of 60dB at 22kHz, I would want to take the first option ;)
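The contrast can be put in rough numbers. A minimal sketch, using the textbook approximation that an nth-order Butterworth filter rolls off at about 20·n dB per decade (a back-of-envelope estimate, not a design procedure):

```python
import math

def butterworth_order(f_pass_hz, f_stop_hz, atten_db):
    """Rough minimum Butterworth order whose asymptotic roll-off
    (~20*n dB per decade past the corner at f_pass_hz) reaches
    atten_db of attenuation at f_stop_hz."""
    decades = math.log10(f_stop_hz / f_pass_hz)
    return math.ceil(atten_db / (20.0 * decades))

# OS case: corner at 50 kHz, 60 dB down by 500 kHz -> order 3
print(butterworth_order(50e3, 500e3, 60))
# NOS case: corner at 18 kHz, 60 dB down by 22 kHz -> order 35
print(butterworth_order(18e3, 22e3, 60))
```

A whole decade of transition band needs only a 3rd-order filter, while the narrow NOS transition band demands roughly a 35th-order one, which is impractical in analog hardware.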
 
AFAIK that just boils down to how many samples are included in the sinc (or whatever) reconstruction algorithm. The more samples you take into account, the more delay a filter inserted directly into the replay chain will introduce. Every sample that lies ahead of the sample currently being played increases the delay: in a way, the reconstruction algorithm wants to "look into the future", which none of us can. So it delays all incoming samples, relative to the outgoing samples, by the number of samples it wants to look ahead. If you inserted the PGGB algorithm into your realtime playback chain, there would be a delay on the order of a minute; that is, if your CPU could even do the calculations in realtime.

Yes, it essentially boils down to how many coefficients are in the FIR filter's kernel, and the rate at which the samples are being input to the filter. The author of the PGGB software currently views his creation as an "offline" resampler because of the latency incurred. Even a thirty-second delay is impractical for track changes, in my opinion, and fast forwarding/rewinding within a track would seem intolerable. The delay can be greatly reduced, however, if the entire source track could be very rapidly read into the PC just prior to playback.
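For a symmetric (linear-phase) FIR filter the relationship is simple: the group delay is (N − 1)/2 samples at the filter's running rate. A quick sketch, with hypothetical tap counts (PGGB's actual figures are not public):

```python
def linear_phase_delay_s(num_taps, fs_hz):
    """Group delay of a symmetric (linear-phase) FIR filter:
    (N - 1) / 2 samples, converted to seconds at sample rate fs_hz."""
    return (num_taps - 1) / 2.0 / fs_hz

# Hypothetical: a million-tap kernel running at 44.1 kHz input rate
# already implies more than 11 seconds of lookahead delay.
print(linear_phase_delay_s(1_000_000, 44_100))
```

The delay scales linearly with the tap count, so very long reconstruction kernels inevitably push latency from milliseconds into tens of seconds.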

You still need some kind of analog reconstruction filter to make the analog waveform look "not like a staircase". Which it probably wasn't when recorded. So a staircase form of reproduction is incorrect - unless you recorded one, which is very unlikely. The oversampling aspect simply makes the demand on the analog reconstruction filter way less stringent because with proper oversampling there appears to be a big gap between the signal and any aliasing images.

The DAC chip outputs a discrete-time (stair-stepped), yet still analog, signal. The stair-step is simply a consequence of the DAC's NRZ or sample-and-hold operation, also known as zero-order hold (ZOH). Reconstruction filtering is what converts this output back into the original continuous-time signal. It should be noted that OS interpolation-filtering still leaves the DAC's output as discrete-time, no matter how high the interpolation rate; the stair-steps simply become smaller. An analog filter is required to truly convert the discrete signal to continuous-time.

I thought that, at least in theory, OS DACs would be less prone to clock jitter because of the averaging effect of having more samples available per unit of time? Perhaps someone with more math insight could chime in, but this one feels logical to me.

I'm also uncertain.

Regarding the changes in tonality: These are so small that even the tiniest amount of EQing will totally swamp them and then there's the question of whichever is "correct" in the first place.

Perhaps, but the sinc envelope of an uncorrected ZOH causes treble droop reaching from 20kHz down to 5kHz. Two whole octaves.
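The droop follows directly from the standard ZOH amplitude response |sin(πf/fs)/(πf/fs)|. A quick check at a 44.1 kHz output rate (the numbers depend only on the ratio f/fs):

```python
import math

def zoh_droop_db(f_hz, fs_hz):
    """Zero-order-hold amplitude response sin(pi*f/fs)/(pi*f/fs), in dB."""
    x = math.pi * f_hz / fs_hz
    return 20.0 * math.log10(math.sin(x) / x)

for f in (5e3, 10e3, 20e3):
    print(f"{f/1e3:.0f} kHz: {zoh_droop_db(f, 44_100):.2f} dB")
```

At 44.1 kHz this gives roughly −0.18 dB at 5 kHz, −0.75 dB at 10 kHz, and −3.17 dB at 20 kHz, consistent with a gentle but measurable shelf spanning those two octaves.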

My theory: a lot of NOS DAC proponents have based their choice of which is better (NOS/OS) on these small changes in tonality and went off in a certain direction, assuming that the NOS sound was the correct sound, without ever researching whether some other component in their chain might be the much more easily fixed culprit. Just an assumption/theory...

You're free to have a theory. By the way, your statement above could easily be reversed, to instead suspect the same of those who assume that their DAC is not a problem, yes?

As a side note: THD/IMD in amplifiers AND speakers will change the tonality by a good amount. And these distortions might add/cancel with THD/IMD in the signal presented to the amp/speaker depending on their magnitude and phase. But this is a total guessing game without measurements.

Who said anything about not utilizing measurements? Or, that those who prefer the sound of NOS have not researched theory? Why are you assuming so much that is incorrect?
 
It's small changes in tonality (and spatial resolution) that let me decide where the flaws were in the latest echo listening test. I've already done a first round of retesting, with more to come, by the way; so let's see whether my listening results were just a lucky case of correct guessing...

Did I understand correctly that you want to do three tests, so two more rounds to come? As long as neither you nor I know the outcome, we can change it to any number you like, we just have to define it before comparing the results to the correct values.

Edit: I hadn't seen your e-mails yet because they were automatically moved to the spam folder...
 
Images are products of the conversion; they are modulation products, but not intermodulation, as they lie in a separate band. It is a different situation from aliasing, and a good recording is free of aliasing energy anyway. Intermodulation can only happen if the images are distorted somewhere in the audio chain and the products fold down into the audio band.

Fair enough. Call them whatever you like, but they are sum and difference products of multiples of the sample rate and the audio signal.
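For a single tone those products are easy to enumerate: sampling a tone at f0 produces images at k·fs ± f0. A small sketch listing them up to some frequency limit:

```python
def nyquist_images(f0_hz, fs_hz, f_max_hz):
    """List the image frequencies k*fs +/- f0 produced by sampling a
    tone at f0_hz with sample rate fs_hz, up to f_max_hz."""
    images = []
    k = 1
    while k * fs_hz - f0_hz <= f_max_hz:
        images.append(k * fs_hz - f0_hz)      # difference product
        if k * fs_hz + f0_hz <= f_max_hz:
            images.append(k * fs_hz + f0_hz)  # sum product
        k += 1
    return sorted(images)

# A 1 kHz tone at fs = 44.1 kHz: images at 43.1, 45.1, 87.2, 89.2 kHz
print(nyquist_images(1_000, 44_100, 100_000))
```

All of these lie well above the audio band, which is why they only become audible if downstream nonlinearity folds their distortion products back down.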

Elaborate discussion of pulse behaviour is irrelevant to me, as music is band-limited during A/D conversion.

I agree entirely, but I don't see how that matches with what you wrote here:

The presence of Nyquist images is required to reproduce a fast pulse response in a lab; with limited bandwidth there is inevitable ringing. These images increase the sensitivity of our receptors (which are not analog, despite popular belief), and reconstruction in our brain results in more natural, better-articulated sound.
 
MarcelvdG,
It is critical to distinguish between modulation and intermodulation. The first is reversible; the second is not. But it is fine if you have accepted a correction.

It seems there are two ways of reconstructing the original sound. One is as seen in a lab: it doesn't require images to be present, but full reconstruction is not possible in practice.

The second is how we process sound. Your assumption is that it is done the same way (i.e. analog receptors with an nth-order analog or digital filter). We already know that our receptors are digital, so my point is that we don't really need fully reconstructed analogue waveforms; our brain can use a raw discrete (unreconstructed) output instead, but in that form a full dynamic reconstruction requires the information contained in the images.
 

TNT,

Actually, we made no assumptions that one approach was superior to the other. In fact, I mentioned near the beginning of the thread that I hear each as having its subjective advantages over the other. We simply were looking for what was making typical OS and NOS DACs sound characteristically different, when it seems they shouldn't. :confused:

After appearing to have identified the reason, I now find it interesting that very-high-performance OS interpolation sounds much closer to NOS than it does to 'typical' OS.

My recent two posts are by no means criticism of the goal of this thread.

What I see in commercial products, both ADCs and DACs, is a consistent violation of the sampling theorem by not conducting proper filtering. Both ADCs (e.g. the TI 4222) and DACs almost always fail to meet the "no energy beyond fs/2" requirement.

My guess is that this is driven by product managers who need to secure a position in the spec-sheet war with respect to frequency response. This has driven the designs to be "faulty"... A pity, as I think this aspect has a lot to do with the "digital sound" problem.

//
 
This brings up an alternate point. A recording is made, whereupon adjustments are made by a sound engineer based on the impression they get from playing back the recording. Hence, if all else is equal, including the strengths and weaknesses of the DAC, the output we hear is what the sound engineer heard when shaping the presentation to his liking. For us to use a different DAC (even a perfect one) can change the presentation from that intended by the sound engineer.

Is it in our interest to replicate the creative setup and experience of those who produce music? In other words, should we use the same DAC? Do recordings vary because the sound engineers, or artists, used different DACs in the creative process?

DACs are becoming less expensive while performing at a level equal to professional equipment. This raises the question of whether modern reproduction is improving for that reason, seemingly reinforced by my positive experiences with Delta-Sigma type DACs... albeit not yet to the extent of my most positive experiences using NOS on many recordings.
 
For the last 20 to 25 years or so, any audible difference between DACs (if there is an audible difference) has been due to the analogue circuitry surrounding them rather than the DACs themselves.
Some 25 years ago a panel of trained listeners could not tell a difference between live (albeit amplified) and the same signal going through a 24/96 AD-DA conversion.
 
^^^
Not my experience at all, having designed some of the digital portion of an AK4499 DAC (and having done some unpaid consulting on commercial ESS designs). However, I would agree the analog portion is where the largest part of the audible difference usually lies. Analog of course includes clock oscillators and power supplies, not just input/output stages.
 
For the last 20 to 25 years or so, any audible difference between DACs (if there is an audible difference) has been due to the analogue circuitry surrounding them rather than the DACs themselves...

As we've seen in one of our experiments here, not all audible differences are due to the analog circuitry. Some important subjective differences appear to be due to the OS interpolation-filter implementation, which can introduce audible artifacts depending on its design, or if it does not perform well enough.
 
Is it in our interest to replicate the creative setup and experience of those who produce music? In other words, should we use the same DAC? Do recordings vary because the sound engineers, or artists, used different DACs in the creative process?
For these tests we should use a NOS DAC. How else can we get an idea of the sound properties of NOS?

As for music creation, it is made to please the majority of users (who use Delta-Sigma); the loudness war is an example, and nothing has changed. Recordings are made in a way that maximizes sales.
 
176.4kHz File Release Update

The 176.4kHz PGGB upsampled files will be released soon. There was a delay while Hans and I thought about what bit resolution the upsampled files should be re-quantized to. This decision turned out to be more involved than you might imagine :D. We've settled on 24-bits, and now are just waiting to receive the PGGB processed files.
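For readers curious what re-quantizing to 24 bits involves, here is a minimal sketch using plain TPDF dither. To be clear, this is a generic textbook approach, not PGGB's actual algorithm (which likely involves noise shaping and is not public), and the function name and seed are illustrative:

```python
import random

def requantize_24bit(samples, rng=None):
    """Re-quantize float samples in [-1.0, 1.0] to signed 24-bit integers
    using TPDF dither (difference of two uniform variables, spanning
    about +/-1 LSB with a triangular probability density)."""
    rng = rng or random.Random(0)   # fixed seed, for repeatability of the sketch
    full_scale = 2 ** 23            # 24-bit signed range: -2^23 .. 2^23 - 1
    out = []
    for x in samples:
        dither = rng.random() - rng.random()   # triangular PDF in (-1, 1) LSB
        q = round(x * full_scale + dither)
        out.append(max(-full_scale, min(full_scale - 1, q)))   # clamp to range
    return out

print(requantize_24bit([0.0, 0.5, -1.0]))
```

The dither decorrelates the quantization error from the signal, turning truncation distortion into a low-level noise floor, which matters when a long-wordlength upsampled file is reduced back to 24 bits.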