What do you think makes NOS sound different?

Switching back to the original 44.1 kHz track, I had this extra NOS dynamic quality which is so hard to explain, but you know it when you hear it.

Great all-around contributions, ddac. Personal thanks in particular for including the square wave results in post #1204.

From early on it was thought that the core cause of issues with digital was not fundamentally related to sampling rate or bit depth, but rather to the problem of converting a string of line-sampled recorded values (presented in theory as perfect) into a physical reality. The audio file created from line samples can be described as a "quantum noise terminated file", where "noise" is herein defined as anything that is not signal. Hence perfection in theory equates to perfection in reality only if the numbers generate, in absolute proportion, a voltage/current output that changes in zero time at an absolutely fixed periodicity.

Seemingly the most important implication is that any digital manipulation of a file originally generated by line sampling of analog signals must be analyzed to ensure it does not add noise (again, anything that is not signal) that substantively corrupts the originally "perfect" file. This suggests that OS is only advantageous, as a variant from perfection, in preventing signal corruption in the analog domain, since producing an analog signal from a numerically perfect digital file would require responses impossible to realize, including infinite bandwidth.
 
...I found out that Polynomial-1 was the closest to the original square wave... Last but NOT least, listen to the 4 original tracks from Hans with and without HQPlayer upsampling with the polynomial-1, FIR and IIR filter settings, and see if I can hear any significant difference and, if so, what I like best.

As the very last step, take the one setting and track I (maybe) liked best and compare it with the built-in filter and upsampling function in Roon….

Stay tuned….

PS: VERY unfortunately, HQPlayer also has no zero-order sample-and-hold filter which just upsamples…. But as said, the polynomial-1 comes very close.

Doede,

Do you have, or can you point me to, any more information about HQP polynomial-1? I'm trying to find out exactly what it does, i.e., linear interpolation, or slow roll-off SINC interpolation, or something else. Linear-phase or minimum-phase. That sort of thing. I did a quick search but did not readily locate anything. Thanks.
 
Let me summarize what I understand so far, and some questions:
- OS in itself should not result in any difference from NOS, provided there is no digital reconstruction filter (n times repeated samples), and NOS sinc droop is properly handled
- HW based (real time, limited number of taps) reconstruction filters cause bigger subjective difference than advanced post-processing SW filters
- the effect of a digital reconstruction filter is essentially a low-pass filter at the Nyquist frequency in the analog domain. Not a 'guessing' between the samples.
- The effect of increased jitter in OS should be evaluated - has it been discussed before?
- The effect of phase response of the reconstruction filter should be studied as well - is there a zero phase shift DF?
- NOS with sinc droop correction filter in the analog domain also has phase shift - which is better: with or without?
 
The images are intermodulation products with multiples of the sample rate.
Images are products of the conversion; there are modulation products, but not intermodulation, as they lie in a separate band. It is a different situation to aliasing, but a good recording is free of aliasing energy. Intermodulation can only happen if images are distorted somewhere in the audio chain and products fold down to the acoustic band.
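To make "in a separate band" concrete, here is a minimal sketch (the tone and sample rate are my own illustrative choices, not from this thread) of where the images of a single tone sit around multiples of the sample rate; none of them land in the audio band unless something downstream folds them back:

```python
# Images of a sampled tone appear at k*fs +/- f for every integer k.
fs = 44100.0     # sample rate (assumed)
f = 1000.0       # audio tone (assumed)
for k in (1, 2, 3):
    print(f"around {k}*fs: {k * fs - f:.0f} Hz and {k * fs + f:.0f} Hz")
```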

Elaborate talk about pulse behaviour is irrelevant to me, as music is band-limited during A/D conversion.

An FFT plot is only valid for static sine wave(s); that is why you can disregard images and focus on reconstruction based only on the frequencies below Nyquist. As a result you get ringing. There is some additional information in the images: their phase relationship may reflect ongoing musical transitions. It is my private theory that reconstruction of a dynamic signal requires including the information in the images as well.
 
Doede,

Do you have, or can you point me to, any more information about HQP polynomial-1? I'm trying to find out exactly what it does, i.e., linear interpolation, or slow roll-off SINC interpolation, or something else. Linear-phase or minimum-phase. That sort of thing. I did a quick search but did not readily locate anything. Thanks.

This is what I have Ken:

polynomial-1
Polynomial interpolation. Most natural polynomial interpolation for audio. Only two samples of pre and post-echo.

Frequency response rolls off slowly in the top octave. Poor stop-band rejection and will thus leak fairly high amount of ultrasonic noise.
These type of filters are sometimes referred to as “non-ringing” by some manufacturers.

Only use with integer upsampling
 

TNT

I'm totally with you on the progress of improved sound quality of the CD. I think the "perfect sound forever" claim was probably based on the theoretical "promise". But CD realisation could not reach the theory at that time. Now it almost can. My point is that one fault may compensate for another fault - that might be why NOS is preferred by some. The other aspect was that it is not wise to stick with a "faulty" unit in a chain if the goal is ultimate reproduction fidelity.

To be able to judge a component in a chain and be sure whether it is a correct reproducer or not, it is required that the rest of the chain's components are perfect, or this is only a chase of ghosts - as I see it. Therefore for me, who searches for the perfect system, I cannot accept a unit that is "faulty" by conception as a worthy chain member. NOS is such a faulty unit - for me.

Yes, there are differences between DACs - but why assume that it is the technically inferior unit that is "the one" :) it is so illogical. Yes, it might be the best "sounding" at that time, in that system (A) - but another system (B) with a non-"faulty" DAC will have greater potential once one has cured the other real faults - which the two systems probably have in common. Every time one improves system (A) with a better component other than the DAC - it will sound worse. The same improvement in system (B) will lift this system to the next level. While poor (A) is anchored by its faulty DAC....

//

Hi, TNT,

I'd say that there currently are a fair number of commercial DACs which are essentially 'theoretically' perfect. Going by its objective specifications, we've had "perfect sound forever" since 1983, when the CD was introduced. Except, for many of us, our ears have informed us otherwise. While 'typical' OS digital playback has subjectively improved a great deal since then (which makes me wonder how "perfection" was improved on ;)), many of us still hear some important sonic character differences between OS and NOS playback. Some of which, but not all of which, favor NOS.

So, we DIY hobbyists are investigating an aspect of what else in the playback chain may not be correct, or may not be performing at a sufficiently high level, as subjectively judged by listening. Lo and behold, we appear to have identified that most OS FIR interpolation-filters do not perform at a sufficiently high level. What subjectively appears required is either an FIR filter which performs much closer to a perfect SINC-function than is typical (which, of course, can never actually be perfect), or dispensing with OS interpolation-filtering altogether and going NOS.

What remains is to test whether the highest-performing 176.4 kHz interpolation of which I'm presently aware can result in yet better sound. Our next experiment.
 
It probably won't if a "NOS mentality" has governed the ADC design. And once there, it can never be washed out.

//

Perhaps I am missing something. An OS file is generated from an NOS file, hence if the NOS file is corrupt, so will be the OS file generated from it. It's the NOS file created by line sampling, theoretically concluded to be perfect in the digital domain, that when perfectly reproduced in the analog domain (as squared steps) is thereupon the equal of such alleged theoretical perfection. Any variance from squared-step reproduction is an imperfection that requires justification as having some value.
 
Let me summarize what I understand so far, and some questions:
- OS in itself should not result in any difference from NOS, provided there is no digital reconstruction filter (n times repeated samples), and NOS sinc droop is properly handled

Oversampling is, essentially, synonymous with digital signal-reconstruction. At least, as far as its application to audio DACs goes.

It seems there should be little, if any, subjective difference between OS and NOS. Both result in the ultrasonic image-bands being suppressed, and so, the signal being reconstructed. However, OS does a MUCH more effective job of suppression than does NOS. Remember, NOS simply means there is no digital reconstruction filter employed at all, which means that the ultrasonic image-bands are mostly left to be suppressed by human ears. As DIY hobbyists, we face practical problems that would severely impede performing the well-controlled experiments which it seems would be required to determine the root mechanism behind OS/NOS sounding different. It may be because OS more correctly reconstructs the signal; we simply didn't find out.
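As a rough illustration of how much weaker NOS suppression is, here is a minimal numpy sketch; the tone, the rate, and the ~100 dB figure assumed for a good OS filter are my own illustrative numbers, not measurements from this thread:

```python
import numpy as np

fs = 44100.0          # NOS DAC output rate (assumed)
f = 19000.0           # high audio tone (assumed)
f_image = fs - f      # nearest ultrasonic image of that tone

def zoh_db(freq, fs):
    # Zero-order-hold magnitude response: |sin(pi*f/fs) / (pi*f/fs)|
    x = np.pi * freq / fs
    return 20 * np.log10(abs(np.sin(x) / x))

print(f"first image at {f_image / 1000:.1f} kHz")
print(f"NOS (ZOH) suppression of that image: {zoh_db(f_image, fs):.1f} dB")
# A good OS interpolation filter might place ~100 dB of stop-band attenuation
# on the same image (illustrative figure, not a measurement).
```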

- HW based (real time, limited number of taps) reconstruction filters cause bigger subjective difference than advanced post-processing SW filters

More or less correct. However, an offline S/W interpolation-filter like PGGB is more of a pre-processing solution. Also, some software interpolation-filters can operate in real time, such as HQPlayer and SoX.

- the effect of a digital reconstruction filter is essentially a low-pass filter at the Nyquist frequency in the analog domain. Not a 'guessing' between the samples.

Correct, there is no guessing. Interpolation is simply a time-domain view of digital signal-reconstruction, while filtering is a frequency-domain view of the exact same operation. A non-interpolated, band-limited stream of samples already contains all of the samples it needs to perfectly describe the original signal. The only purpose of the interpolated samples is that they are what is necessary to make the ultrasonic image-bands go away. Since samples are inherently discrete points in time, filtering requires that additional samples be added, with their only purpose being to smooth the stair-stepped waveform exiting the DAC chip.
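Here is a minimal scipy sketch of that equivalence, with parameters I have picked purely for illustration: the "interpolation" is nothing more than zero-stuffing followed by a low-pass filter below the original Nyquist frequency.

```python
import numpy as np
from scipy import signal

fs, L = 44100, 4                      # original rate and upsampling ratio (assumed)
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 1000 * t)      # band-limited test signal

# Frequency-domain view: insert L-1 zeros between samples, then low-pass
# at the original Nyquist frequency to remove the image-bands.
x_stuffed = np.zeros(len(x) * L)
x_stuffed[::L] = x
taps = signal.firwin(255, cutoff=fs / 2, fs=fs * L)   # linear-phase FIR
y = L * signal.lfilter(taps, 1.0, x_stuffed)

# Time-domain view of the same operation: the filter's sinc-like impulse
# response supplies the in-between samples, while the original sample
# values are (ideally) passed through unchanged.
```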

- The effect of increased jitter in OS should be evaluated - has it been discussed before?

I concur. Jitter is a legitimate area for DAC improvement. It just turned out not to be the cause of why OS/NOS sound characteristically different.

- The effect of phase response of the reconstruction filter should be studied as well - is there a zero phase shift DF?

This is outside the scope of our investigation, but certainly is a worthy subject for some other investigation.

- NOS with sinc droop correction filter in the analog domain also has phase shift - which is better: with or without?

Personally, I feel that flat frequency-response (tonality) is more important than phase-response. So, I prefer to analog-EQ the ZOH-based treble droop and not worry about the phase. That is just my preference. Others may feel differently about it. :cool:
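For a sense of scale, here is a minimal sketch (44.1 kHz NOS output assumed) of the ZOH treble droop that such an analog EQ would have to undo:

```python
import numpy as np

fs = 44100.0                                  # NOS output rate (assumed)
for f in (5000.0, 10000.0, 15000.0, 20000.0):
    x = np.pi * f / fs
    droop_db = 20 * np.log10(np.sin(x) / x)   # zero-order-hold droop
    print(f"{f / 1000:4.1f} kHz: {droop_db:6.2f} dB")
# Roughly -3.2 dB at 20 kHz: the treble shelf the analog EQ is correcting.
```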
 
Perhaps I am missing something. An OS file is generated from an NOS file, hence if the NOS file is corrupt, so will be the OS file generated from it. It's the NOS file created by line sampling, theoretically concluded to be perfect in the digital domain, that when perfectly reproduced in the analog domain (as squared steps) is thereupon the equal of such alleged theoretical perfection. Any variance from squared-step reproduction is an imperfection that requires justification as having some value.

What you're probably missing: There's next to zero NOS ADCs out there.

What that means: The ADC will probably run at at least 4x oversampling, more likely at more than that (to alleviate the stringent requirements on the analog low-pass filter before the ADC). Hence the output of the ADC will have to be downsampled, and that requires some form of digital low-pass filter. So the original is an NDS file - a non-downsampled file - but the hardware (the ADC chip) will most likely have done the downsampling and low-pass filtering for you before it reaches any sort of storage hardware, be it a soundcard and a computer or a hardware recording device.

So, what does that mean? If you really want to exclude hardware implementations of digital low-pass filters, you would want to avoid those oversampled ADCs in the first place. How to do it? I don't know. Is it relevant? I tend to say no, because I guess that 99% of all available recordings used OS ADC technology and hence some form of digital filter in the first step. Any further steps like EQ - don't even get me started on those...
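For illustration, here is a minimal scipy sketch of that hidden decimation step, assuming a 4x-oversampled ADC stream (the rates and the test tone are my own choices); the digital low-pass is baked into the decimator:

```python
import numpy as np
from scipy import signal

fs_adc = 4 * 44100                       # assumed 4x-oversampled ADC rate
t = np.arange(8192) / fs_adc
x = np.sin(2 * np.pi * 1000 * t)         # stand-in for the oversampled ADC output

# Downsample to 44.1 kHz: an FIR low-pass below the target Nyquist, then keep
# every 4th sample - this is the digital filter that ends up in the recording chain.
y = signal.decimate(x, 4, ftype="fir")
```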
 
This is what I have Ken:

polynomial-1
Polynomial interpolation. Most natural polynomial interpolation for audio. Only two samples of pre and post-echo.

Frequency response rolls off slowly in the top octave. Poor stop-band rejection and will thus leak fairly high amount of ultrasonic noise.
These type of filters are sometimes referred to as “non-ringing” by some manufacturers.

Only use with integer upsampling

Thanks, Doede, I appreciate it.
 
...Yes, there are differences between DACs - but why assume that it is the technically inferior unit that is "the one" :) it is so illogical. Yes, it might be the best "sounding" at that time, in that system (A) - but another system (B) with a non-"faulty" DAC will have greater potential once one has cured the other real faults - which the two systems probably have in common. Every time one improves system (A) with a better component other than the DAC - it will sound worse. The same improvement in system (B) will lift this system to the next level. While poor (A) is anchored by its faulty DAC....

//

TNT,

Actually, we made no assumptions that one approach was superior to the other. In fact, I mentioned near the beginning of the thread that I hear each as having its subjective advantages over the other. We simply were looking for what was making typical OS and NOS DACs sound characteristically different, when it seems they shouldn't. :confused:

After appearing to have identified the reason, I now find it interesting that very-high-performance OS interpolation sounds much closer to NOS than it does to 'typical' OS.
 
More or less correct. However, an offline S/W interpolation-filter like PGGB is more of a pre-processing solution. Also, some software interpolation-filters can operate in real time, such as HQPlayer and SoX.

AFAIK that just boils down to how many samples are included in the sinc (or whatever) reconstruction algorithm. The more samples you take into account, the more delay you will introduce into a filter directly inserted into the replay chain. The more of those samples that lie ahead of the sample currently being played, the bigger the delay. In a way the reconstruction algo wants to "look into the future" - which none of us can. So it will delay all incoming samples, relative to the outgoing samples, by the number of samples it wants to look into the future. If you inserted the PGGB algo into your real-time playback chain there would be a delay on the order of a minute - if your CPU could even do the calculations in real time.
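A back-of-the-envelope sketch of that look-ahead delay for a linear-phase FIR; the tap counts below are purely illustrative and are not PGGB's actual figures:

```python
fs = 44100                               # input sample rate (assumed)
for taps in (64, 65_536, 5_000_000):     # illustrative tap counts
    delay_s = (taps // 2) / fs           # roughly half the taps lie "in the future"
    print(f"{taps:>9} taps -> {delay_s:8.3f} s of delay")
# A few million taps lands in the "around a minute" ballpark mentioned above.
```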


Correct, there is no guessing. Interpolation is simply a time-domain view of digital signal-reconstruction, while filtering is a frequency-domain view of the exact same operation. A non-interpolated, band-limited stream of samples already contains all of the samples it needs to perfectly describe the original signal. The only purpose of the interpolated samples is that they are what is necessary to make the ultrasonic image-bands go away. Since samples are inherently discrete points in time, filtering requires that additional samples be added, with their only purpose being to smooth the stair-stepped waveform exiting the DAC chip.

You still need some kind of analog reconstruction filter to make the analog waveform look "not like a staircase" - which it probably wasn't when recorded. So a staircase form of reproduction is incorrect - unless you recorded one, which is very unlikely. The oversampling aspect simply makes the demand on the analog reconstruction filter much less stringent, because with proper oversampling there is a big gap between the signal and any images.
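A minimal sketch of that gap, assuming the digital interpolation filter has already removed the intermediate images (the rates and the 20 kHz band edge are my own illustrative choices):

```python
fs_base, audio_top = 44100, 20000            # base rate and audio band edge (assumed)
for osr in (1, 4, 8):                        # 1x = NOS, 4x/8x = oversampled
    first_image = osr * fs_base - audio_top  # lowest image the analog filter must handle
    print(f"{osr}x: analog filter must roll off between "
          f"{audio_top / 1e3:.0f} kHz and {first_image / 1e3:.1f} kHz")
```
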
I concur. Jitter is a legitimate area for DAC improvement. It just turned out not to be the cause of why OS/NOS sound characteristically different.

This is outside the scope of our investigation, but certainly is a worthy subject for some other investigation.

I thought that, at least in theory, OS DACs would be less prone to clock jitter, because of the averaging effect of having more samples available per unit time?!? Could someone with more math insight please chime in - but this one feels so logical.

Personally, I feel that flat frequency-response (tonality) is more important than phase-response. So, I prefer to analog-EQ the ZOH-based treble droop and not worry about the phase. That is just my preference. Others may feel differently about it. :cool:

Well, you say something here that I can totally agree with. It's small changes in tonality (and spatial resolution) that let me decide where the flaws were in the latest echo listening test. I already did a first round of retesting, with more to come by the way - so let's see if my listening results were not a lucky case of correct guessing...

Regarding the changes in tonality: these are so small that even the tiniest amount of EQing will totally swamp them, and then there's the question of which is "correct" in the first place.

My theory: a lot of the NOS DAC proponents have been basing their choice of which is better (NOS/OS) on these small changes in tonality, went off in a certain direction assuming that the NOS sound was the correct sound, and never did any research into whether some other component in their chain could be the much more easily fixed culprit. Just an assumption/theory...

As a side note: THD/IMD in amplifiers AND speakers will change the tonality by a good amount. And these distortions might add/cancel with THD/IMD in the signal presented to the amp/speaker depending on their magnitude and phase. But this is a total guessing game without measurements.
 
What you're probably missing: There's next to zero NOS ADCs out there.

What that means: The ADC will probably run at at least 4x oversampling, more likely at more than that (to alleviate the stringent requirements on the analog low-pass filter before the ADC). Hence the output of the ADC will have to be downsampled, and that requires some form of digital low-pass filter. So the original is an NDS file - a non-downsampled file - but the hardware (the ADC chip) will most likely have done the downsampling and low-pass filtering for you before it reaches any sort of storage hardware, be it a soundcard and a computer or a hardware recording device.

So, what does that mean? If you really want to exclude hardware implementations of digital low-pass filters, you would want to avoid those oversampled ADCs in the first place. How to do it? I don't know. Is it relevant? I tend to say no, because I guess that 99% of all available recordings used OS ADC technology and hence some form of digital filter in the first step. Any further steps like EQ - don't even get me started on those...

Much of what the industry does escapes me, Tfive. This is notwithstanding that my closest friends have frequently commented, with varying enthusiasm, that my reasoning ability is extraordinary for someone with a single-digit IQ. By the way, I am not nearly as astute as the commentary you incorrectly attributed to me in post #1255.
 
What you're probably missing: There's next to zero NOS ADCs out there.

What that means: The ADC will probably run at at least 4x oversampling, more likely at more than that (to alleviate the stringent requirements on the analog low-pass filter before the ADC). Hence the output of the ADC will have to be downsampled, and that requires some form of digital low-pass filter. So the original is an NDS file - a non-downsampled file - but the hardware (the ADC chip) will most likely have done the downsampling and low-pass filtering for you before it reaches any sort of storage hardware, be it a soundcard and a computer or a hardware recording device.

So, what does that mean? If you really want to exclude hardware implementations of digital low-pass filters, you would want to avoid those oversampled ADCs in the first place. How to do it? I don't know. Is it relevant? I tend to say no, because I guess that 99% of all available recordings used OS ADC technology and hence some form of digital filter in the first step. Any further steps like EQ - don't even get me started on those...
ADC devices could easily provide accurate samples at the required sample rate, e.g. 192 kHz.

What does that mean? It means that possibly a main reason for oversampling is noise shaping. There is a lot of ultrasonic noise in nature not related to music - for example from fluorescent or LED lights. This is my answer.
 
Much of what the industry does escapes me, Tfive. This is notwithstanding that my closest friends have frequently commented, with varying enthusiasm, that my reasoning ability is extraordinary for someone with a single-digit IQ. By the way, I am not nearly as astute as the commentary you incorrectly attributed to me in post #1255.

I am so sorry, that must have been a copy/paste mistake. Unfortunately I cannot edit my post anymore.

The quotes were referring to Ken Newton's post #1251! If any of the mods would like to correct this post, please help.
 
I thought that, at least in theory, OS DACs would be less prone to clock jitter, because of the averaging effect of having more samples available per unit time?!? Could someone with more math insight please chime in - but this one feels so logical.
If oversampling preserves the original sample values (as in most cases), calculating the effect of jitter is very simple. It can change otherwise, but I am not in a position to evaluate such an effect.
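For reference, here is the usual first-order estimate of jitter-limited SNR for a full-scale sine, as a minimal sketch with illustrative frequencies and jitter values (my own numbers, not from this thread):

```python
import numpy as np

f_sig = 10_000.0                          # signal frequency (assumed)
for tj in (1e-9, 100e-12, 10e-12):        # rms clock jitter, illustrative values
    snr_db = -20 * np.log10(2 * np.pi * f_sig * tj)
    print(f"{tj * 1e12:6.0f} ps rms -> ~{snr_db:5.1f} dB jitter-limited SNR")
```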