Measuring the Imaginary

I listen mostly to early CDs, meaning 80s and 90s pressings. I find they sound excellent. Is it because the tapes had not yet deteriorated, or because a permanent digital master was made that only needed tweaking for later reissues? 1994 was a particularly good year for remasters like Led Zeppelin and the Stones, when Virgin took them on and reissued much of their catalog. 2002 was a good year for Dylan and the Stones thanks to the DSD CD reissues.
 
Any CD that needs de-emphasis will, by definition, not be bit-perfect once de-emphasis is applied, because the exact de-emphasis filtering algorithm may differ between players and computers. However, it is a rare problem for most people.
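For the curious, de-emphasis is just the inverse of the 50/15 µs pre-emphasis shelf, so one possible software implementation looks like the sketch below (Python/SciPy, assuming float samples scaled to ±1.0). Real players design this filter differently, which is exactly why the results aren't bit-identical between them:

```python
# Minimal sketch of 50/15 us CD de-emphasis. Assumes a NumPy float array
# scaled to +/-1.0 at 44.1 kHz; actual players use their own filter designs.
import numpy as np
from scipy.signal import bilinear, lfilter

def deemphasize(x, fs=44100):
    t1, t2 = 50e-6, 15e-6                       # CD emphasis time constants
    # Analog de-emphasis prototype H(s) = (1 + s*t2) / (1 + s*t1),
    # mapped to a digital filter with the bilinear transform.
    b, a = bilinear([t2, 1.0], [t1, 1.0], fs)
    return lfilter(b, a, x)
```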

Also, adjusting the playback volume level in PC software will produce a non-bit-perfect result. However, failing to reduce digital volume when intersample overs are present in the CD recording will result in some distortion during playback on pretty much all modern DACs (except Benchmark, which does the non-bit-perfect volume reduction during SRC, applied to all digital inputs except DSD64).
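A rough way to check a track for intersample overs (a sketch, not a standard test) is to oversample it and look for reconstructed peaks above 0 dBFS; the 4x factor and the simple threshold here are illustrative assumptions:

```python
# Spot intersample overs by approximating the reconstructed waveform.
# Assumes float samples scaled to +/-1.0.
import numpy as np
from scipy.signal import resample_poly

def has_intersample_overs(x, oversample=4):
    y = resample_poly(x, oversample, 1)   # crude stand-in for the DAC's reconstruction
    return np.max(np.abs(y)) > 1.0        # peaks past full scale will clip many DACs
```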

Moreover, better DACs and/or digital interfaces often use ASRC to reduce SPDIF jitter. Bruno Putzeys' $10,000+ Mola Mola DACs have a very sophisticated ASRC for processing SPDIF/TOSLINK/AES. Processing by ASRC produces a non-bit-perfect result.

In addition, some players, even when set for no volume reduction or other processing, produce low-level distortion in the digital stream, which implies a non-bit-perfect result. For example, IME Foobar2000 is one program that can add low-level distortion.

All ESS DAC chips operating in the default (single-clock) mode apply ASRC internally to all digital inputs. Thus they are non-bit-perfect at that point in the processing.

We could go on and on...

However, what we can say is that it's generally possible to extract all the digital audio data for each track on a CD without error. That does not necessarily include non-audio subcode data.
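As a rough illustration (a sketch, not AccurateRip itself), one can hash the raw PCM of two independent rips of the same track; identical hashes show the audio data was extracted identically, and subcode never makes it into a WAV rip anyway. The file paths are hypothetical:

```python
# Compare the audio payload of two rips of the same track by hashing it.
import hashlib
import wave

def pcm_hash(path):
    with wave.open(path, "rb") as w:
        return hashlib.sha256(w.readframes(w.getnframes())).hexdigest()

# Hypothetical paths for two rips made on different drives:
# print(pcm_hash("rip_drive_A/track01.wav") == pcm_hash("rip_drive_B/track01.wav"))
```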
 
Also, adjusting the playback volume level in PC software will produce a non-bit-perfect result.
So how does a PC or other OS adjust the volume when you move that slider? An iPod, when you spin that UI and see the bar shorten?

Something like, a signal that could modulate all 16 bits (for example) now gets stuffed into 15 bits with a 50% reduction in signal level? Would that be 3 or 6 dB down from full blast?

And what of MS's goofy 0-100% scale? What happens to the number of full-scale bits you have left over with that control at, say, 10%? Some bunch around a conference room table must've decided a dB scale was far beyond their customers' understanding...

I once had a Panasonic "all digital" receiver that - supposedly - reduced the power supply voltage to the PWM output stage as a means of volume control, so all the bits were still there at a reduced volume setting. One way to do it, I gather...

With that in mind, what's the best architecture from ripped file to speaker? Is it relative sacrilege to use anyone's "digital" volume control, when a good analog potentiometer does the job at the "most-correct" point in the audio information flow?
 
So how does a PC or other OS adjust the volume when you move that slider?
It multiplies each sample by a fraction less than one. Depending on the precision at which the arithmetic is carried out, and on the target output bit depth, dither may be needed to correct for truncation distortion. For a 16-bit output, dither should probably be used; exactly what kind of dither might be debatable. For a 24-bit output it may not matter whether the multiplication product is dithered before truncation, since the error is already going to be down in the DAC noise.
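In rough Python terms, a software volume control on 16-bit samples might look like the sketch below; the dither type and internal precision are illustrative choices, since real players differ on both:

```python
# Sketch of a software volume control on 16-bit samples: multiply by a gain,
# add +/-1 LSB TPDF dither, then round back to integers.
import numpy as np

def apply_volume_16bit(samples_int16, gain):
    x = samples_int16.astype(np.float64) * gain          # gain < 1.0, e.g. 0.5 for -6.02 dB
    tpdf = (np.random.uniform(-0.5, 0.5, x.shape) +
            np.random.uniform(-0.5, 0.5, x.shape))       # triangular PDF dither, 1 LSB peak
    y = np.round(x + tpdf)
    return np.clip(y, -32768, 32767).astype(np.int16)    # back to a 16-bit stream
```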

However, there is a special case. Reducing the volume by 6.02 dB is a divide-by-2 operation (multiply by 0.5). It can be accomplished by a bit shift, which retains the original bit pattern but shifted by one place. So a 16-bit CD sample would grow to 17 bits (which is called bit growth). If the result is sent to a 24-bit DAC there should be no need to dither, since no truncation was necessary.
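A sketch of that special case, assuming the 16-bit samples travel in a 24-bit-capable container (int32 here): the one-bit shift halves the level exactly, so nothing is truncated and no dither is needed.

```python
# The -6.02 dB special case: multiplying by 0.5 is a one-bit right shift.
import numpy as np

def attenuate_6dB_for_24bit(samples_int16):
    x = samples_int16.astype(np.int32) << 8   # align 16-bit data to the top of a 24-bit word
    return x >> 1                             # exact divide by 2; original bit pattern kept, shifted one place
```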
 
In addition, some players, even when set for no volume reduction or other processing, produce low-level distortion in the digital stream, which implies a non-bit-perfect result. For example, IME Foobar2000 is one program that can add low-level distortion.
With RME's bit-test files it can readily be shown that Foobar2000 is bit-perfect when all processing and leveling is turned off, quite as expected.
 
All ESS DAC chips operating in the default (single-clock) mode apply ASRC internally to all digital inputs. Thus they are non-bit-perfect at that point in the processing.
But that is happening after the digital filters, that is, the filter's output stream is resampled but not the input stream.


And these filters are present in any DAC (except NOS) and they always destroy bit-perfectness by sheer definition.
 
...have you come to this statement after electrical measurements?
Sufficient A/B tests were performed to verify the source of the distortion (or correlated noise that sounds like distortion). Also, if you recall my original description of the troubleshooting process, it started after hearing what sounded like low-level distortion coming from the Sound Lab electrostatic speakers. One by one, hardware components were substituted with no effect on the distortion. The computer was then checked to see if Windows had reassigned the USB board as a default device. Still no source of the audible yet low-level distortion was found. Finally, PlayPCMWin was substituted for Foobar2000 and the distortion went away.

However, that's not the end of the story. PlayPCMWin by default applies no volume reduction to PCM before sending it to the DAC. However, in the "detailed settings" box it is possible to select from three fixed volume-reduction settings for PCM: 2 dB, 4 dB, and 6.02 dB. The first two sound slightly distorted; only the last one doesn't, and I have already described what is special about that particular option.

The other thing that may be involved is that Foobar was using the ASIO component, while PlayPCMWin uses WASAPI Exclusive Mode. I have seen before that Windows appears to have an entry point into the ASIO drivers provided by Thesycon. No effort was made to check whether the Foobar ASIO component was successfully bypassing the Windows Sound Engine.

BTW, I don't conclude that Foobar2000 is unusually faulty, only that with PlayPCMWin it is possible to avoid a low-level distortion. Would most people hear a difference on their system? Maybe not. My system has undergone fairly extensive clean-up efforts over a long period of time; the symptom with Foobar was just one small problem that hadn't been found up until that point.
 
But that is happening after the digital filters, that is, the filter's output stream is resampled but not the input stream.
It is possible to turn off the ESS oversampling filters, as Benchmark does in the DAC3; they use their own FPGA instead. The ASRC can be turned off too.
And these filters are present in any DAC (except NOS) and they always destroy bit-perfectness by sheer definition.
I did say we could go on and on. I was thinking of the complex processing common in oversampling DACs. However, I thought the point was sufficiently made by then that "bit-perfect" usually only remains intact for a limited time.
 
BTW, I don't conclude that Foobar2000 is unusually faulty, only that with PlayPCMWin it is possible to avoid a low-level distortion. Would most people hear a difference on their system? Maybe not. My system has undergone fairly extensive clean-up efforts over a long period of time; the symptom with Foobar was just one small problem that hadn't been found up until that point.
Since, as @KSTR mentioned, Foobar2000 is bit-perfect when processors and volume control are not used, the distortion you heard is not the fault of Foobar2000 but of either your settings or your system.