Toslink popular?

Generally speaking, any interface that allows the DAC to function as the clock master is preferable to one that permits only the source/transport to be the clock master, at least as far as clock jitter is concerned. The exception is a source/transport designed with a technically rigorous implementation of a separate clock-signal transmission path to a properly matched DAC box.

The reason async USB is preferable to HDMI is that USB is an established PC interface, and PCs form the core of the more affordable and flexible music-streaming and file-server systems. That said, dedicated music streamers seem to be rapidly becoming more affordable than they used to be.
 
IMO the design purpose of the network interfaces (AES67, Dante) is to enable streaming to many distant, precisely time-synchronized renderers (such as the speakers around a cinema, where Ethernet switches greatly simplify the wiring). They offer no advantage for a local setup where the speakers are not physically distant and are all powered from one device/location, allowing a single master clock close to a multichannel DAC/power amps with digital inputs.
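For reference, the time alignment these network formats depend on (PTP/IEEE 1588, which AES67 builds on and Dante uses a variant of) comes down to a two-way timestamp exchange. A minimal sketch of the standard offset/delay arithmetic, with hypothetical timestamps:

```python
# Minimal sketch of PTP-style (IEEE 1588) offset/delay estimation, the
# mechanism AES67/Dante-class networks use to keep distant renderers
# time-aligned. All timestamps below are hypothetical.

t1 = 1_000_000   # master sends Sync (master clock, ns)
t2 = 1_000_900   # slave receives Sync (slave clock, ns)
t3 = 1_002_000   # slave sends Delay_Req (slave clock, ns)
t4 = 1_002_700   # master receives Delay_Req (master clock, ns)

# Assuming a symmetric network path, the two unknowns
# (clock offset, one-way delay) solve as:
offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock leads master by this much
delay  = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay

print(f"offset = {offset} ns, delay = {delay} ns")  # offset = 100 ns, delay = 800 ns
```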
 
Well, since USB is used only for the "last mile," I assumed we were talking about the final link before the D/A conversion. And there, clock jitter should always be minimized.
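To put a number on it: for a full-scale sine at frequency f, random sampling jitter with RMS value σ sets a first-order SNR ceiling of about −20·log10(2π·f·σ). A quick sanity check in Python (the formula is the standard approximation; the jitter figures are just illustrative):

```python
import math

def jitter_snr_limit_db(f_hz: float, sigma_j_s: float) -> float:
    """First-order SNR ceiling imposed by RMS sampling jitter on a
    full-scale sine: SNR = -20*log10(2*pi*f*sigma_j)."""
    return -20.0 * math.log10(2.0 * math.pi * f_hz * sigma_j_s)

# A 20 kHz full-scale tone with 1 ns RMS jitter:
print(jitter_snr_limit_db(20_000, 1e-9))    # ~78 dB -- clearly relevant
# The same tone with 10 ps RMS jitter:
print(jitter_snr_limit_db(20_000, 10e-12))  # ~118 dB -- below a good dac's floor
```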

ASRC cannot, in principle, achieve the same result as a clean clock applied directly at the conversion element (be it a line-level or power-level DAC).
 
I’m a bit confused as to what exactly you are asserting, because it seems we are essentially saying the same thing: the only place where digital-audio clock jitter matters is at the DAC. The most stable clocking, short of a costly, heroic resort to independent oven-controlled clock generation and accurate impedance-matched signal transmission, is to generate the clock locally at the DAC. The problem then is maintaining data synchronization, at the average transfer rate, between the data source and the DAC box, since the source no longer has any way of staying synchronized. Asynchronous USB provides exactly that data-transfer synchronization, is PC-centric (and thus supports inexpensive third-party PC streaming software), and fits the paradigm of the DAC functioning as the clock master.
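A toy model may make the mechanism clearer: in asynchronous USB audio the DAC's crystal sets the consumption rate, and the device feeds back how many samples per interval the host should send so the device-side buffer stays centered. The rates and loop gain below are made up for illustration, not taken from any real UAC2 implementation:

```python
# Toy model of asynchronous USB audio rate feedback: the DAC crystal is
# the master, and the host adapts packet sizes from device feedback so
# the device FIFO neither drains nor overflows.
dac_rate = 48_300.0            # DAC consumption rate, Hz (exaggerated offset)
frame_hz = 1_000               # host sends one packet per millisecond
target = dac_rate / frame_hz   # 48.3 samples/frame the DAC really consumes

fifo = 0.0                     # device FIFO fill relative to half-full, samples
feedback = target              # rate the device reports to the host
for frame in range(8):
    sent = round(feedback)     # host can only send whole samples
    fifo += sent - target      # delivered minus consumed this frame
    # Device nudges its reported rate against the FIFO drift, so the
    # rounding error averages out instead of accumulating:
    feedback = target - 0.5 * fifo
    print(f"frame {frame}: sent {sent}, FIFO {fifo:+.1f}")
```

Run it and the packet size hovers between 48 and 49 samples while the FIFO oscillates within a fraction of a sample of center: the DAC never resamples and never slaves its clock to the host.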
 
...as soon as ASRC and low-jitter clock generators appeared in DACs, we could immediately say that jitter ceased to be a problem worth paying attention to...
Doesn't turn out to be quite so simple as that. There are a number of issues, such as ASRC implementation details, and the part of clock jitter usually described in the frequency domain as "close-in phase noise."

ASRC attenuates jitter at the cost of adding some low-level distortion. This assumes the ASRC is running from very clean power and that it has a low-phase-noise crystal reference rather than using the receiver PLL as the reference. A detailed explanation of ASRC operation can be found at: https://www.diyaudio.com/community/threads/asynchronous-sample-rate-conversion.28814/
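As a rough structural illustration (not how any particular chip does it): an ASRC tracks the input/output rate ratio, heavily low-pass filters that estimate, and interpolates input samples at the smoothed fractional positions. Incoming interface jitter is attenuated by the ratio filter, but the output is still clocked by the local reference, so the reference's own phase noise passes straight through. A much-simplified sketch with linear interpolation in place of the long polyphase filters real chips use:

```python
import numpy as np

def toy_asrc(x, ratio_est, out_len):
    """Much-simplified ASRC: read the input at a smoothed fractional
    ratio and linearly interpolate. Real chips use long polyphase
    filters; this only shows the structure."""
    y = np.zeros(out_len)
    pos = 0.0
    smoothed = ratio_est[0]
    for n in range(out_len):
        # Heavy low-pass on the measured ratio: this is where
        # incoming (SPDIF-side) jitter gets attenuated.
        smoothed += 0.001 * (ratio_est[n] - smoothed)
        i = int(pos)
        frac = pos - i
        y[n] = (1 - frac) * x[i] + frac * x[i + 1]  # linear interpolation
        pos += smoothed
    return y

fs_in, fs_out = 44_100, 48_000
t = np.arange(48_000) / fs_in
x = np.sin(2 * np.pi * 1_000 * t)               # 1 kHz test tone
# The measured ratio wobbles with incoming interface jitter:
jittery_ratio = fs_in / fs_out + 1e-4 * np.random.randn(40_000)
y = toy_asrc(x, jittery_ratio, 40_000)
```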

The thing about jitter, and also about Vref in dacs, is that all dacs need an analog time reference and an analog voltage reference. Errors in either reference become convolved with the audio signal, as viewed in the frequency domain. The dac clock is actually an analog piece of circuitry: an analog feedback amplifier and a crystal form a sine-wave oscillator, then a 'squaring' circuit turns the sine wave into a square wave.
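That convolution is easy to see numerically: sample a pure tone on a clock whose edges wobble sinusoidally, and the spectrum shows sidebands at the signal frequency ± the jitter frequency instead of a single line. A small numpy sketch with deliberately exaggerated jitter so the sidebands are visible:

```python
import numpy as np

fs = 96_000
n = np.arange(2 ** 16)
# Clock edges wobble sinusoidally: 2 ns peak at 2 kHz
# (exaggerated so the effect stands out in a short FFT).
jitter = 2e-9 * np.sin(2 * np.pi * 2_000 * n / fs)
t = n / fs + jitter                       # actual sampling instants

x = np.sin(2 * np.pi * 10_000 * t)        # 10 kHz tone sampled with jitter
spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-12)
spec -= spec.max()                        # dB relative to the carrier

freqs = np.fft.rfftfreq(len(x), 1 / fs)
for f in (8_000, 10_000, 12_000):         # sidebands at 10 kHz +/- 2 kHz
    k = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz: {spec[k]:6.1f} dBc")  # sidebands land near -84 dBc here
```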

So far, the best way we have to provide the best clocking and lowest added distortion to a dac is to use asynchronous USB. However, that by itself is often not enough. There is often conducted EMI/RFI noise on the USB bus, so galvanic isolation of USB from the dac circuitry is a first line of defense. USB boards also emit radiated EMI/RFI of their own due to all the digital processing, so shielding and physical isolation of the USB board from the dac board can also help reduce unwanted noise coupling into the dac.

There is much more that can be said, but maybe the above will help clarify some details.
 
Doesn't turn out to be quite so simple as that. There are a number of issues, such as ASRC implementation details, and the part of clock jitter usually described in the frequency domain as "close-in phase noise."

Let's just say that from the very beginning I introduced some imprecision into my position. I started arguing for ASRC without specifying what I meant by the term. I should clarify that I mean specialized SPDIF-receiver chips with ASRC, such as the CS8422 and more modern parts. The chip itself contains all the components needed to reduce the jitter problem in the SPDIF channel to a negligible effect on the final result. It is not the ASRC by itself that suppresses jitter, but the whole set of measures the developers built into a chip designed to receive and convert SPDIF.
I'm not much of an expert on jitter and its impact on the final result, but I suspect that modern DACs, whether they use synchronous or asynchronous data reception, are no longer held back by the jitter problem but by the limitations of their analog sections. As I understand it, engineers have by now reduced the influence of jitter on the DAC output signal to the point where it no longer needs attention.
 
The chip itself contains all the components needed to reduce the jitter problem in the SPDIF channel to a negligible effect on the final result.
Not IME. I have tested a number of ASRCs under different operating conditions, and it is as I said: power supplies, incoming jitter, and reference clocks all affect ASRC chip performance.

That said, the effects could be negligible if there are other, bigger problems in the dac which mask smaller ASRC problems.

EDIT: If we get to talking about dac chips we will find similar issues there too. Making a really good modern high performance dac is not a trivial undertaking.
 
I think it's also worth mentioning that introducing an ASRC generally means hard clipping at 0 dBFS, with no more headroom for intersample overs... This is probably why some DACs with ESS chips have mediocre jitter performance over SPDIF: they disable the DPLL on purpose for better digital headroom. Obviously it would have been better to go with AKM in the first place then, but I suppose that wasn't really an option for a good while after the fire...
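For what it's worth, the intersample-over effect is easy to demonstrate: a tone at fs/4 whose samples all sit exactly at 0 dBFS reconstructs to a peak about 3 dB higher, which any fixed-point stage pinned at 0 dBFS, ASRC included, will clip. A quick sketch using FFT resampling as a stand-in for the reconstruction filter:

```python
import numpy as np
from scipy.signal import resample

fs = 44_100
n = np.arange(1_024)
# fs/4 tone, phase-shifted so the samples land between the true peaks:
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
x /= np.abs(x).max()                 # sample peak now exactly 0 dBFS

y = resample(x, 8 * len(x))          # 8x upsample approximates reconstruction
print(f"sample peak:        {np.abs(x).max():.3f}")   # 1.000 (0 dBFS)
print(f"reconstructed peak: {np.abs(y).max():.3f}")   # ~1.414 (about +3 dBFS)
clipped = np.clip(y, -1.0, 1.0)      # what a stage hard-limited at 0 dBFS does
```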