DAC blind test: NO audible difference whatsoever

Would not surprise me at all. I hear these DACs at shows every year. They all have different sonic signatures.

Yes, they sound different. We agree on that.

When people suggest that maybe only dogs and bats could tell a difference, they are thinking in the wrong terms. In other words, it isn't primarily frequency response that makes DACs sound different. Most people can learn to notice, or pay attention to, what is different between DACs, or at least some of what is different, with a little coaching. Many or most people don't notice any difference otherwise.

By way of analogy, we know that people of unfamiliar races are often perceived as looking almost completely alike. That happens if one doesn't know what differences to look for. Brains learn how to recognize patterns, but it can take some time and effort to learn how to notice small differences in novel types of patterns.
 
That tells us little. What does the impedance curve look like? One always needs to be concerned about the flatness of the impedance in any speaker with an XO used with SE amps (and their typical high output impedance).

dave

They are also 35W monoblocks. More than enough power. They outclass my older 600W/channel modded JC-1 monoblocks. Better bass with 35W than 600W of SS.

Steve N.
 
I told you that synchronous clocking on ESS DACs, bypassing their 'wonder-SRC' works & sounds better - not playing around with cables.

Works better as in measures better? Or works better in what way?

Also, it would be interesting and helpful if you could give us a make and model number of an ESS-based DAC that allows bypassing the built-in SRC. And then, if you could describe how you connected it to your other clock without using cables, or without any effects from cables, that would be helpful too, in case someone would like to replicate the experiment. For the same reasons, what clock source would you recommend using for the demonstration (make and model, please)?

It would be really great if you could record the ESS DAC's output both ways, with and without SRC, into a high-quality ADC. Then post the files along with the hi-res source file you used, so we can all hear what you are talking about.
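If such files ever do get posted, here is a minimal sketch of how the two captures could be compared offline with a simple null test. The file names and the 10-second alignment window are placeholders I made up; nothing here refers to actual recordings from this thread, and the sketch assumes the two captures do not drift against each other (if they came from different clock domains they would need resampling to a common rate first).

[CODE]
# Minimal null-test sketch: load two captures of the same material, align them,
# level-match, subtract, and report how deep the null is. Python with numpy/scipy.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def load_mono(path):
    rate, data = wavfile.read(path)
    if data.ndim > 1:                       # fold multichannel to mono
        data = data.mean(axis=1)
    return rate, data.astype(np.float64)

rate_a, a = load_mono("dac_with_src.wav")    # hypothetical capture 1
rate_b, b = load_mono("dac_without_src.wav") # hypothetical capture 2
assert rate_a == rate_b, "captures must share a sample rate"

# Align by cross-correlating the first ~10 seconds of each capture.
n = min(len(a), len(b), int(rate_a) * 10)
lag = int(np.argmax(correlate(a[:n], b[:n], mode="full"))) - (n - 1)
if lag > 0:
    a = a[lag:]
elif lag < 0:
    b = b[-lag:]
m = min(len(a), len(b))
a, b = a[:m], b[:m]

# Least-squares level match, then subtract and report the residual in dB.
gain = np.dot(a, b) / np.dot(b, b)
residual = a - gain * b
null_db = 10 * np.log10(np.mean(residual ** 2) / np.mean(a ** 2))
print(f"Null depth relative to the signal: {null_db:.1f} dB")
[/CODE]

A deep null (many tens of dB down) would suggest the two captures are essentially the same; a shallow one would at least give a residual file worth listening to.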
 
Actually, I did not use the word transparent. If I had meant transparent I would have said it. Please don't attempt to put words in my mouth.
No, you did not use the WORD transparent but in terms of jitter (which is what transparent refers to above) you said that "it does not create any problems" - does this not mean it's inaudible or are you playing word games here?

"have very little jitter activity below 1 Hz, and it is too low frequency to affect the sound of music, therefore it does not create any problems
 
Maybe you are thinking of OTL amps? My SET amps have a 4 ohm transformer winding.

Steve, you are leading me to believe that although you may understand DACs very well, you do not understand the amp/speaker interface.

Most SETs have a high output impedance, and if the XO in your speakers causes the impedance curve to deviate substantially from flat (i.e. most speakers with XOs), then that speaker impedance interacts with the output impedance in such a way that it acts like a fixed tone control set to not-flat.
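To put rough numbers on that, here is a minimal sketch of the voltage divider formed by the amp's output impedance and the speaker's impedance curve. The output impedance and the impedance points below are invented purely for illustration; substitute a measured curve and the actual output impedance of the amp in question.

[CODE]
# Voltage-divider sketch: level at the speaker terminals relative to an ideal
# (zero output impedance) amp, using impedance magnitudes as a first-order estimate.
import math

z_out = 2.5  # ohms -- assumed SET output impedance, purely illustrative

# (frequency in Hz, speaker impedance magnitude in ohms) -- hypothetical curve
speaker_z = [(40, 20.0), (200, 6.0), (1000, 5.0), (2500, 12.0), (10000, 7.0)]

for freq, z_spk in speaker_z:
    deviation_db = 20 * math.log10(z_spk / (z_spk + z_out))
    print(f"{freq:>6} Hz  Z = {z_spk:4.1f} ohm  ->  {deviation_db:+.2f} dB")
[/CODE]

The spread between those lines (a couple of dB with these made-up numbers) is the fixed, speaker-dependent EQ I am talking about; with a flat impedance curve the loss is constant and harmless.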

dave
 
Yes, they sound different. We agree on that. <snip>

When mentioning dogs and bats, I didn't only think about high frequencies, although it is most often the high frequencies that are mentioned as the deficiency of digital reproduction compared to analog. I was also thinking about spatial qualities, which may or may not be related to jitter (I'm not aware of any research into this). I'm fairly certain that bats and dogs have a spatial hearing apparatus that is much more developed than ours.

But it remains that noise, jitter, etc. in a device such as the Benchmark DAC2 or DAC3 are many, many decibels below what are commonly assumed to be human hearing thresholds. If the Benchmark indeed has a sonic character, that is actually fairly extraordinary, and the implication is that current theories about human hearing (and current measurement practices in audio) probably are insufficient. I would therefore encourage you to explore this systematically and publish blind test results, if you are able to do so. I would love to be proven wrong in this regard.
 
No, you did not use the WORD transparent but in terms of jitter (which is what transparent refers to above) you said that "it does not create any problems" - does this not mean it's inaudible or are you playing word games here?

"have very little jitter activity below 1 Hz, and it is too low frequency to affect the sound of music, therefore it does not create any problems

DACs are not transparent for a variety of reasons; jitter is only one factor. Jitter in the Benchmark DAC-3 may or may not be completely inaudible, I don't know. I do believe that jitter isn't its worst problem; it may be the least of its problems. But I could not and would not say that the DAC-3 is transparent on any account. I have one and I don't think it is transparent.

Regarding the original graphs you posted from Bruno showing some jitter-like ripples, it looks like those artifacts were not caused by incoming jitter, but by the relative difference in clock frequencies between the transport and DAC clocks. With the two clocks off by exactly 1 Hz, some artifacts were produced that Bruno suggested were related to the filter bandwidth of the ASRC, apparently the low-frequency corner in particular. The graphs showed that if the transport and DAC clocks were off from each other by about 25 Hz, that was enough to suppress the jitter-like ASRC artifacts.

However, Bruno said the results were from a chip that was never released, although he said the ASRC made it into the Sabre DACs. He did not say whether the low-frequency corner was programmable or changed in the version of the ASRC algorithm that made it into the Sabre DACs. If it is settable with a control register, or if it was changed in some other way in the Sabre DACs, it could be that the problem of artifacts being produced when the clock frequencies are off by 1 Hz was fixed. I don't know, and you have not indicated that you know either.

Anyway, I want to be clear that artifacts due to clock frequency differences are not the same thing as jitter rejection. They are two different mechanisms that can produce similar-sounding results, but how they do it is not the same at all.
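To make that distinction concrete, here is a tiny sketch contrasting the two mechanisms. The sample rate, the 1 Hz offset, and the jitter figure are arbitrary numbers for illustration, not measurements of any particular DAC.

[CODE]
# A static clock-frequency offset accumulates timing slip; jitter is a zero-mean,
# bounded modulation of each sample instant. Different mechanisms, different behavior.
import numpy as np

fs = 44100.0                      # nominal sample rate (arbitrary choice)
seconds = 5
t = np.arange(int(fs * seconds)) / fs

# Case 1: DAC clock runs 1 Hz fast relative to the transport (fixed ratio error).
# The slip grows linearly until a FIFO or the ASRC's ratio tracker absorbs it.
offset_slip = t * (1.0 / fs)      # seconds of accumulated slip

# Case 2: random jitter on each sample instant, with no net drift.
rng = np.random.default_rng(0)
jitter = rng.normal(0.0, 50e-12, t.size)   # 50 ps RMS, an arbitrary figure

print(f"Clock offset after {seconds} s: {offset_slip[-1] * 1e6:.1f} us of slip (keeps growing)")
print(f"Jitter over {seconds} s:        {np.std(jitter) * 1e12:.0f} ps RMS (zero-mean, no drift)")
[/CODE]

How an ASRC handles the slowly moving ratio in case 1 versus the noise in case 2 depends on different parts of its design, which is the point above.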

Also, my interpretation from looking at the Benchmark and Stereophile measurements of the DAC-3 is that Benchmark has done a few things to make the performance of their Sabre-based DACs better than that of Sabre evaluation boards or designs based on the evaluation circuits. Therefore, even if there were some change or advantage from turning off the ASRC with some clock sources in some Sabre DAC implementations, it is not clear to me that any of that would also apply to the Benchmark implementation. Until somebody can show that it does, I will withhold judgement on the matter.
 
Steve, you are leading me to believe that although you may understand DACs very well, you do not understand the amp/speaker interface.

Most SETs have a high output impedance, and if the XO in your speakers causes the impedance curve to deviate substantially from flat (i.e. most speakers with XOs), then that speaker impedance interacts with the output impedance in such a way that it acts like a fixed tone control set to not-flat.

dave

The 4 ohm transformer winding takes care of that. These speakers were designed to be high-efficiency. I have used them with SS as well as my SET amps.

Steve N.
 
the implication is that current theories about human hearing (and current measurement practices in audio) probably are insufficient.

I think it is pretty clear what the role of the basilar membrane is and how it affects hearing. That has been studied quite a bit and the research is pretty consistent. We aren't finding evidence that mistakes were made there.

What was very poorly understood at the time most hearing research was being done is the role of the brain's processing of nerve impulses arriving from the ears. It was, however, known to be extremely complicated and hard to research. Due to recent advances in cognitive psychology and related neuroscience we are starting to learn more about aural processing in the brain, but it's probably going to take decades before we have a reasonably complete picture.

One of the things that has limited the scope of research to date is that hearing research, or listening research as I prefer to call it when referring to the brain's role in processing, is very complicated and costly to undertake, and there is no clear justification for funding it and dedicating scientific careers to it. For the most part nobody cares what some people are able to hear, except maybe a few people on internet forums who are either curious about it or who like to argue about it. The bottom line is that those few people on internet forums don't want to pony up the several tens of thousands of dollars needed even for a preliminary study. In addition, no government agency sees a need to fund such research; there is no obvious military or social value likely to result that would justify the expenditure.

So, people are left on their own to figure out what they want to believe. What they end up believing often seems to be based on what they hear, plus an unstated assumption that they perceive reality as it truly is at least as well as most people, so claims that others can hear much more feel wrong, and we tend to go with how it feels to us.

The issue is very much complicated by some confounding factors. Two big ones are: (1) people can imagine hearing differences when there are none, and they can't always tell whether their perceptions are real or imagined, and (2) ABX testing as it is currently implemented is probably not the most sensitive blind testing protocol available, but it does seem to be the only blind protocol with software and hardware readily available for use; the quick power calculation below illustrates the sensitivity problem.
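On point (2), a back-of-envelope binomial power calculation shows why short ABX runs say so little. The 60% correct rate assumed for the listener is a made-up figure, just to represent a small but real audible difference; the test is the usual one-sided comparison against 50% guessing.

[CODE]
# Power of an ABX run against a listener who is right 60% of the time,
# using a one-sided binomial test against 50% guessing at alpha = 0.05.
from scipy.stats import binom

def abx_power(n_trials, p_listener=0.60, p_chance=0.50, alpha=0.05):
    # Smallest score that would be called "significant" if the listener were guessing.
    k_crit = int(binom.ppf(1 - alpha, n_trials, p_chance)) + 1
    # Probability that the 60% listener actually reaches that score.
    return 1.0 - binom.cdf(k_crit - 1, n_trials, p_listener)

for n in (10, 16, 25, 50, 100, 200):
    print(f"{n:>3} trials -> chance of a positive result: {abx_power(n):.0%}")
[/CODE]

With the short 10 to 16 trial runs people typically do, the odds of that hypothetical 60% listener reaching significance are poor, which is one reason a null ABX result by itself doesn't settle much.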

Some of the things we need to get going with research in this area include better blind-testing software with high sensitivity for small differences, and some kind of characterization of standards for test systems that are shown to be sufficient for revealing small differences in DUTs. Developing better test software and proving what standards are needed for audio systems used in human listening tests is some of the first research that should probably be undertaken. It would probably cost, as others have suggested, several tens of thousands of dollars to get started. It would also be a lot of work, with many potential pitfalls to avoid. Does anyone want to chip in some of their lunch money for the effort? If so, Earl Geddes might be interested in doing some more work of that nature.

Another problem would likely be seeking out talented listeners and transporting them somewhere they could be formally tested, or else bringing the test system and a proctor to them. How much would it cost to put together 10 standardized listening systems and fly 10 proctors around the country to test candidates who seem like good prospects as super-listeners? It's easy to burn through money very fast with a program like that.
 