Geddes on Distortion Perception

But following this "ears can't tell" law: how can anybody prove that nobody can hear the lower orders of high harmonic distortion if the only proof is found by listening to the system? This is really hard to follow, if not impossible.

...

But I don't understand at all where you're coming from when someone changes one parameter and gives a thumbs up by means of a listening session and you consider it invalid, while to me you're doing the same thing. Okay, the population is larger, but you might want to explain more about what counts as solid or scientific proof to you.
Scientists know that nothing can ever be proven, but things can be disproven. I could never prove that Gm is absolutely correct, but I did disprove that THD is. We don't prove things like in mathematics; we look at evidence and how that evidence bears on the problem at hand, leading us to believe that the truth is one way or the other. But we must also look at the nature of that evidence. Is a single "thumbs up" evidence? To me it is such weak evidence as to be no evidence at all. I try to only use evidence that has been obtained in controlled scientific studies, like those Lidia and I did, and Toole and Olive. Those studies have been carefully designed to minimize bias and extraneous nuisance variables that can nullify the validity of the evidence. If your "evidence" contradicts evidence gained in a rigorous scientific manner then you really must question the validity of that evidence. When I see statements here that go against solid scientific evidence then I feel compelled to interject - especially when the thread contains my name.

Guesses are not evidence. When someone makes a guess they should ask "Is there any evidence to support this guess?" All too often a guess is made and it sounds good to someone else who supports it, again without any supporting evidence. This then establishes crazy ideas as virtual facts and someone has to bring that pie-in-the-sky idea back down to earth and disclose it for what it is. It's a tough job, but somebody has to do it.

Because I have been doing this for so long (more than 50 years), I have gained a lot of experience in what has been shown to be true scientifically. You can take what I say or leave it, but you can be assured that it is well supported by my experience.

I used a bass driver as an example to make an easy case. Almost no system has its impedance linearized in that frequency range.

I'm not trying to shoot holes in your view of system design, but in speaker design in general, as done here and by the rest. It's another point of view which seems hard to get across, and it seems to be a major point of disagreement between us, while it really shouldn't be. Or so I assume.

We both use compression drivers so the question was interesting, to me at least.
Please state the use case, or the fault in the calculations, because there's not much that proves otherwise.

Link again, for your convenience:
Damping factor

Thanks for your time and effort, either way I'll get out of your hair

I simply do not follow your argument about the impedance curve and amplifier damping factor (DF). What is it that you are trying to claim? How one deals with the amplifier's output impedance is straightforward engineering; not much subjective opinion or guesswork is required. Simply state the case and analyze it like any other engineering problem.

Audibly, in situ, the amp DF will have almost no effect on the woofer, but it can have a huge effect on the crossover, since a passive crossover must assume some amplifier output impedance, and if the actual value differs from that assumption the crossover won't work correctly anymore. That's the point that I was making and no other.
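That crossover point can be put in numbers with a minimal sketch, assuming a purely resistive 8 Ω driver and a single series inductor (real driver impedances are reactive, so this only illustrates the mechanism): the amplifier's output resistance adds into the voltage divider, shifting both the level and the corner frequency that the crossover designer assumed.

```python
import math

def lowpass_response(f, L=1.0e-3, R_driver=8.0, R_source=0.0):
    """Magnitude response of a 1st-order series-inductor low-pass
    feeding a (simplified, purely resistive) driver, including the
    amplifier's output resistance R_source in series."""
    w = 2 * math.pi * f
    # Voltage divider: driver R against (R_source + jwL)
    return abs(R_driver / complex(R_driver + R_source, w * L))

def corner_hz(L=1.0e-3, R_driver=8.0, R_source=0.0):
    """-3 dB corner of the same network."""
    return (R_driver + R_source) / (2 * math.pi * L)

# Near-ideal amplifier (very high damping factor):
f_ideal = corner_hz(R_source=0.0)   # ~1273 Hz
# Amplifier with 2-ohm output impedance (DF = 4 into 8 ohms):
f_low_df = corner_hz(R_source=2.0)  # ~1592 Hz, corner up ~25%
# The passband level also drops: 8/(8+2) = 0.8, about -1.9 dB.
```

With a realistic impedance curve in place of `R_driver` the errors become frequency dependent, which is exactly why a crossover designed against one source impedance misbehaves with another.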
 
The biggest limitation people forget about in many studies is how many assumptions are made to simplify the problem. Thus the results may seem to make sense, but once you apply real-world conditions, suddenly things are not so black and white.
Lots of measurement methods were developed in the days when the performance of electronics was much worse than today. Nowadays, the electronics in test equipment are getting so close to the device under test that it is becoming more and more difficult to tell whether the measurements are usable or not, and the interaction between the test equipment and the device under test is often ignored.
Software data analysis is becoming a powerful tool for looking at data from a different perspective, but the verification and validation processes are generally not thorough, and rely on field testing to finally find their place. To make things more complicated, the coding practices and configuration control processes can quite often allow bugs to pass through, especially in cost-sensitive commercial products. I am not sure whether it is still the practice or not, but back in the days when flight control systems started to get digital controls, commercial aircraft required multi-channel software written by totally isolated teams for each channel to eliminate common-mode bugs, a more cost-saving practice than the complicated fault management and testing process used in military software development.
So detecting a problem in audio still needs to start from what you wish to improve in the listening experience, before you try to quantify through measurements which specific aspects you wish to improve, and pinpoint where in the system improvements can be made.
 
Audio is engineering, no different in principle from aerospace control systems. You set a target, design to it and then test your success. Setting the target is for many the hardest problem, but not really for me, as I am pretty clear about, and express exactly, what my goals are and why. The rest is engineering and quality control.
 
Ah, but the development process is quite different. When we do not expect improvements and use pretty much off-the-shelf components, we can have detailed specs for components in the aerospace industry. In audio, it is hard to get a spider that meets a specific stiffness curve and other resonance characteristics, as well as service life data. When I asked suppliers what stiffness data they had for existing products, I got this blank look...
 
That's an easy enough problem to solve - just get many different spider samples and test them yourself. The problem is guaranteeing that the supplier makes it the same way every time. Still, that doesn't seem to be a problem for many manufacturers - there are many speakers made and sold today that measure pretty damn close to the ones made and sold 10+ years ago.
 
The problem is that when trying to develop a new driver, you would like good comparative analysis against existing parts to see if there is any existing part one can use. With spider and surround parts, if data were available, you could guess whether all the parts need to be redesigned or existing parts can be used. I guess when the time comes, I can just send them a spec and see how they respond.
 
I find it very interesting to follow this discussion.

One question, Dr. Geddes. By now, several studies have proved you right that simple THD numbers in loudspeakers don't correlate well with listener perception. But I believe you were the first to say so in a decisive manner. Kudos! What I do wonder, though, is whether your results and the results in the subsequent studies warrant the statement that distortion as such is not that audible. Could it be that the traditional methods for measuring THD simply haven't been advanced enough to capture the real "distorting behavior" of loudspeakers?


Here's a recent AES paper, for example, which proposes a new multi-tone measurement system for identifying nonlinear distortion in loudspeakers: AES E-Library >> Method for Objective Evaluation of Nonlinear Distortion


Could it be that the results for distortion detection would have been different using this type of distortion measurement? And how does this kind of multi-tone distortion relate to the GedLee metric?
 

Thanks

I tire of people saying that I don't believe that nonlinear distortion is audible. I have never said that and I don't believe it's true. Some is and some isn't, but without a valid metric we cannot tell which is which.

The GedLee metric Gm was shown in a blind test to correlate with perception to a high degree. Could some other test do as well? I see no reason why not, BUT, and this is the key point, they must be shown to be better in exhaustive psycho-acoustic tests. Until they have been proven to work they may or may not be better than THD or Gm.
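To make the metric concrete, here is a numeric sketch. The form below - Gm as the square root of the integral of cosh(2x) times T''(x) squared, with T the transfer curve normalized to [-1, 1] - is the commonly quoted version of the Geddes-Lee definition; treat the exact weighting as an assumption and consult the AES papers for the published form. The cosh weighting emphasizes curvature near the zero crossing, where low-level signal detail lives.

```python
import math

def gm_metric(T, n=2001):
    """Numeric sketch of the GedLee metric (weighting assumed):
    Gm = sqrt( integral_{-1}^{1} cosh(2x) * T''(x)^2 dx ),
    with T the amplitude transfer curve normalized to [-1, 1].
    T'' is taken by central finite differences."""
    h = 2.0 / (n - 1)
    total = 0.0
    for i in range(1, n - 1):
        x = -1.0 + i * h
        d2 = (T(x + h) - 2.0 * T(x) + T(x - h)) / (h * h)
        total += math.cosh(2.0 * x) * d2 * d2 * h
    return math.sqrt(total)

# Smooth 10% cubic compression - nonlinearity grows with level:
soft = lambda x: x - 0.1 * x ** 3
# Tiny crossover-style kink at the zero crossing (peak deviation
# only ~0.1% of full scale):
kink = lambda x: x - 0.1 * math.tanh(80.0 * x) / 80.0

print(gm_metric(soft))   # ~0.78
print(gm_metric(kink))   # ~0.93 - the barely visible kink scores higher
```

The point of the example: THD for the tiny kink would be minuscule next to the 10% cubic term, yet this weighting ranks the zero-crossing nonlinearity as the more serious one, in line with the thesis that low-level nonlinearity matters most.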

Is multi-tone better? I don't know; has it been shown to be better? Does it take into consideration aspects of human hearing? I don't see how any metric that does not consider hearing could be better than one that does.

But the bottom line is: has the proposed metric been shown to work? Olive has shown that coherence correlates well with subjective impression in headphones. Is this better than Gm? I don't know; it's never been tested. Clearly if it correlates with perception then it is better than THD or IMD, because they do not, but the comparison to Gm has not been done.
 

Thanks! Excellent reply!
 
Basically, distortion as we are used to thinking about it is completely incorrect. This was further confirmed when we did a study of compression drivers published in the JAES. In this study, none of about 30 subjects could hear nonlinear distortion up to the thermal limit of the driver - some 126 dB at the waveguide. This result was surprising and quite controversial, but it is holding firm as quite correct.

I'm curious whether this could be learned. Based on my own preferences, it seems I am drawn to speakers that display little or no compression in the mid-treble, at least when measured at 70 and 90 dB reference points.

Given the brain's pliability, I wonder if I taught myself how to do this, and as a result, spend more on drivers than I would otherwise.

Best,

E
 
Hi Erik

As Olive has shown, learning is an important aspect of evaluation. He has shown that trained listeners will come to the same conclusions as untrained ones; they can just do it quicker. I fully support this point of view.

"Compression" as you talk about it is a complex subject. Some call it dynamics. I think that it is an important aspect of audio, but alas there is no concrete definition of this effect such that one can actually test it. Perhaps "thermal compression" is what you mean, but I have tried to test this in a dynamic situation and was not able to do so. With thermal, the time constants become a real issue, and a complex one. But I think that thermal is just one aspect of the whole idea of "compression".
 
Hi Lee,

Sorry if I am using the wrong terms. I'm definitely not talking about thermal compression.

I'm talking about measuring the FR at two different levels to see if the system's voltage sensitivity changes with input level. This is routinely measured by the "Deviation from Linearity" testing done at SSN. Here's one example:

SoundStageNetwork.com | SoundStage.com - NRC Measurements: Focal Sopra No2 Loudspeakers
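The two-level comparison is simple enough to sketch. The sweep data below is hypothetical (not SSN's measurements); the idea is just that a linear system must track the drive-level change exactly, so any shortfall is compression at that frequency:

```python
def compression_db(freqs, spl_low, spl_high, drive_delta_db=20.0):
    """Deviation-from-linearity sketch: two frequency responses taken
    at drive levels drive_delta_db apart. A perfectly linear system
    shows spl_high - spl_low == drive_delta_db at every frequency;
    any shortfall is level-dependent compression there.
    Inputs are parallel lists of Hz and dB SPL."""
    return [(f, (hi - lo) - drive_delta_db)
            for f, lo, hi in zip(freqs, spl_low, spl_high)]

# Hypothetical sweeps at 70 dB and (nominally) 90 dB drive:
freqs  = [1000, 2000, 4000, 8000]
spl_70 = [70.0, 70.5, 69.8, 70.1]
spl_90 = [90.0, 90.3, 89.0, 88.6]
for f, dev in compression_db(freqs, spl_70, spl_90):
    # negative deviation = compression at that frequency
    print(f, round(dev, 1))
```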

I have also seen from Linkwitz something which may be very different. I think he uses burst tests to see if there is evidence of a change in output as the drivers warm up. I can't find a link right now.

So these are two separate issues. The former is what I feel I am sensitive to, especially in the tweeters.


Best,

E
 
Lee is not my name. My name is Geddes and Lidia's is Lee, so we created GedLee.

It would be paramount to know how long the signals played in those tests because, as I said, thermal is all about time constants. Of course not all of the changes are thermal related, and that's the problem with that test: you can't tell what's thermal and what's excursion related. I could do an FR test so fast that no thermal deviation would likely be seen even at very high levels. But slow the test down and huge thermal differences would occur. I did not see any mention of test speed.

In general a compression driver would be highly immune to this kind of test. Compression drivers have 10-100 times better power handling than a direct-radiating tweeter. When I did this kind of test I found that for compression drivers the crossover changed more than the drivers, but with direct radiators the tweeter had huge changes with power.

I would think that a nice test would be to set the playback level at 90 dB and test with a short test, then double the test duration, double again, etc. This would show the thermal time constants as well as the deviations due to non-thermal issues. I repeat - with thermal it is all about time constants. (Those who understand the heat equation versus the wave equation will get that with heat everything is low-passed, so only the very long time constants matter - sometimes as long as minutes to fully heat a magnet.)
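The doubling test is easy to model with a one-pole thermal sketch (all element values below are illustrative assumptions, not measured driver data). The coil heats with a single time constant; the hot coil's higher resistance draws less current from a voltage-source amplifier, compressing the output:

```python
import math

ALPHA_CU = 0.0039  # temperature coefficient of copper resistance, per K

def thermal_compression_db(t_s, p_w=20.0, r_th=3.0, tau_s=5.0, re0=6.0):
    """One-pole thermal sketch: coil temperature rise after a burst of
    duration t_s at dissipated power p_w, through thermal resistance
    r_th (K/W) with time constant tau_s (s). Returns the output drop
    in dB caused by the hot coil resistance (voltage drive assumed).
    Values are illustrative, not data for any real driver."""
    d_temp = p_w * r_th * (1.0 - math.exp(-t_s / tau_s))  # kelvin
    re_hot = re0 * (1.0 + ALPHA_CU * d_temp)
    return 20.0 * math.log10(re0 / re_hot)  # negative dB = compression

# Double the burst duration each step - short bursts barely register,
# long ones expose the full time constant:
for t in (0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0):
    print(t, round(thermal_compression_db(t), 2))
```

A real driver has at least two constants (coil in seconds, magnet in minutes), so a multi-pole version would be needed to reproduce the "minutes to fully heat a magnet" tail.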
 
Sorry Earl Geddes!

Thanks for the explanation.

Yep. Some of the tone burst tests I saw were really convincing. You can see the thermal effects take hold within a couple of cycles.

Makes sense, since with a compression driver, 70 and 90 dB at 1 m is still far below 1 watt. Also interesting to try AMTs. The better ones have astounding dynamic range and power handling. I would expect them to perform well even when warmed up.

You are the first person here I've talked to who understands that compression has thermal and mechanical components which may arise together at relatively low outputs. :)

Best,


E
 
If your "evidence" contradicts evidence gained in a rigorous scientific manner then you really must question the validity of that evidence. When I see statements here that go against solid scientific evidence then I feel compelled to interject - especially when the thread contains my name.

Love it. :D

After having just spent time in another thread pushing back against some of the handed-down audiophile dogma about exotic speaker cables - where nebulous, ill-defined changes in sound quality are claimed that aren't explained by the RLC properties of the cable and can't be measured, for example silver wire sounding "better" than copper wire - I can completely appreciate your position.

Sometimes it's necessary to push back on these sketchy beliefs to bring a dose of reality and science into the equation. If pie-in-the-sky beliefs always go unchallenged then they will continue to propagate with a life of their own...
 
I don’t know if silver is better than copper or not, but there are differences in measurements among different cables. As a matter of fact, this is one of the mysteries I intend to get to the bottom of. Just last week a friend wanted to make a pair of power cables to test on my system to see if there is an audible difference or not. He spent much time finding an IEC plug he was satisfied with that would fit my system...
 

I have even less time for audible differences between power cables than I do for exotic speaker cables...

If you can hear a difference between one power cable and another you have a pretty cruddy power supply in the device the cable is connected to!

At least with speaker cables there is a mechanism for them to sound different - series resistance interacting with the speaker's impedance curve.
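That mechanism is straightforward to quantify. A sketch, using a hypothetical two-way impedance curve (five points, with a minimum near the crossover) rather than any measured speaker: the cable's series resistance forms a divider with the frequency-dependent load, so the response error tracks the impedance curve.

```python
import math

def cable_response_deviation(z_curve, r_cable=0.3):
    """Series cable resistance r_cable (ohms, both conductors) against
    a speaker's frequency-dependent impedance: the voltage divider
    z / (z + r_cable) varies wherever z varies. Returns (hz, dB)
    deviations relative to the mean gain across the curve.
    z_curve: list of (hz, ohms) points."""
    gains = [(f, 20.0 * math.log10(z / (z + r_cable))) for f, z in z_curve]
    mean = sum(g for _, g in gains) / len(gains)
    return [(f, g - mean) for f, g in gains]

# Hypothetical 2-way speaker: bass resonance peak, minimum near crossover:
z_curve = [(50, 30.0), (200, 7.0), (2000, 4.0), (5000, 6.0), (10000, 9.0)]
for f, dev in cable_response_deviation(z_curve, r_cable=0.3):
    print(f, round(dev, 2))  # ~0.5 dB total spread across the band
```

With 0.3 Ω of cable the error is around half a decibel across the band, at the edge of audibility; swapping copper for silver changes that resistance by only a few percent, far below what the claimed differences would require.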
 
I have experienced everything from large differences, quite humiliating for an exotic power cable, to small differences, almost unnoticeable. But the problem always is: what measurement metrics do you use to show the difference? This is critical, because if one wants to say the power supply is the crappy part, you have to show data to prove it to a supplier, and in a way which correlates with the different power cables. I once showed data on capacitors to prove a supplier was not delivering what he was supposed to, and they admitted it.
Oh, on a recent visit to a new studio close by, the owner also used very specific power cables for the equipment, as well as a separate 100 A line specifically for the control room.
 