Distortion in commercial amplifiers

My main gripe: why is everyone chasing low THD? THD is the least of our worries when designing amplifiers. Intermodulation, TID (transient intermodulation) and plenty of other non-harmonic distortions are far more audible than THD. Plenty of tests show that THD below 1% is very difficult to identify, and there are plenty of highly respected, great-sounding tube/valve amplifier designs without vanishingly low THD that go towards proving the case.

This was a major question back in the rec.audio and hydrogenaudio days, often posed as, “why chase more zeros?”.

On the question of harmonics, my view is that the harmonic spectrum the average tube amplifier produces is different from that of solid state. For the latter, lower distortion keeps the design on the safe side, since amplifiers often don't perform exactly the same driving reactive loads. More useful than a single number is a Bode plot of distortion versus frequency; with that, it's possible to get some indication of the type of feedback mechanism being applied.

Vanishingly low distortion is most often a good indicator of design methodology, and of low IMD, even though IMD is usually higher than the harmonic distortion. 0.001% THD means the summed power of all harmonics is -100 dB relative to the signal, and that's considered inaudible. 0.0001% is even better from an academic viewpoint; surely the harmonic orders no longer bear any weight, and should its performance degrade a bit under load, there is margin for error.
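As a quick sanity check on those numbers, a THD percentage converts to dB relative to the fundamental as 20·log10(THD/100). A minimal sketch:

```python
import math

def thd_percent_to_db(thd_pct: float) -> float:
    """Convert a THD percentage into dB relative to the fundamental."""
    return 20 * math.log10(thd_pct / 100)

print(f"{thd_percent_to_db(0.001):.0f} dB")   # 0.001% -> -100 dB
print(f"{thd_percent_to_db(0.0001):.0f} dB")  # 0.0001% -> -120 dB
```

So each extra "zero" buys another 20 dB below the signal.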

At higher levels of distortion, say 1%, I think we can mostly see how THD becomes useless, because the fundamental frequency and the exact harmonic orders can make the distortion quite objectionable. 1% THD could indicate anything: it could be entirely 2nd order with no higher harmonics, a mix, or it could consist entirely of 3rd, 5th, 7th, 9th and so forth at equal magnitudes, in which case it could certainly be audible. And if the amplifier exhibits 1% distortion on a test load, there is no margin for error with a load it really dislikes. I suspect this is part of why some people end up buying/building a myriad of amps trying to improve on their previous one.
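The point that one THD number hides the harmonic mix can be shown directly: THD is the root-sum-square of the harmonic amplitudes over the fundamental, so very different spectra collapse to the same figure. A toy illustration (the amplitudes here are made up for the example):

```python
import math

def thd(fundamental: float, harmonics: list[float]) -> float:
    """THD in percent: root-sum-square of harmonic amplitudes over the fundamental."""
    return 100 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# All of the distortion in a single, relatively benign 2nd harmonic:
print(round(thd(1.0, [0.01]), 6))           # 1.0

# The same 1% spread over 3rd, 5th, 7th and 9th at equal magnitude:
h = 0.01 / math.sqrt(4)
print(round(thd(1.0, [h, h, h, h]), 6))     # 1.0, but far more objectionable
```

Identical THD, audibly very different amplifiers.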
 
I recall that Stereophile uses a low-pass filter only on class D amplifiers. Using it on class AB would make the numbers equally dishonest. ;)

tombo56 and knutn - Class A achieves truly low distortion without the switching that can goof up class AB. I am more inclined to believe simulated THD <0.01% on class A than class AB.
Ed
Depends whether it's real class A or overbiased A/B, a.k.a. "push-pull A".

BJT or MOSFET? A MOSFET output stage needs very high bias to keep crossover distortion low at high frequency, so push-pull class A really only has that slight advantage: the high bias reduces the crossover distortion. Technically there's more A in the title, but the B part is still there, just in a different place.

Whether such high bias is even justified mainly depends on the front end of the amplifier, its slew rate, and the overall THD.

Regardless: even if a true class A actually did 0.01% or 0.001% distortion at 20 kHz into 4 ohms, somebody would dislike it, or it would be heavily criticized.

To achieve such low THD levels, the so-called 2nd-harmonic profile has to be almost non-existent; that's a fundamental basic of actually achieving low THD, as with all the other harmonics. On a graph it becomes the so-called "sea of grass", where THD is incredibly low. The "magic" second does not exist, and neither do all the other harmonics; hence such low THD.

There will always be arguments about the so-called "magic" of amplifiers. But it's fairly well known and accepted that high-distortion amplifiers can be pleasing in the bass and mid region.

Otherwise it depends on the recording source: digital or analog, year of recording, microphones used, etc. High-frequency performance being heard as "better" has been shown many times to be a combination of high slew rate and low THD. But even that is rarely justified, since many people listen to old recordings where, digital or "analog" alike, the high-frequency microphone distortion is miles above the amplifier's.

Anyway, on the difference between simulated and real-life THD, and the article: they really pushed it, which I admire, testing at 20 kHz into 4 ohms, plus noise.

In sim, most models seem to have poor default settings and produce gloriously good numbers. Likewise, most people sim at 1 kHz into 8 ohms, with no additional power-supply noise included. There are numerous ways to make a model more realistic and measure it to get realistic numbers; it's up to the user to adjust the software accordingly.

And yes, for once, include 4-ohm tests at 20 kHz. The other end of the bandwidth tends to be ignored as well: at 100 Hz, THD is often much higher, and "non-magical" amplifiers frequently show very high 2nd-order "magic" distortion at low frequency.
 
I took a brief survey of solid-state power amplifiers as reviewed by Stereophile. I chose one point: THD+N at 20 kHz into 4 ohms at roughly 20 V. These are demanding conditions. Stereophile's measurements are:

Code:
THD+N (%) at 20 kHz at 12.67-28.3 V into 4 ohms
---------------------------------------------
.003 Electrocompaniet AW 800M (*)

https://electrocompaniet.com/products/aw-800-m-reference-power

Otherwise, that is an absolutely incredible amplifier. Getting 0.003% into 4 ohms at 20 kHz at such high power is very difficult to achieve.

At a standard rating, THD dives another decimal place, to the advertised 0.0006%. Slew rate is around 270 V/µs, which as far as I know is very close to what you can get, or cannot exceed, with actual power transistors: around 250 to 300 V/µs.

The differential stage is probably way up at 2000 to 3000 V/µs, but the physical limit of the power transistors has been hit.
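For context on how much margin 270 V/µs buys, the peak slew rate a sine wave demands is SR = 2π·f·Vpeak. Using the 28.3 V rms test condition quoted above (Vpeak ≈ 40 V; the arithmetic, not the amplifier's internals, is all that's sketched here):

```python
import math

def sine_slew_rate_v_per_us(freq_hz: float, v_peak: float) -> float:
    """Peak slew rate of a sine wave, SR = 2*pi*f*Vpeak, returned in V/us."""
    return 2 * math.pi * freq_hz * v_peak / 1e6

# 28.3 V rms -> ~40 V peak, at the 20 kHz test frequency
v_peak = 28.3 * math.sqrt(2)
print(f"{sine_slew_rate_v_per_us(20_000, v_peak):.1f} V/us")  # ~5.0 V/us
```

So a 270 V/µs amplifier has over 50× margin at that test level; the headroom matters for staying linear, not for merely reproducing the waveform.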

What goes in is what comes out; subjective opinions and magical harmonics are simply eliminated.

The rest of the chain, the loudspeaker and the recording itself, is what remains to be judged.
 
...The few people on this forum who did double-blind amplifier tests found that there are only two things that matter, besides accurate gain matching:

Frequency response with an actual loudspeaker load

Clipping behaviour, if the amplifier is driven into clipping.

What about DC servos that smear LF transient response (not all of them do, however)? Of course, if the locals are doing their DBTs on ported speakers with flabby bass, no wonder they wouldn't notice. IOW, if the system is designed for FR only, it's not surprising if only FR matters.
 
Don't they? Any continuous-time high-pass filter with a low cut-off frequency has an impulse response with long time constants, so I would say they all smear LF transient response to some extent, no matter whether they are DC servos or AC couplings with or without bootstrapping.
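To put numbers on "long time constants": for a first-order high-pass, τ = 1/(2π·fc), and a step through it decays as exp(−t/τ), taking about 4.6τ to settle within 1%. A quick sketch (the corner frequencies are illustrative, not from any particular servo design):

```python
import math

def hpf_time_constant_ms(fc_hz: float) -> float:
    """Time constant of a first-order high-pass filter, tau = 1/(2*pi*fc), in ms."""
    return 1000.0 / (2 * math.pi * fc_hz)

# Settling to within 1% of a step takes about 4.6 time constants.
for fc in (0.1, 1.0, 10.0):
    tau = hpf_time_constant_ms(fc)
    print(f"fc = {fc:4.1f} Hz: tau = {tau:7.1f} ms, 1% settling ~ {4.6 * tau:6.0f} ms")
```

A 1 Hz corner already leaves a ~160 ms exponential tail on every LF transient, whether it comes from a servo or a coupling capacitor.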

As it's a linear effect that changes the transfer function from input to output, I would count it as part of the frequency response, but I don't know how PMA sees that ( https://www.diyaudio.com/community/...overall-amplifier-quality.407222/post-7552542 ).

Anyway, it's off topic in this thread.
 
The question is: when does it become audible? How many people use sealed speakers and/or speakers with equivalent time-domain response, then disable the servo and listen, then adjust the servo until LF smear on real music becomes inaudible? Judging from some amp schematics, people often copy what everybody else is doing, which may not be optimal.

It's traded off against the amp settling at turn-on, too. Most people won't use a relay to speed up settling at turn-on so that, in normal operation, the servo can be slow enough not to audibly affect LF music transients.

For anyone interested, a little historical note about time-domain speaker response versus simple FR is attached.


Regarding on- or off-topic: in my book, any audible distortion is a distortion. If the distortion is LF transient smear and it's audible, how can it not be a distortion? Just because it's a linear distortion, does it get a free pass? Anyway, FR was already mentioned by you, so you opened the door to phase, which is a parameter in the frequency domain (so to speak).
 

Attachments

  • NS-10M Measurments and History.pdf
    2.6 MB · Views: 38
Well, these two sentences combined form a question and an answer:

"The distortion measured in commercial amplifiers is roughly an order of magnitude higher than the simulated distortion numbers seen on this board.
What do you think? Are we living in a bubble?"

Because anyone who does not understand the difference between these three things, the distances between which are huge, has certainly placed himself in a bubble:
  • a (very conditionally professional and good) simulation in LTspice;
  • a single amateur-assembled prototype;
  • a product with medium or large production volume, which must have repeatable declared parameters, be reliable, and make a profit (at any selling price within the predefined range).

This is so obvious that it's almost boring.
But even in a bubble some fun can be had, because there is the sport of fierce fighting by correspondence and categorical statements like "in my model the nonlinear distortion coefficient is 0.00000000000000000001%, so this is the best-sounding amplifier in the world" :)
None of this is scary; it's even funny. After all, it's important that people don't get bored.
 
I think that the simulator can be quite accurate if the amplifier was designed to be largely insensitive to variations in device parameters and the layout was good. That implies relying more on local and global feedback and less on device matching. A class AB design has to be very careful with the inherent non-linearities.
Ed
 
In response to #46 and #74, this may be helpful. A significant difference between simulation and actual circuits is the layout (both point-to-point and PCB). Adding a resistor with an approximately realistic value in series with every connection (okay, at least the more critical high-current connections) might help the simulation.

Douglas Self (and surely others; I haven't read any of the esteemed power-amplifier books, though perhaps I should) has written a lot about this. Below is a web page that's been online a while; I think I first saw it in the late 1990s (see especially the parts marked Topological). Seeing where the currents go, and that every connection has non-zero resistance and so, with higher current (especially class B or AB, where each output transistor carries a highly distorted part of the waveform), may have a significant voltage across it, is clearly important.
http://douglas-self.com/ampins/dipa/dipa.htm
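A back-of-envelope example of why those series resistances matter (all the numbers here are hypothetical, chosen just to illustrate the mechanism): a half-wave class-B return current flowing through a few milliohms of shared ground trace superimposes a heavily distorted voltage on the signal reference.

```python
# Hypothetical class-B output stage: 40 V peak into 4 ohms, with each
# half-wave return current sharing a 10 milliohm ground trace.
v_peak = 40.0     # output peak voltage, V
r_load = 4.0      # load resistance, ohms
r_trace = 0.010   # assumed shared ground-trace resistance, ohms

i_peak = v_peak / r_load           # 10 A peak half-wave current
v_error = i_peak * r_trace         # 0.1 V of distorted voltage on the trace
relative = v_error / v_peak * 100  # as a percentage of the signal

print(f"error voltage: {v_error * 1000:.0f} mV ({relative:.2f}% of the signal)")
```

0.25% of half-wave-shaped error is far above the THD figures being debated here, which is why where that trace connects (and whether the sim models it) dominates the result.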
 
I think that the simulator can be quite accurate if the amplifier was designed to be largely insensitive to variations in device parameters and the layout was good. That implies relying more on local and global feedback and less on device matching. A class AB design has to be very careful with the inherent non-linearities.
Ed

A simulator is only as accurate as the transistor models: if the transistor parameters are accurately modeled, the simulator will yield a very accurate computation. After all, circuits with far more transistors than an amplifier are simulated correctly; think of a mixed-signal IC.

What is not computed in most, if not all, amplifier simulations is extraction of the practical circuit-layout parasitics; for that part, everybody relies on a collection of insights and rules of thumb that are supposed to help implement good EMC.
 
Part of the difference between simulation and test results at 20 kHz could be that crossover distortion is typically larger there, where reverse bias and switching time have more effect than at 1 kHz. In a class B amplifier, half the output stage is reverse biased, but the standard SPICE transistor model doesn't seem to consider reverse-biased operation. Some SPICE transistors are modelled as subcircuits containing a diode model plus a transistor model to include reverse-biased operation, though.
There are also some power-transistor charge-storage effects in switching that I'm not sure can easily be modelled.
SPICE models are just empirical fits that approximately follow what real components do, so expecting accuracy is optimistic, particularly with unverified data.
 
A simulator is only as accurate as the transistor models: if the transistor parameters are accurately modeled, the simulator will yield a very accurate computation. After all, circuits with far more transistors than an amplifier are simulated correctly; think of a mixed-signal IC.

What is not computed in most, if not all, amplifier simulations is extraction of the practical circuit-layout parasitics; for that part, everybody relies on a collection of insights and rules of thumb that are supposed to help implement good EMC.

That's why for ICs, IC designers do extracted-view simulations. The on-chip wiring is then extracted from the layout and modelled as a huge RC network. Inductances are usually neglected for on-chip connections, which sometimes leads to nasty surprises. IC designers also do Monte Carlo simulations to see the effect of mismatch.
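The Monte Carlo idea translates to discrete amps too: sample component values from their tolerance distribution and look at the spread of the result. A toy sketch for a feedback divider (the gain target, tolerances, and 3-sigma assumption are all illustrative, not from any real design flow):

```python
import random
import statistics

random.seed(0)

# Non-inverting amp, nominal gain 1 + Rf/Rg = 21, built from 1%-tolerance resistors.
RF_NOM, RG_NOM, TOL = 20_000.0, 1_000.0, 0.01

def sample_gain() -> float:
    # Treat the 1% tolerance as a 3-sigma bound on a Gaussian spread.
    rf = random.gauss(RF_NOM, RF_NOM * TOL / 3)
    rg = random.gauss(RG_NOM, RG_NOM * TOL / 3)
    return 1 + rf / rg

gains = [sample_gain() for _ in range(10_000)]
print(f"mean gain {statistics.mean(gains):.3f}, stdev {statistics.stdev(gains):.3f}")
```

The same machinery, with extracted parasitics and correlated device mismatch instead of independent resistor draws, is what the IC flow automates.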
 
No, inductances are not neglected for on-chip simulation. The whole layout is extracted as a big RLC (yes, there is an L in there) network, and the transistors are embedded in it. The EM simulators cope with inductance as easily as capacitance. At audio (which needs VHF/UHF modeling to resolve better than 0.001% distortion due to EMC), a simplified formula can be used, and it runs fast. Audio itself may not NEED better than 0.001% distortion, but there are applications for those op amps which require better fidelity. Multilayer digital PCBs need to be run the same way. Start looking at the mmWave bands for 5G beamforming and you need a simulator that is EXPENSIVE and takes a day or more to run the passive model, and another day or two to run a 5G signal through the whole works. At that point, mere transmission-line models and extracted mutual inductances are so far off that the receiver might not even be on frequency, let alone have the accuracy needed for phase and amplitude control.