Probably a trivial question, but it seems odd to me. Datasheets and other sources state that line levels are either 2Vpp or 2.5Vpp for consumer and 4.5Vpp for professional. The actual standards put consumer level at 894mVpp (-10dBV, i.e. -7.78dBu) and professional at 3.472Vpp (+4dBu). There's also a "standard" called "semi-pro" that seems to be anything in between.
Maybe those non-standard standards were devised to prevent clipping and were intended for inputs only? If so, using those levels as outputs, as they often are, defeats that purpose.
Other than the ARD standard (4.384Vpp/6dBu) which appears to be strictly adhered to, there are some insane nameless standards for some power amplifier inputs, ranging from 5.515Vpp (8dBu) to 34.732Vpp (24dBu)!
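(For reference, here's how I'm converting between those dB references and the peak-to-peak figures above - a quick Python sketch assuming sine waves, with 0dBu = 0.7746V RMS and 0dBV = 1V RMS:)
Code:
import math

DBU_REF = 0.7746  # volts RMS: 0 dBu, i.e. the voltage of 1 mW in 600 ohms
DBV_REF = 1.0     # volts RMS: 0 dBV

def dbu_to_vpp(dbu):
    """Peak-to-peak voltage of a sine wave at the given dBu level."""
    return DBU_REF * 10 ** (dbu / 20) * 2 * math.sqrt(2)

def dbv_to_vpp(dbv):
    """Peak-to-peak voltage of a sine wave at the given dBV level."""
    return DBV_REF * 10 ** (dbv / 20) * 2 * math.sqrt(2)

print(dbv_to_vpp(-10))  # consumer -10 dBV        -> ~0.894 Vpp
print(dbu_to_vpp(4))    # professional +4 dBu     -> ~3.47 Vpp
print(dbu_to_vpp(6))    # ARD +6 dBu              -> ~4.37 Vpp
print(dbu_to_vpp(24))   # power-amp input +24 dBu -> ~34.7 Vpp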
Can anyone enlighten me?
In what datasheets, and in what standards? I'm aware of 0dBm and +4dBu, but then I'm stuck in the 1960s.
Yes. Cite where you are seeing these numbers. They are strange to me. Also peak-to-peak is not historically common in audio (even though it may relate to clipping).
Ya, same era here. I started learning from my father's course books when I was about ten.
You'll find those levels in just about any audio part datasheet anyway. I just looked for one and found example circuits for the PCM1794A. There are two example output circuits for the device, one for 2V and the other for 4.5V: the same circuit with different gain. That datasheet calls them RMS values, which is also wrong; in RMS they would be 316mVRMS/1.228VRMS. Some datasheets have the same voltages and say Vpp. I always ignored the sheets and used the standard specifications, and the output levels were fine, but this started to bother me enough to ask, and Google wasn't helpful.
2 V RMS, so 4 sqrt(2) V peak-peak ~= 5.657 V peak-peak, is the most usual level for a consumer CD player playing a full-scale sine wave. No idea why.
This is a maximum rather than a nominal level, though. (Or to be precise, it would be the maximum level if there were no intersample overshoots.) CDs were never intended to be recorded so damn loud as they are now, so I guess the intended nominal level was substantially lower.
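(To see an intersample overshoot in action, here's a quick Python sketch - numpy/scipy assumed, purely as an illustration: the repeating full-scale sample pattern +1, +1, -1, -1 is just a sine at fs/4 sampled 45 degrees away from its peaks, and the reconstructed waveform peaks about 3dB above full scale.)
Code:
import numpy as np
from scipy.signal import resample

# Repeating 0 dBFS pattern: a sine at fs/4, sampled 45 degrees off its peaks.
x = np.tile([1.0, 1.0, -1.0, -1.0], 64)

# FFT-based resampling stands in for the reconstruction filter of a DAC.
y = resample(x, len(x) * 8)

print(np.max(np.abs(x)))  # 1.000  -> the samples never exceed full scale
print(np.max(np.abs(y)))  # ~1.414 -> about +3 dB intersample overshoot after reconstruction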
'Why' is because that's what it says in the Philips Red Book CD standard. Chosen to further improve SNR, no doubt, but then all those players with attenuator pots suggest a spot of bother somewhere.
Nominal CD level is 300mV in the circles I move in.
Line level standards.
0dBm was a common standard power level in telephone companies, where 600 ohm circuits were the norm. So, when the VU meter was developed in 1939, it was calibrated to read 0 at 1mW into 600 ohms. But since the meter had to bridge a 600 ohm circuit without loading it, and its internal impedance was 3900 ohms, an external 3600 ohm resistor was added to bring the total impedance to 7500 ohms. That also means that to make a VU meter indicate "0 VU" you had to apply a signal at +4dBm to the circuit. That became a standard audio level.
+4dBu came from the days when audio in studios and networks used a "power distribution" method, where the source impedance of an output was 600 ohms and the terminating load was also 600 ohms, and the level would have been known as +4dBm (reference 0 = 1mW). Maximum power transfer occurs when source and load are matched. That had its roots in early telephone networks, where a twisted pair of 16ga wire had a characteristic impedance of 600 ohms, which was a factor for very long lines. As wire got smaller that changed, of course, but the standard hung around. The reference level, 0dBm, was one milliwatt into 600 ohms; the voltage to produce that power is 0.7746 volts. +4dBm then corresponds to 1.228 volts.
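(As a quick check of those figures - a small Python sketch, taking 0dBm as 1mW into 600 ohms:)
Code:
import math

def dbm_to_vrms(dbm, z=600.0):
    """RMS voltage that delivers the given dBm (re 1 mW) into impedance z."""
    power_w = 1e-3 * 10 ** (dbm / 10)
    return math.sqrt(power_w * z)

print(dbm_to_vrms(0))  # ~0.7746 V RMS
print(dbm_to_vrms(4))  # ~1.228 V RMS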
Now we don't use power distribution anymore. Instead of a 600 ohm source impedance and a 600 ohm terminating impedance, we use a "low" source impedance and a high input impedance. In large facilities this change actually saved power and heat load, and it made connecting a single output to multiple inputs much simpler. But then the 1mW reference no longer had meaning, and so neither did +4dBm. So it was changed to +4dBu to represent the equivalent voltage, but without the actual power transfer.
For adequate headroom when operating at +4dBu you have to add about 8dB for peaks that a VU meter doesn't show, then add another 9 or 10dB at least to avoid clipping. A lot of pro gear can handle peaks at +24dBu. But that takes power supplies that allow for more than a 30V output swing, and that's expensive. So consumer gear was operated at a lower reference level of -10dBV; even allowing for 15dB of headroom, you then need an output swing of only 5V. It was about economy.
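(Rough numbers for that bookkeeping, as a small Python sketch - the 8dB and 10dB allowances are the ones mentioned above:)
Code:
import math

def vpp(dbu):
    """Peak-to-peak sine voltage at the given dBu level (0 dBu = 0.7746 V RMS)."""
    return 0.7746 * 10 ** (dbu / 20) * 2 * math.sqrt(2)

pro_max = 4 + 8 + 10      # nominal + unseen VU peaks + clipping margin ~= +22 dBu
print(vpp(pro_max))       # ~27.6 V p-p (and +24 dBu would be ~34.7 V p-p)

consumer_max = -7.8 + 15  # -10 dBV nominal (about -7.8 dBu) + 15 dB headroom
print(vpp(consumer_max))  # ~5 V p-p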
No, Red Book describes the medium and data format. Is that 300 mV also from the Red Book?
Well maybe it doesn't but something does. 2V is the standard CD player output. Don't ask me where it says that but it does.
"That had its roots in early telephone networks, where a twisted pair of 16ga wire had a characteristic impedance of 600 ohms"
Note that the characteristic impedance varies with frequency at telephony/audio frequencies - at RF a twisted pair like this is about 100 to 120 ohms and is independent of frequency.
A 1kHz signal has a wavelength of 300km, and 300km of phone cable has a lot more resistance than 600 ohms - basically its audio characteristic impedance is dominated by resistance and other losses, not the conductor geometry (which determines RF behaviour).
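(As a rough illustration, a Python sketch of Z0 = sqrt((R + jwL)/(G + jwC)) with made-up but plausible primary constants - not figures for any real 1930s cable: at 1kHz the result is a few hundred ohms, dominated by R and C and falling as roughly 1/sqrt(f), while the lossless RF asymptote sqrt(L/C) is only about 110 ohms.)
Code:
import math, cmath

# Assumed primary constants per metre for an unloaded twisted pair
# (illustrative values only, not from any particular cable spec):
R = 50e-3   # ohms/m loop resistance (50 ohm/km)
L = 0.6e-6  # H/m (0.6 mH/km)
C = 50e-12  # F/m (50 nF/km)
G = 1e-12   # S/m (negligible leakage)

def z0(f_hz):
    """Characteristic impedance sqrt((R + jwL)/(G + jwC)) at frequency f_hz."""
    w = 2 * math.pi * f_hz
    return cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))

print(abs(z0(1e3)))      # ~400 ohms at 1 kHz (and strongly reactive)
print(abs(z0(10e3)))     # ~140 ohms at 10 kHz, heading toward the asymptote
print(math.sqrt(L / C))  # ~110 ohms: the lossless/RF value sqrt(L/C)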
"Note that the characteristic impedance varies with frequency at telephony/audio frequencies - at RF a twisted pair like this is about 100 to 120 ohms and is independent of frequency."
Of course, your statement is exactly true. Do you think that ignoring line characteristic impedance would have a better result? I'm pretty sure Telco of the 1930s wasn't concerned with line impedance at RF.
"A 1kHz signal has a wavelength of 300km, and 300km of phone cable has a lot more resistance than 600 ohms - basically its audio characteristic impedance is dominated by resistance and other losses, not the conductor geometry (which determines RF behaviour)."
I haven't fact checked this, but assuming it's true, what is your point? Telco was 'doing it wrong' all along? They should have been worried about RF behavior? Where did you find the specifications for the 16ga twisted pair that Telco used in the 1930s and 1940s?
For a cable to be considered a transmission line it must be longer than 1/4 wavelength. At 15kHz in wire with a 66% velocity factor, a 1/4 wavelength is about 3.3km (roughly 2 miles); at 5kHz it is nearly 10km (about 6 miles).
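(The arithmetic, as a short Python sketch, taking the 66% velocity factor at face value:)
Code:
C0 = 3e8   # free-space speed of light, m/s
VF = 0.66  # assumed velocity factor

def quarter_wave_km(f_hz):
    """Quarter wavelength on the line, in km."""
    return VF * C0 / f_hz / 4 / 1000

print(quarter_wave_km(15e3))  # ~3.3 km
print(quarter_wave_km(5e3))   # ~9.9 km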
Also, are you aware that broadcast networks used equalized lines with response flat to 5kHz or 15kHz depending on application? The old "Broadcast Services" division provided the experts to set up the equalizers and amplifiers. Broadcast networks employed thousands of 5kHz equalized lines, all across the country.
What I remember from old-fashioned telephone hybrids is that the termination impedance is not just 600 ohm resistive, but some RC network. That's probably because the line impedances are not at their high-frequency asymptote yet at telephone frequencies. I don't know if these lines had Pupin coils - probably not, as those are supposed to make the characteristic impedance resistive at low frequencies.
Anyway, quite off topic.
"For a cable to be considered a transmission line it must be longer than 1/4 wavelength. At 15kHz in wire with a 66% velocity factor, a 1/4 wavelength is about 3.3km (roughly 2 miles); at 5kHz it is nearly 10km (about 6 miles)."
You are applying radio-frequency transmission-line theory to the telephony of the 1930s. Telephone company interest in radio then was how much money they could make from special services to the radio stations. The phrase "broadcast transmission line" may have appeared in telephone company tariff lists or standard contracts with radio stations for special lines.
I guess I opened a can of worms. The telephone 600Ω standard (or 400Ω to 800Ω depending on the system, with a wide voltage range) transferred to microphones, or was it the other way around? Either way, I should have considered that relationship, but there are so many microphone impedances now that I had discarded it.
I tend to use a higher input impedance for equipment (unless it's for my own use) despite the extra noise (negative impedance converter or not), with the rest of the circuit at 110Ω (convenient if I need to use the balanced AES3 standard). I often make the output impedance higher (1k-20k) if the application is indeterminate, and up to 1MΩ (or even 10MΩ!) for compatibility with old-time music equipment. If the situation calls for it, and the impedance difference isn't too great (or the spectre of distortion is looming), hand-wound bifilar inductors work nicely in place of resistors, and damn Johnson and his noise; it's easier to correct phase (if needed) than to remove noise.
The early days (and to this day) were a big problem because of competition between companies: instead of sitting down and setting a standard, they went into production, and that mess just cascaded and branched into the future, I guess. Maybe I should design some generic drop-in circuits to test the inputs and outputs and then auto-configure my circuits according to the test results; it's extra PCB real estate and cost, but maybe worth it. Maybe I could do something clever with VCAs and TIAs. For now it's probably best to keep the levels higher and continue using adjustable input and output attenuators or variable-gain op-amps when/where needed - but not on my personal equipment!
Thanks for all the feedback, it'll keep me busy contemplating for a while!
"Why" may be because it fits in a +/-5V (or +12V) power system. And historically this was huge enough to travel inside a noisy PC on simple shielded line. And yes, this is the nominal RMS of a 2.8V peak DAC.2 V RMS, so 4 sqrt(2) V peak-peak ~= 5.657 V peak-peak, is the most usual level for a consumer CD player playing a full-scale sine wave.
TI's use of 4.5V level looks like a way to find another 2dB dynamic range.
Do you have ANY other samples with other than 2V RMS (5.6p-p)?
That "300mV" is speech/music nominal, NOT test-bench. Assumes 16.5dB headroom. VERY conservative. Old-fashioned.
Telephone lines AND their transmitters and receivers run 150 to over 900 Ohms. The exact impedance is not terribly critical, if you remember your "matching" theory. Suburban systems were typically "mis"-matched so the match improved on the furthest stations.
Radio networks used voltmeters, so they had to have a specific impedance to infer "power". Before 1939, reading live speech was folly: the meters were too insensitive. There existed a "Navy meter" with 500 Ohms marked right on the face. Older broadcast gear was nominal 125/500r. Weston and CBS, BTL, NBC worked with the new Alnico magnets and improved copper-oxide rectifiers to make a meter with consistent response on live speech/music, so (aside from setup on tone) the techs could quick-check the line gain/loss by reporting the peaks ("-3, -4, +1...") on the sending end and trimming for similar level at the receiving end. Somehow the 500r custom got lost here, and the 150/600 nominal appears.
Long-lines was very good at manipulating line impedance for an EQ-able loss and no great change as the network changed (different newsrooms and ballrooms). Nobody alive can do what those guys did in their sleep.
The nominal "max" level was +18dBm without gross distortion (couple % THD). But there were no good affordable peak meters. On running program not to be repeated, 10dB headroom didn't sound dirty, so +8dBm was system maximum. For re-recording to film or tape, repeated "slightly distorted" becomes annoying so +4dBm is nominal meter level. With modern notions of <0.1% clipping this needs a +20dBm line amp. This "fits" in a +/-15V opamp, just barely, no margin for slop or surprise or build-outs. So most +/-15V consoles use bridged pairs; also to get nominal "balanced" mode.
"Then why does it specify the full-scale level according to post #6?"
Post #6 does not mention "full-scale".
"Well maybe it doesn't but something does. 2V is the standard CD player output. Don't ask me where it says that but it does."
Standard output of what level in dBFS?
"Post #6 does not mention 'full-scale'."
No, but post #6 is a reply to post #5 about the full-scale level of CD players.