Normalization can be turned on or off in the settings of your Tidal account. I was wondering if something is lost when using this feature. According to SageAudio, Tidal does not boost the sound; it only reduces it if it is too loud.
Tidal normalizes audio to an integrated -14 LUFS but can be set to quieter -18 LUFS settings by listeners. At the time of writing this, Tidal does not turn up tracks on their streaming service - it only turns them down, meaning some tracks may sound quieter than others.
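The "turns them down but never up" behaviour described above can be sketched in a few lines. This is a toy illustration, not Tidal's actual code; it assumes each track's integrated loudness in LUFS is already known (real services measure it per ITU-R BS.1770), and the -14 default is the target quoted above.

```python
# Sketch of "turn-down only" loudness normalization as described above.
# Assumes the track's integrated loudness (LUFS) has already been measured.

def normalization_gain_db(track_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain in dB applied to a track: negative for loud tracks, zero otherwise."""
    gain = target_lufs - track_lufs
    # Tidal-style behaviour: never boost quiet tracks, only attenuate loud ones.
    return min(gain, 0.0)

# A loud modern master at -8 LUFS gets turned down by 6 dB...
print(normalization_gain_db(-8.0))   # -6.0
# ...while a quiet classical master at -20 LUFS is left untouched,
# which is why it may still sound quieter than the others.
print(normalization_gain_db(-20.0))  # 0.0
```

Because quiet tracks are never boosted, no extra gain is ever applied above a track's original level, so peaks cannot clip.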
I use the Tidal web player on Linux. Sometimes I notice distortion, which is more likely with vocals. Perhaps the music producers are not doing enough to deliver quality masters for streaming. SageAudio, which does mastering, explains that recordings may not behave the same on different streaming hosts.
https://www.sageaudio.com/articles/...
It should only change the relative volume of each track, and thus have no effect on sound quality.
May be of interest: Does Loudness Normalization reduce sound quality?
https://dynamicrangeday.co.uk/about/
https://dynamicrangeday.co.uk/loudness-war-dirty-secret/
Well, there are seven stages in the traditional route of getting music to the listener:
- Songwriter
- Performer
- Recording engineer
- Mix engineer
- Mastering engineer
- Management
- Label
Before streaming became a thing, people would rip their CDs and make playlists without concern (AFAIK) for volume change between tracks. Why is that a problem now? I believe that streaming hosts should not modify the music sent to clients.
Normalisation has become more common because many people listen to sequences of tracks from multiple albums (and even sources), which are therefore mastered at different apparent loudnesses to each other. Sometimes very different. If you are settling down to listen to a full album then it is less important, because the loudness should already be normalised across its tracks by the sound engineer; you only need to set your playback volume once.
Even a relatively small reduction in volume can be perceived negatively; people tend to prefer a given track as it was before its loudness was reduced/normalised, with no other change, because within the normal/reasonable range, louder tends to seem clearer. How much normalisation really reduces sound quality is a separate matter; in the digital domain, with a decent bit depth, the effect could be minor if all tracks are reasonably similar. However, it can affect recordings with a high dynamic range more, especially if the algorithm used is less sophisticated. So normalising (say) thrash metal tracks along with classical tracks could adversely affect the latter.
Also... you may not actually 'want' it normalised; consider how one might actively choose different playback levels for different tracks or styles. In my case, a classical track with lots of quiet, reflective content might be preferred louder: even though sudden brief fanfares (etc.) would really be too loud, they are short enough to be fine. Whereas a track with sustained high output (especially if it has already been a victim of excessive loudness compression) could be extremely wearing at the same level. It is hard for generic algorithms, applied automatically by a third party, to know or follow your preferences, and most systems I've used would make the quiet, reflective parts much harder to hear than I would wish.
I do use normalisation when playing random tracks or mixed playlists, especially in the background; IMO it is really quite helpful and has a place. But for serious listening I choose either to adjust each track manually as desired (a remote volume control is useful) or to listen to full albums/concerts and set the level once.
Yeah, it's likely just volume-difference related; people always prefer the louder one (everything else staying the same).
I'll assume the streaming service in question works as it should and doesn't significantly affect the sound in any way. If vocals distort, it's likely distortion from your speakers, especially if you have a small full-range or two-way system. Everything else in a modern playback chain has a lesser effect; amps, DACs, and streaming services are all quite good and should not add such an effect unless your gain staging is poor and the input of some device distorts. The recordings themselves might have distortion added purposely or accidentally, which is relatively common.
"people always prefer the louder one"
Exactly.
People prefer it because it seems to sound better even if it's absolutely not true (this also happens in listening tests of systems whose level differences must always be minimized).
In the recording industry, as often happens in other fields too, we are witnessing an escalation and artists are demanding increasingly higher volumes.
This is why volume normalization is becoming a widespread option in streaming platforms.
"Why is that a problem now?"
For the reason stated on the site I posted above, which you cited (even if not clearly enough, IMO).
From here
"The Loudness War is a sonic "arms race" where every artist and label feel they need to crush their music up to the highest possible level, for fear of not being “competitive” – and in the process removing all the contrast, all the light, shade and depth – ruining the sound".
Of course there is no concrete reason to do this, it's just a kind of cold war where the contenders do what they fear others will do before them.
From here
- Research shows there is no connection between “loudness” and sales
- People don’t notice loudness when comparing songs
- Dynamic music sounds better on the radio - here's the proof
- Modern music playback methods make loudness irrelevant
- Most listeners just turn loud music down!
However, I don't trust those who say that it's just a volume reduction that doesn't affect playback quality, while I do trust your own report, which seems to notice a worsening in the SQ, so I wouldn't use it.
Just as I never used it before, even on the Bluetooth WAV-file player in my car, for the same reason.
"Before streaming became a thing, people would rip their CDs and make playlists without concern (AFAIK) for volume change between tracks. Why is that a problem now?"
It has always been a problem. It might not be a big deal if all the music you listen to is of a similar style and date, but it can be really bad with more disparate music. The aforementioned Loudness War has had a strong effect on this, with music from the 2000s onwards often being over 10 dB louder than material from the 1980s. Maybe you're more tolerant than me, but I can't comfortably listen to music at both ends of that scale. In the days of physical media, that meant twiddling the volume knob myself. Nowadays it can be automated with schemes like ReplayGain.
You might not have heard it talked about before because it wasn't practical to implement: CDs etc. lack the required metadata, and the recording industry has no incentive to add it. Streaming services are different because they control all the media themselves. In the past, TV and radio would have done something similar themselves, and simply not mentioned it.
"I believe that streaming hosts should not modify the music sent to clients."
There are unavoidably multiple volume controls between the source material and your ears. Loudness normalization merely changes one of those to be slightly lower than it would otherwise have been. It's not really modifying the music any more than happens without it.
Please note that I'm not an expert, but the information out there doesn't appear to be entirely consistent.
While on the AES website you can read that it does not affect sound quality, on the SageAudio website you can read that the track can be actively analyzed, and that if the track has very high peaks, the dynamics (and distortion?) could be affected.
Loudness Normalization
Mastering for Streaming: Platform Loudness and Normalization Explained
So, maybe it can't be said that it is always totally irrelevant.
That would only happen if tracks below the reference level were increased in volume to match it, and had peaks that would clip with that much gain. However, as fubar3 originally quoted, "Tidal does not turn up tracks on their streaming service - it only turns them down", so dynamic range will remain unchanged.
Nothing against the above statement, but the fact remains that the OP reports some "defects" in the reproduction that he does not notice with the normalizer off.
Anyway, as said, I'll continue not to use it even on the system in my car. 😉
However, as fubar3 originally quoted, "Tidal does not turn up tracks on their streaming service - it only turns them down", so dynamic range will remain unchanged.
Turning tracks down can cause some detail to be lost below the noise floor, or lost because fewer bits are used to represent the signal. So both increasing and reducing the level can lose detail. The same goes for a level change that uses a dithering algorithm, which alters the information.
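The bit-loss argument can be made concrete with a toy round trip: attenuate a 16-bit integer sample, re-quantize, then turn it back up. The low-level detail below the quantization step does not come back. This is a deliberately simplified sketch (a single sample, no dither), just to show where the information goes.

```python
# Toy illustration of detail loss when a 16-bit sample is attenuated,
# re-quantized to integer steps, and later boosted back up.
# Single sample, no dither -- not a real audio pipeline.

def attenuate_16bit(sample: int, gain_db: float) -> int:
    """Scale an integer PCM sample by gain_db and re-quantize to an integer."""
    scaled = sample * 10 ** (gain_db / 20.0)
    return int(round(scaled))

original = 101                                   # a quiet detail, a few LSBs above silence
turned_down = attenuate_16bit(original, -24.0)   # attenuated and re-quantized
restored = attenuate_16bit(turned_down, +24.0)   # turned back up afterwards
print(turned_down, restored)                     # 6 95 -- the round trip no longer equals 101
```

Whether that lost detail is audible is another question (the posts below argue it generally is not at modern bit depths), but the mechanism itself is real.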
@ICG said: Turning tracks down can cause some details being lost by noisefloor or lose details because of the reduction of bits used. So both increasing as well as reduction of the level can lose details. The same goes if for the level change which uses a dithering algorithm, which changes the information.
---------
That is why the streaming host should not alter the music files received from the artist-producer team. Or maybe they should offer a way to stream the received version unaltered. This would make it easier to determine the root cause of bugs in the music: are they caused by the artist, the streamer, or personal hi-fi gear?
"Are they caused by the artist, the streamer, or personal hi-fi gear?"
Since, in my opinion, there is no way to be certain, you will practically never know.
My way of proceeding, for what it is worth, is to reduce to a minimum all possible sources of "error" in what I can control, that is, my own chain.
Over everything else I have no control, so when in doubt I simply do not use things like a normalizer.
I find its usefulness rather relative, with an unfavorable risk/benefit ratio, IMO.
"Turning tracks down can cause some details being lost by noisefloor or lose details because of the reduction of bits used"
Hmm.
I have to strongly disagree. I was delighted once -14 came in, and Tidal's Quiet setting at -18 is even better for artists/mixers, especially on audiophile systems. The noise floor of even CD is -96 dB, from memory. Turning the volume down and getting "frying bacon" was only a thing when 16-bit digital recorders first came in; if you didn't keep things a bit hot, then, especially without dither, you might, might hear something.
I'm going the route of Peter Gabriel, inasmuch as providing two versions of the albums, so earbud-jamming listeners can go for it with a -10 master and the quality listener can actually get the artwork as intended... even if it means adding 10 dB of volume.
We generally use at least 24-bit, so volume is of such little consequence that it is not measurable by the human ear, and I often use a 32-bit recording rig for classical recording, in which case the nose-breathing of the lead violinist is much louder, lol.
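The numbers in this post follow from the usual rule of thumb that each bit of linear PCM adds about 6.02 dB of dynamic range (the ideal sine-wave SQNR figure adds another ~1.76 dB on top, which is why you also see "98 dB" quoted for CD). A quick sketch:

```python
# Rule-of-thumb dynamic range of linear PCM: ~6.02 dB per bit.
# (Ideal sine-wave SQNR would add a further ~1.76 dB; omitted here to
# match the "CD = -96" figure quoted above.)

def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits

for bits in (16, 24, 32):
    print(bits, round(dynamic_range_db(bits), 1))
# 16 bits lands near the ~96 dB quoted above; at 24 bits (~144 dB)
# a few dB of normalization headroom is far below anything audible.
```

This is why a few dB of turn-down is harmless on 24-bit material: the detail it pushes toward the noise floor is already some 50 dB below what 16-bit playback can resolve.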