What do you think makes NOS sound different?

I have now included the link to the 88.2/24 file. It would be very interesting to hear from you whether this can be regarded as an improvement on the original 44.1/16 files.
Dropbox - NOS1_88.2

Hans

ok, which were the original ones? make my life easy :p

The response will be easy this time: just whether, and what, we like or dislike about the 88.2, right?
 
I answered which file sounded the most pleasant/better.


That’s o.k., it was a transparency test and not a better-or-worse test, performed in the unbiased way it should have been done by all contenders.
If the processing were fully transparent, no difference would have been noticeable.
The fact that differences were reportedly heard would mean that transparency is questionable.
However, with a score of almost 50%, we can’t conclude that the processing is non-transparent and thus a cause of the different sound between OS and NOS.

The 88.2/24 is indeed meant as a better or worse test.
I’m looking forward to hearing what is perceived.

Hans
 

TNT

I'm not sure I understand the difference... :)

So here there is no right or wrong - a filter might have a positive impact on the problems of one recording, and the opposite effect on another (1)!?

As we don't know the original, we can't say anything about transparency - right? - if we agree on statement (1) above.

 
In the first round I had done the conversion with Audacity myself, before Hans provided the files for the blind test, so I had "trained" myself for what I was after. That was a difference in the dynamics - or transients, call it what you like - mostly perceived in the bass. The processed files had more attack - authority? - but it wasn't integrated with the whole, as if it had been mastered wrong. In the second round this difference was still present to help me identify the processed files, but now the sound was not "broken" as before. The only thing that didn't sound OK was the sound of reflections from the recording ambience. Indeed, I missed the least airy track!
 
Thanks Hans,

How do we interpret the table? I only told you which track I liked best, and now I see results marked right and wrong. Does that mean that when I liked an unprocessed track it is green, and when I liked a processed track it is yellow?

And what about the others: did they report which track they liked best, or did they try to guess which one was processed? So when someone called a track processed it was green, but we don't know whether they liked it or not?


No problem, I just want to make sure - arghh :D We should have agreed up front on the response options.

Because there is no right or wrong for me. If this software does something with the track that makes it sound better, I would like to experiment more with this.

Last time it was relatively easy; this time it was all much closer.


Yes, sorry, my experiment instructions could have been clearer.

While, as Hans says, the experiment is a test for subjective conversion transparency, selecting which track you liked better gives the same result as a transparency test would. If you consistently preferred one track over its twin, then the conversion was not completely transparent to you. That result is then included in the averaged results of the group, where it contributes to building some level of statistical significance.

Before we discuss the group results, we must keep in mind that our group sample size (the number of us reporting test results) is mathematically too small to place much confidence in. That said, our results are still interesting. As Hans points out, after excluding the anomalous 'Day-O' files, the group identified/preferred the original source tracks over the resampled tracks by 53% to 47%, which is statistically insignificant, again keeping in mind the small size of our group. This indicates that the resampling was statistically transparent for the group, meaning there was essentially a random preference between the file pairs. A much larger sample might, or might not, reveal a different result.

About the 'Day-O' files: it's possible that the entire group identified/preferred the original source file by pure chance, but that is very unlikely. Speaking only for myself, I felt that the Day-O files were the easiest of the four pairs to hear a difference between, and that was via a cheap Pioneer OS CD player. This is, indeed, puzzling.
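To make the statistics concrete, here is a small illustration using an exact two-sided binomial test. The trial counts below (30 preference trials for the 53/47 split, 8 listeners agreeing on one Day-O file) are hypothetical stand-ins for illustration, not our actual group numbers:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: probability, under pure chance,
    of any outcome at least as unlikely as k 'hits' out of n trials."""
    def pmf(i):
        return comb(n, i) * p**i * (1 - p)**(n - i)
    pk = pmf(k)
    # sum every outcome whose probability is <= that of the observed one
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk + 1e-12)

# Hypothetical numbers, for illustration only:
# a 53%-vs-47% split over 30 trials is nowhere near significant,
print(binom_two_sided_p(16, 30))   # chance-level, p well above 0.05
# while 8 of 8 listeners agreeing on one file of a pair is very
# unlikely to be chance.
print(binom_two_sided_p(8, 8))     # p below 0.01
```

The point of the two-sided test is that consistently preferring either member of a pair counts as evidence of audibility; only a near 50/50 split is consistent with transparency.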


Regarding the 88.2 kHz up-sampled files experiment:
Here you will, indeed, be listening for your subjective preference: whether or not the 88.2 files sound better to you than their 44.1 counterpart files do. Remember that the ultimate goal of our thread is to identify how to obtain digital playback that sounds better than either plain NOS or typical OS does, potentially combining the desirable subjective aspects of both.
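For intuition only, here is a toy sketch of what 2x oversampling (44.1 kHz to 88.2 kHz) does. This is emphatically not PGGB's algorithm, just the textbook approach: zero-stuff to double the rate, then low-pass with a short half-band windowed-sinc FIR to remove the spectral image:

```python
import math

def upsample_2x(samples, taps=63):
    """Toy 2x oversampler: insert a zero between input samples, then
    low-pass with a Hamming-windowed half-band sinc FIR to remove the
    spectral image above the original Nyquist frequency."""
    # 1) zero-stuff: doubles the sample rate, images the spectrum
    stuffed = []
    for s in samples:
        stuffed.extend((float(s), 0.0))
    # 2) half-band windowed-sinc low-pass (cutoff = old Nyquist)
    m = taps // 2
    h = []
    for n in range(taps):
        k = n - m
        ideal = 0.5 if k == 0 else math.sin(math.pi * k / 2) / (math.pi * k)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (taps - 1))
        h.append(ideal * window)
    scale = 2.0 / sum(h)  # gain of 2 restores the level lost to zero-stuffing
    h = [c * scale for c in h]
    # 3) direct-form FIR convolution (full length)
    y = [0.0] * (len(stuffed) + taps - 1)
    for i, x in enumerate(stuffed):
        if x != 0.0:
            for j, c in enumerate(h):
                y[i + j] += x * c
    return y
```

A real resampler uses a far longer, more carefully designed filter; the choice of that filter (length, transition shape, ringing behavior) is exactly the variable this thread's listening tests are probing.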
 
May I ask: as this test is about streaming, everyone may have a different USB input board, which by itself will also affect the perceived sound, no? And what about the DAC used, as it's not the same design?

Sumotan, it's not necessary that we all have the same listening equipment. All that matters is what you hear via your own system. This is especially true for a differential subjective transparency test, since you are only listening for whether a pair of files sound identical to each other or not, on that system, to your ears. For example, many audiophiles hear the difference of NOS sound even though they don't share the exact same equipment - or ears.

For the 88.2 kHz experiment, while you will indeed be listening for which file sounds better to you, your results will still be a valid discovery for you, regardless of your system. In other words, if your individual results show that you have discovered a better-sounding playback method with your system and your ears, then that discovery is valid and highly valuable to you, regardless of whatever the group's average results may show. :)
 
As Hans points out, after extracting the anomalous 'Day-O' files, the group identified/preferred the original source tracks to the resampled tracks by 53% to 47%. Which is statistically insignificant, again, keeping in mind the small sample size of our group. This indicates that the resampling was statistically transparent for the group.

Or possibly it indicates that the resampling was not reliably detectable for that particular group of files, which were subject to an unknown amount of digital processing (recording, mixing, equalization, sample-rate conversion) before being used for this experiment, while the resampling may well be readily detectable for a file that has been subject to only a minimal amount of digital processing. So possibly the detectability of the resampling depends on the history of the file being resampled.
 
If it helps, I can WeTransfer some recordings of a Dutch amateur light music choir made with two AKG C900 condenser microphones in an ORTF set-up, a home-made microphone preamplifier and a Fostex FR2-LE field memory recorder. They sing off key sometimes and due to the use of far-miking with no noise gate or downward expander, there is a bit of noise in the recording.
 
Hans, Yes, that's true, we have no control over that.

The point I was trying to make is that the Belafonte track indicates that the resampling process is, in fact, audible and that the audibility of that process may be masked by factors outside of our control.

Marcus, that's an interesting possibility. While the Day-O source file was digitized by Hans directly from a vinyl LP (as I understand it), I'm assuming that it was at least subject to an FIR anti-alias filter, but nothing more. So perhaps the additional FIR filtering/EQ associated with mixing and the final-format sample-rate release of a commercial digital music track introduces audible artifacts which mask the PGGB resampling process.

A good test might be for Hans to digitize a few more songs directly from vinyl LP, exactly the same way that he did for the 'Day-O' track. Then we conduct a fresh listening test, only with the new song files, to see if the same high selection correlation persists. If similar results are obtained, it would be quite disturbing, because, as Hans points out, we are pretty much stuck with the source tracks which the commercial music industry produces.

There is, however, one exception of which I'm aware. An 'apodizing' filter removes the behavior of all prior FIR band-limiting filters, by sacrificing a small bit of the very top of the passband, and replaces those prior filters' behavior with its own filtering behavior. Bruno Putzeys has likened apodizing to a means of salvaging a burned steak by cutting off the blackened bits. I note that the PGGB 'remastro' resampling software has a selectable apodizing mode, which was not utilized for our existing resampled test files.
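To make the trade-off concrete, here is a hedged toy illustration (not PGGB's or any commercial apodizing design): a linear-phase windowed-sinc low-pass whose cutoff sits slightly below Nyquist, giving up a sliver at the top of the passband in exchange for strong attenuation in the region where earlier brick-wall filters operate:

```python
import math

def windowed_sinc_lowpass(taps=101, cutoff=0.45):
    """Linear-phase FIR low-pass: Hamming-windowed sinc.
    cutoff is in cycles/sample (Nyquist = 0.5)."""
    m = taps // 2
    h = []
    for n in range(taps):
        k = n - m
        ideal = (2 * cutoff if k == 0
                 else math.sin(2 * math.pi * cutoff * k) / (math.pi * k))
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (taps - 1))
        h.append(ideal * window)
    scale = 1.0 / sum(h)  # normalize for unity gain at DC
    return [c * scale for c in h]

def magnitude(h, f):
    """|H(f)| of an FIR at normalized frequency f (cycles/sample)."""
    re = sum(c * math.cos(2 * math.pi * f * n) for n, c in enumerate(h))
    im = sum(c * math.sin(2 * math.pi * f * n) for n, c in enumerate(h))
    return math.hypot(re, im)

h = windowed_sinc_lowpass()
# The passband is intact up to ~0.45, while the response at Nyquist
# (0.5) is deep in the stopband.
```

Note the limitation of this sketch: it only shows the frequency-domain trade. The defining property of a true apodizing design, as described above, is in the time domain: its own well-controlled impulse response dominates the chain, replacing the ringing of the filters that came before it.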
 