Flattest headphones for loudspeaker design

What surprised me was how similar speakers of the same size/style sounded. There was some difference in tonal balance, most notably in the tweeter levels, but nowhere near as huge a difference as expected. [...] The recordings are very dry. What I could hear very little of was the sense of space.

Crutchfield Virtual Audio™ is a commendable attempt to guide a customer toward the sound of a speaker in an age when internet commerce and COVID restrictions limit audition possibilities. There is notably little information online about how Virtual Audio works. That paucity of detail, and the absence of any apparent patent, is probably significant.

Pano's findings about the pronounced limitations of Virtual Audio™ are explicable if:
1. the system runs on convolution alone, with no need for actual recordings of the speakers
2. the application probably convolves any audio file, including a user-uploaded one, with the speaker's mono anechoic frequency/phase response and a headphone frequency response (see the convolution sketch after this list)
3. missing are:
 - the stereo imaging effect of two speakers
 - dispersion/reflection/room effects
 - actual recordings
4. the simulated sound will mainly reveal tonal-balance differences between speakers
5. important driver effects such as horn honk may be obscured
6. taking the room out of the equation has clear advantages, but it comes with serious trade-offs
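
To make points 1 and 2 concrete, here is a minimal sketch of what a convolution-only pipeline could look like, assuming one mono anechoic speaker impulse response and one headphone-correction impulse response per product. The file names and the single-IR assumption are mine for illustration; Crutchfield has not documented its actual processing chain.

```python
# Hypothetical convolution-only "virtual audition" chain (not Crutchfield's
# documented method). File names are placeholders.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

track, fs = sf.read("user_track.wav")                    # any user-supplied file
speaker_ir, _ = sf.read("speaker_anechoic_ir.wav")       # assumed mono anechoic IR
hp_corr_ir, _ = sf.read("headphone_correction_ir.wav")   # assumed headphone correction IR

if track.ndim == 1:
    track = track[:, None]                               # treat mono as one channel

out = np.zeros((track.shape[0] + len(speaker_ir) + len(hp_corr_ir) - 2,
                track.shape[1]))
for ch in range(track.shape[1]):
    # The same mono speaker IR on each channel: no stereo imaging,
    # no dispersion or room reflections -- only tonal balance survives.
    y = fftconvolve(track[:, ch], speaker_ir)
    out[:, ch] = fftconvolve(y, hp_corr_ir)

out /= np.max(np.abs(out)) + 1e-12                       # simple peak normalisation
sf.write("virtual_audition.wav", out, fs)
```

Nothing in a chain like this can introduce the sense of space Pano found missing; that would require at least a binaural room simulation or real in-room captures.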
 
Any thoughts on headphones for comparisons of loudspeaker tonality?
First you need to get the frequency response right. Doing this with computer audio EQ is easy; without it you're in for pain. You can read about the Harman curve and look at https://github.com/jaakkopasanen/AutoEq/blob/master/results/RANKING.md
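
For illustration, here is a rough sketch of applying a few parametric-EQ bands of the kind AutoEq publishes, using scipy biquads. The band centre frequencies, gains and Qs below are invented placeholders; substitute the figures AutoEq lists for your actual headphone.

```python
# Sketch: headphone correction with a few parametric-EQ (peaking) bands.
# Band values are placeholders, not a real AutoEq profile.
import numpy as np
import soundfile as sf
from scipy.signal import sosfilt

def peaking_sos(fc, gain_db, q, fs):
    """RBJ cookbook peaking-EQ biquad, returned as one second-order section."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A
    return np.array([[b0 / a0, b1 / a0, b2 / a0, 1.0, a1 / a0, a2 / a0]])

track, fs = sf.read("user_track.wav")
bands = [(32, 5.5, 0.7), (2200, -3.0, 2.0), (9000, 4.0, 1.5)]   # (fc Hz, gain dB, Q)
sos = np.vstack([peaking_sos(fc, g, q, fs) for fc, g, q in bands])
sf.write("track_eq.wav", sosfilt(sos, track, axis=0), fs)
```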

Important aspects of headphone timbre seem to go beyond FR and come down to driver and cup design, e.g. if you want to model ESL speakers then ESL headphones are the best match.
 
the application probably convolves any audio file, including a user-uploaded one, with the speaker's mono anechoic frequency/phase response and a headphone frequency response
Yes, that seems to be what it is doing from what I heard and read. I don't know if the speaker response is a smoothed or adjusted one or the actual raw impulse. One would hope the headphone response is smoothed and averaged, as ears and heads and headphone placement vary so much.
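
For what it's worth, one plausible way a raw measured curve could be tamed is a simple fractional-octave smoothing. The data file and the 1/6-octave width below are assumptions for illustration, not anything known about how Virtual Audio prepares its responses.

```python
# Sketch: 1/6-octave smoothing of a measured magnitude response.
import numpy as np

def octave_smooth(freqs, mag_db, fraction=6):
    """Average each point over a +/- 1/(2*fraction)-octave window."""
    out = np.empty_like(mag_db)
    for i, f in enumerate(freqs):
        lo, hi = f * 2 ** (-0.5 / fraction), f * 2 ** (0.5 / fraction)
        sel = (freqs >= lo) & (freqs <= hi)
        out[i] = mag_db[sel].mean()
    return out

freqs, mag_db = np.loadtxt("measured_response.txt", unpack=True)  # Hz, dB columns
smoothed = octave_smooth(freqs, mag_db, fraction=6)
```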

As you say, I don't think these are actual recordings of tracks being played back on the speakers; otherwise you wouldn't be able to use your own tracks.
In the past year I have done a number of recordings of speakers in rooms using in-ear mics. I've also done sweeps that give me an impulse response of the speaker and room (and my ears); convolution with that impulse response provides a reasonable facsimile of what the room and speakers actually sound like. That isn't what I heard on the test.
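
For anyone wanting to try the same thing, here is a rough sketch of that sweep workflow (essentially the Farina exponential-sweep method): generate a log sweep, deconvolve the in-ear recording with the inverse filter to get an impulse response of speaker + room + ear, then convolve a dry track with it. The file names, sweep length and sample rate are placeholders, not the exact settings used above.

```python
# Sketch of the sweep -> impulse response -> auralisation workflow.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

fs = 48000                                      # assumed capture sample rate
f1, f2, T = 20.0, 20000.0, 10.0                 # sweep range (Hz) and length (s)
t = np.arange(int(T * fs)) / fs
R = np.log(f2 / f1)
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))

# Inverse filter: time-reversed sweep with a 6 dB/octave amplitude correction
inv = sweep[::-1] * np.exp(-t * R / T)

recorded, _ = sf.read("in_ear_sweep_recording.wav")  # sweep captured at the ear
if recorded.ndim > 1:
    recorded = recorded[:, 0]                   # one ear at a time for simplicity

ir = fftconvolve(recorded, inv)                 # speaker + room + ear impulse response
ir = ir[len(sweep) - 1:]                        # keep the causal (linear) part

dry, _ = sf.read("dry_track.wav")
if dry.ndim > 1:
    dry = dry.mean(axis=1)                      # mono for this single-ear sketch

aural = fftconvolve(dry, ir[: int(1.0 * fs)])   # truncate IR to ~1 s of decay
aural /= np.max(np.abs(aural)) + 1e-12          # peak normalise
sf.write("auralised.wav", aural, fs)
```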
 
In the past year I have done a number of recordings of speakers in rooms using in-ear mics. I've also done sweeps that give me an impulse response of the speaker and room (and my ears); convolution with that impulse response provides a reasonable facsimile of what the room and speakers actually sound like. That isn't what I heard on the test.

Great work. I think your approach offers a much better way forward for online "virtual" auditioning. For punters wanting to know how a whole system sounds, this approach may be ideal. Extracting the speaker itself from a chain that includes file/source/cables/VC/amp/speakers/room/mic would require many systems to be uploaded for comparison.

If there were an industry-standardized procedure, punters would be onto a winner. If each continent had a certified tester with a defined room and gear, there would be worldwide standardization: the tested item could be heard in a controlled setup.

A YouTuber did a sort of pilot trial; the videos covered:
- Binaural speaker capture concept
- Sapphire OB
- Box speaker
 