The making of: The Two Towers (a 25 driver Full Range line array)

I too, started down the path of room correction, and it can be a rabbit hole!

I looked up DRC, but it was a bit much just to get my feet wet!
So, I went and got the 15-day demo of ARC2, which is basically Audyssey XT. It pushed my setup beyond the basic EQing I was already doing. The sound opened up and it was good!

So, I'm sold! I will add room correction to my system. I haven't settled on which one yet, as ARC2 is a bit expensive.

Thanks for sharing the Voxengo plugin, I will take a look.
In mixing and mastering, I've used a Brainworx plugin and it sounds really good. Lots of tweaking too, including EQ on M/S "channels".

Good luck on the ongoing task! :)

I'm sold too! But even before I started building the arrays I was sold. For more than half of my career (that means 15+ years) I've worked in IT as a system administrator and application manager. So I figured I could use that to improve the sound as a whole. Knowing what is happening in the process helps when using these tools.
I've written all kinds of 3D CAD routines and programs in the past, and that background helps to put these tools in perspective. My formal education is in engineering; the IT part I learned hands-on and out of interest.

If you want to take a shot at DRC, it's not that hard to get started. Just use DRCDesigner to start you off. You can even use measurements from REW by exporting the impulse and converting it to 32-bit float.
In another thread we spoke about getting together a tutorial for DRC. I still think that is a good idea. The main thing though is getting good measurements as that is the base of the signal manipulation. If you need help we're here ;).

I can feed the convolved/EQ-ed signal back through the REW measurement suite to track the results. That really shows what is different in your system. For that I use my Asus Xonar Essence ST soundcard for recording, passing the signal through JRiver's WASAPI loopback to play it with EQ and convolution applied to my Musical Fidelity M1 DAC.
Another option would have been to use the onboard audio card together with the Asus, but I know the onboard card on my particular PC's motherboard is rubbish.

In Audiolense I used ASIO to see if it made a difference in measurements. I could not detect gross differences between REW's Java drivers and ASIO in Audiolense.
 
Trouble is, I am mostly set up with Macs, and I haven't found an app that uses convolved signals for the Mac.

Running it in emulation like CrossOver or Wine seems a bit absurd, since it would probably add too many variables to the process.

Right now, I use Audirvana+ with a VST plugin. Sounds really good.

I tried Amarra, JRiver for Mac and ...gasp... iTunes. Amarra is perfect for parties: heavy sound, but not for sitting and just listening. JRiver is the worst after iTunes: lifeless music reproduction, and adding EQ just clouds it more. Not impressed. Plus, they charge the same price as the PC version while offering half of the functionality. Lame. Actually, I think iTunes sounds better than JRiver on a Mac.

Audirvana is the one for me. Super clean and best reproduction. Lively and detailed. Plays everything, from mp3 to DSD. Plus, I can use VSTs with it. I'll try that Voxengo soon.
 
I'll try to word it differently. Yes, I recorded the sine sweep that I used for my second DRC attempt with EQ in the signal path. That brings down the midrange peak the un-EQ-ed line array has. The graph I showed is the line array without EQ. While I do need boost in the lows, I need practically no boost in the high frequencies, judging by the low point and high point of the graph I showed. That came as a bit of a surprise, really.
I tried recording with REW, which uses a short burst, and with Audiolense. With the latter you have the opportunity to use longer sweeps. I did use that feature, but the control function that gives a basic analysis of the recorded signal wasn't passing the measurement as "good" until the level was way up again. That confirmed to me that my high ambient noise level is messing with my measurements. It's not a steady level; my house (with a big window) is only 3 meters away from a busy road.
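For illustration, the kind of logarithmic (Farina-style) sine sweep these tools play can be generated in a few lines; roughly, each doubling of sweep length buys about 3 dB of noise rejection in the deconvolved impulse, which is why longer sweeps help in a noisy room. The parameters below are illustrative, not the actual REW or Audiolense settings:

```python
# Sketch: generate a logarithmic (exponential) sine sweep with numpy.
# f1/f2 are the start/end frequencies, fs the sample rate; all values
# here are illustrative defaults.
import numpy as np

def log_sweep(f1=20.0, f2=20000.0, seconds=10.0, fs=48000):
    t = np.arange(int(seconds * fs)) / fs
    # Time constant of the exponential frequency ramp.
    k = seconds / np.log(f2 / f1)
    # Instantaneous phase of a sweep whose frequency rises from f1 to f2.
    phase = 2.0 * np.pi * f1 * k * (np.exp(t / k) - 1.0)
    return np.sin(phase)
```

Deconvolving the room recording with the inverse of this sweep yields the impulse response; the longer the sweep, the more the steady traffic noise averages out.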

While my first attempt at DRC sounded good using an un-EQ-ed sweep (even better than the same sweep with EQ applied by itself), my second attempt, with an EQ-ed sweep as the base for DRC, sounded much better: much more clear and natural. The naturalness improved further with the Blumlein Shuffling applied to the signal. I'm not Shure about that last step (Blumlein Shuffling) being a gimmick or if it really is that much better :D.

Thanks, I think I've got it now. Yeah, so a softer sweep volume (un-EQ-ed) = a longer sweep length needed = a longer quiet period needed between cars driving by. I wonder how well it would work if you applied an inverse EQ to your EQ-ed sweep measurements and then used DRC exclusively with the amplitude correction cranked up.

When you speak of "naturalness" with the Blumlein stuff I take it you are describing the spatial qualities of the sound. My opinion (fwiw) is that stereo sound can be a bit of a compromise between timbre and perceived spatial realism. I do enjoy hearing a sense of space with some music but I think I prefer to err on the side of a stronger center image because I think the tone comes through a little better. If I'm not mistaken, "shuffling" is different than typical m-s decoding in that it can make spaced microphone recordings sound more like coincident ones, whereas m-s decoding can be used to adjust the apparent width of recordings made with coincident mics. I think I actually prefer the sound of spaced microphones personally.
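The width adjustment for coincident recordings described above can be sketched as plain mid-side scaling in a few lines of numpy; note this is the broadband M-S operation, not a Blumlein shuffler, which would additionally apply frequency-dependent EQ to the side channel:

```python
# Sketch: broadband mid-side width control on a stereo signal.
# width > 1 widens the image, width < 1 narrows it, width = 0 is mono.
import numpy as np

def ms_width(left, right, width=1.0):
    mid = 0.5 * (left + right)        # M = common (center) content
    side = 0.5 * (left - right) * width  # S = difference (width) content
    # Re-matrix back to left/right.
    return mid + side, mid - side
```

With width = 1 the signal passes through unchanged, which makes the effect easy to A/B against the original.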
 
Perceval,
You mention lifeless music reproduction in JRiver. My own experience is that this can happen on one hardware platform, while on another the same program comes alive with high performance. The audio buffer settings in the player or for the audio interface can be a good tool to tune audio stream performance. If it sounds lifeless, I think you need to set the buffers higher to get some bite in the audio stream; try both higher and lower settings to get a feel for it. Using headphones as a reference in this exercise could be preferred, unless you happen to know the speaker sound in your room is of good reference quality.
In most manuals the audio buffers are explained as a way to avoid glitches, but personally I think their settings are very important for stream quality.
 

Just played a few tracks with and without the Blumlein-like effect. No doubt the "original" sound is a bit more dry, if you will. The imaging does improve with the "Blumlein-like" effect on (it isn't pure Blumlein; I haven't found out exactly what it does, but it is based on the Blumlein Shuffle principle): dry voices are pulled closer and reverbed voices pushed further away. That is front to back. The width does not seem to be affected much, positioning-wise, but gets the same front/back feeling. A deeper stage. Background choirs come more from the side in some recordings, way past the boundary of the speakers. As it seems louder with the effect turned on, that is a part that is hard to ignore in a direct comparison.
Day two of still enjoying the effect though. But if I had better sound treatment, maybe I would have the same apparent depth I'm hearing now. Part of the depth is, or could be, the other effects option in JRiver: I have the choice of adding a virtual room with different settings. Usually I hate that kind of stuff.
Here's what I have set:
effects.jpg

Recording room with the lowest setting and Surround Field on Subtle.

As I said before, I do not perceive a wider stage. Imaging remains at the same positions. If I turn up the room settings to a higher level it just makes the sound more hollow (settings go from 1 to 10). I've had the room switched off for a while but noticed it felt different. It seems like I need some kind of reference to a room that is now missing, making it more like headphone sound. I do have recordings that do excellently in depth and width without any effect, but those are a very small number. Most recordings I listen to are studio-production CDs. I'm not into classical music; the only brushes I've had with that kind of music were when guitar player Steve Vai was directing and playing with the Dutch "Noord Nederlands Orkest".
And of course some (old) movie scores with orchestra, like the works of John Barry.
I'll ask on the JRiver forums for a description of what exactly is done to the signal. Usually I tend to avoid effects as much as possible. I kinda hate that I like it this much.

I fixed a small error in my convolution file yesterday; I discovered it after looking again at the REW files of the sweep with both EQ and convolution turned on. The left channel had a misalignment in its impulse, leading to pre-ringing in the recorded impulse. As the right channel didn't have that, I re-examined the measurements. REW had misaligned the peak value.
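Checking for that kind of inter-channel peak misalignment can be automated. A small numpy sketch (the function name is mine, not a REW feature) that finds each channel's impulse peak and delays the earlier one to match:

```python
# Sketch: align the peak samples of two impulse responses by padding
# the earlier channel with leading zeros (a pure delay), keeping the
# original length. Assumes the largest |sample| is the true peak.
import numpy as np

def align_peaks(left, right):
    shift = int(np.argmax(np.abs(left))) - int(np.argmax(np.abs(right)))
    if shift > 0:
        # Left peaks later: delay the right channel to match.
        right = np.concatenate([np.zeros(shift), right])[: len(right)]
    elif shift < 0:
        # Right peaks later: delay the left channel to match.
        left = np.concatenate([np.zeros(-shift), left])[: len(left)]
    return left, right
```

A shift like this only changes inter-channel timing; any pre-ringing baked into the filter itself has to be fixed at the filter-generation stage.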

I still very much regard this as a learning phase (no pun intended). I have no doubt I can still get better results. The beauty of it is that I am surprised by what I get now, without turning my living room into a recording room.
In the evening things quiet down in the street and it's time to enjoy. But I have a little one (8 years old), so no loud tests in the evenings.

A few days ago I read a nice rambling about stereo where the writer said:
There is another way of thinking about this: the loudspeakers serve as the first "early reflections" of a (phantom) sound source whose direct sound we didn't hear. Because our brain is good at filling in the missing blanks, it "infers" where that phantom source must be and THAT "inference" is what we actually perceive, or think we "hear."
I liked that view on things very much. Makes you look at things in a different way.
Here's the article: Moulton Laboratories :: The Brave New World: Loudspeakers to the Left of Us! Loudspeakers to the Right of Us!
 
Thanks for the link to the article. Very interesting and insightful indeed and I think the author may be onto something in regards to identical sounds from two locations being perceived as reflections of a phantom center source. I think this may explain why spaced microphone recordings sound so good even though they are "wrong" from a (misguided?) technical standpoint. I agree with the bullet points (DRC sure helps here) and would add to them the importance of a speaker with a rapid decay (very little energy storage). Anyway, this would be like the time-domain equivalent of what our brains do when presented with an overtone series of a sound whose fundamental is missing (like a low-pitched voice over a telephone). The more I think about the idea, the more I like it!
 

In my case, the hardware is similar whether I'm on Windows or OS X. I am using a Presonus Audiobox 1818vsl for converting digital to analog audio. Drivers differ, though. In this case, I think the Mac drivers are better and offer a cleaner sound than the Windows ones.

Audio buffers were set on the interface and didn't change when I tried the music apps mentioned previously.

The computer used has a quad-core i5 @ 3 GHz and 16 GB of RAM. That should be plenty fast for stereo or 5.1 tracks.
 

Nice gear :). With that I would expect it to perform very well, and not give the lifeless sound stream you pointed out with JRiver. That's where tweaking the standard installation and settings could come in.
 
It should play well!

But since Audirvana plays very nicely, and superbly after a couple of clicks, and supports plugins for about the same price, I would choose it over JRiver, which seems to need a lot of tweaking before it starts to sound good.

On the PC side, you get a video manager and playback, which is nice. On the Mac side, it's only a music player, but sold at the same price as the full-fledged PC version.

I like to pay for what is offered, and when I mentioned that on the JRiver forum, they told me I should pay them up front, even for features that are not there, so they feel encouraged to continue working on the Mac side, and then expect new features to be added later... after paying more for the upgrade, of course... What? :confused: :eek:

Seems to be the trend these days. I am a teacher and see where things are going. When I was a student, my parents would offer me some kind of reward if I did a good job at something. Now, kids ask for their reward before doing the job, as an incentive....

Anyway, I'm way off here. Rant off! Ha ha!
Back to studying DRC.

Thanks for finding all those links W. I can't imagine I couldn't find anything by myself! So ashamed....:rolleyes:
 
I found the source of my noise: I need a better mic pre-amp. I managed to make some measurements while the ambient noise was way down. The noise floor in my measurements is still way too high, even when there is no signal to the speakers.
I did revise my basic EQ to something even simpler, a few broad cuts and a broad boost in the lows. Here's a picture of the filters:
basiceq.jpg


Two boosts and two cuts total for each channel to bring the measurement in shape for DRC. No boosts in the highs; as amazing as that is to me, they're not needed(!).

A few days ago I dropped the virtual room I spoke of in my earlier posts, but only after these new measurements am I really convinced I don't need it.
Still playing with the Blumlein Shuffling; the embedded one in JRiver is still very convincing, although the differences are getting smaller.
It seems my previous EQ was adding more audible degradation than I suspected.

I now have width and depth in an almost untreated room. That will keep the girlfriend happy. The low spectrum still needs some work; that will have to wait till the new pre-amp arrives.

Getting closer and closer to a very convincing acoustic landscape in front of me. Too bad the eyes play such a big part; I need to close them to be fully convinced. At night it is easier (obviously). Kinda scary-real sometimes :eek:. Seems like I've built a time machine (lol).

I'll hold off on posting pretty graphs till I'm satisfied with my measurements.

Meanwhile I replaced the rings I used for my baffle bolts (I had cut some rings from an inner tube ;)) with 2 O-rings each.
o-rings.jpg

It really cleaned up the impedance graph:
impedanceright.jpg

Probably my OCD kicking in again ;)
 