VituixCAD Simulations with Ideal Drivers

Take the second and the first. It kind of looks like VC has made the connection that the two separate baffle sims are part of the same overall speaker. I wouldn't have expected that, and it looks to me that while you have clearly made the connection, you weren't initially expecting that either.

So where does that leave us? If you were to experiment with modifying the distance between the drivers via the Y axis in the crossover tool, does the simulated response actually change, or does it take the diffraction-simulated response from the old distance and superimpose it with the new one, creating a hybrid, or does it do something else?
Setting coordinates positions the responses (sets of responses) in relation to each other and to the inspection point, the listening point for which you want to optimize the acoustic output. The inspection point, which all the graphs in the main program six-pack show, lies on the design axis, the normal through the X=0, Y=0 point; the listening distance set in the options is measured from Z=0. The speaker is virtual, the drivers are virtual and the responses emanate from the void :) There is no speaker in the main program, only the acoustic responses you have loaded into "the drivers" to represent your speaker's acoustic output. You could visualize the process like 3D scanning a statue: head, torso and feet are scanned separately to get the required resolution and then aligned/assembled back together using coordinates within the software. Visually assembling stuff is an easy thing to accomplish, but here, in the loudspeaker and audio context, the data is invisible and we have to visualize the process mentally. This makes it seem harder to grasp than it actually is.
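As a rough illustration of "only the responses are positioned", here is a minimal sketch (not VituixCAD's internals; the function name, arguments and the 2 m default are made up for illustration) of superimposing already-measured driver responses at an on-axis listening point, where the X/Y coordinates only add a path-length delay relative to the design axis:

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def summed_response(freqs, responses, xy_positions, listen_dist=2.0):
    """Superimpose complex driver responses at a listening point on the
    design axis (X=0, Y=0). Toy illustration of coordinate-based summing.

    freqs        : array of frequencies in Hz
    responses    : list of complex frequency responses (same length as freqs),
                   each already containing that driver's measured acoustic data
    xy_positions : list of (x, y) driver coordinates in metres
    listen_dist  : distance from the baffle plane (Z=0) to the listening point
    """
    total = np.zeros(len(freqs), dtype=complex)
    for resp, (x, y) in zip(responses, xy_positions):
        # Extra path length from this driver's coordinates to the on-axis point;
        # amplitude differences are ignored for simplicity.
        path = np.sqrt(x**2 + y**2 + listen_dist**2)
        delay = (path - listen_dist) / C  # seconds relative to the design axis
        total += resp * np.exp(-1j * 2 * np.pi * freqs * delay)
    return total
```

Changing a driver's coordinates in the main program changes only this kind of delay/position term applied to its already-loaded response; it does not re-simulate any diffraction.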

There are many ways to take advantage of, or exploit, the system as it is. In addition to the ability to move the responses up/down, left/right and back/forth, you can rotate and tilt them. The measured data contains accurate data points only along the X and Y rotation axes (the exact measurements); the rest of the hemisphere, what is in between and outside, is approximated from the X and Y data sets. With this in mind, if you have a very complex-shaped speaker with drivers pointing in weird directions while still planning for sweet-spot listening, you can measure the drivers with the listening position in mind, giving the most accurate results there, and then use the coordinate system to position the measurements so that they "emanate" from the positions they occupy in reality, in the context of the system, in relation to the other transducers and the design axis. If the speaker and mic stay put for all the transducers, then the model is already in the data and there is no need to use the coordinate system to reposition anything.

The statue analogy here would be that, looking at the assembled statue from a point that was not 3D scanned but only approximated from nearby data points, the visual gets blurry (lacks detail) the further you move from the scanned data. Since we scanned the head, torso and feet separately (rotating Y and X, like we do with speaker measurements), there might be very accurate data points for some of these but less accurate ones for the others, depending on where you are inspecting from. If inspecting at ear level, from the side, the head might be sharp, but the torso less so and the legs even less, because you are in an area which has no real data for those parts, only approximation. If the whole statue was scanned once as a whole, you might never get good visuals of the elbow, or the ear, depending on where the rotation axis was, the nose or the belly button. Think of scanning only from a single point, on axis like in the old days: you'd never see the elbows, the ears or what is on the back; it would result in a 2D picture of the statue instead of a 3D model.
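To make the "accurate only on the two measured arcs" point concrete, here is a toy sketch. This is not VituixCAD's actual interpolation; the function and data layout are made up, and the blend is deliberately crude, just to show that anything off the horizontal and vertical planes is an estimate rather than data:

```python
import numpy as np

def approx_off_axis(hor_curves, ver_curves, hor_deg, ver_deg):
    """Toy estimate of a response at (hor_deg, ver_deg) on the hemisphere.

    hor_curves / ver_curves: dicts mapping measured angle (deg) to a magnitude
    response (np.array). Only these two arcs hold real measurements; points in
    between are guessed here by blending toward the nearer measured plane.
    """
    h = hor_curves[min(hor_curves, key=lambda a: abs(a - hor_deg))]
    v = ver_curves[min(ver_curves, key=lambda a: abs(a - ver_deg))]
    w = abs(hor_deg) / (abs(hor_deg) + abs(ver_deg) + 1e-9)
    return w * h + (1.0 - w) * v
```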

All this gives the flexibility to get accurately visualized acoustic output of the speaker in the simulator, so the user can make good crossover decisions and learn what the acoustic performance of the measured construct really is. There is room for error when acquiring the data, but in most cases following the manual is the right procedure and should be about as accurate as a home and hobby situation allows. Minor errors don't seem to affect things too much.

That said, there is no point tweaking the xo to death unless the data is accurate enough, e.g. a measured response of the real thing with the possible errors known. I know it takes some time to get confident with the measurement procedure, IR to FR conversion, merging near and far field responses and whatnot, and there is still some error due to windowing. But if one wants to take the hobby seriously, this is definitely the way to go: hands dirty, making mistakes and learning from them is a sure way to progress.

Hopefully this helps anyone wondering what the heck is going on :) Some of the tools in VCAD are there to help with the design and build phase, which eventually leads to measurements, which are the input for the main program, the bread and butter. The main program turns the measurements into a visual representation of the acoustic output of the system in the form of various graphs, inspected from (almost) any direction and distance you want. If one has enough imagination to visualize the graphs back into "reality", one is able to tweak the physical construct as well and not just the crossover. This enables better-looking graphs (measurements) next time, which means better acoustic output, which equals a better system. The last problem is to connect the virtual visuals to actual perceived sound "quality", or the lack of it :) This is eventually the hard part, to get further and further. Very good systems can be designed just from the visuals I think, comparing to the CTA-2034 standard for example, better than ever before with only on-axis measurements or the pencil and paper method.
 
Edit time is over :) I tried to add a new second paragraph:

The individual responses that one loads into the drivers in the main program don't add any physical objects into the simulator, and the responses won't react to each other "in the physical domain" in the simulator like they would in reality (or in a simulation like BEM); only the acoustic responses superimpose. But the measurements (responses) would and should include the physical-domain interaction already baked in! This is also what happens with Hifijim's diffraction tool examples: the baffle is there for all the drivers, and each one represents a transducer measured on the same baffle from a different position (in all three versions).

ps.
Here is also a reason why manufacturer speaker measurement data does not represent reality: the measurements don't include your built speaker physically. The diffraction tool takes it a bit closer to reality, but it too is only a simple approximation. There is only so much one can do in the crossover network to affect the response that happens acoustically (through interaction with the physical construct), but there is some. And, in my view, VCAD should not be thought of as just a crossover simulator, because there is a lot more to learn from the system, mainly about the physical construct of your speaker and how it affects the response you are looking at. What you look at is what you hear, the whole system. Well, ok, to some extent; the effect of the room is only an approximation at best, and again it is left to mental visualization and user knowledge to figure out which parts of the graphs have relevant info for perceived sound and which do not :) The CTA-2034-A standard is the simplest reference, a good baseline, for this problem. The rest we have to learn one way or another, and hopefully this particular thread gathers some of that missing knowledge (at least I'm missing it) so we could perhaps take even more advantage of VituixCAD, to really connect the graphs to perceived sound quality in our own applications, in reality.
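Since CTA-2034-A keeps coming up as the baseline reference, here is a small sketch of one of its derived curves, the listening window, as I understand the standard (the data layout and function name are made up for illustration):

```python
import numpy as np

def listening_window(curves):
    """CTA-2034 style listening window: average of the on-axis, +/-10 deg
    vertical and +/-10/20/30 deg horizontal responses (9 curves in total).

    `curves` is a dict like {('H', 0): np.array, ('H', 10): ..., ('V', -10): ...}
    holding magnitudes in dB; the averaging is done in pressure, not in dB.
    """
    keys = [('H', 0)] + [('V', a) for a in (-10, 10)] \
                      + [('H', a) for a in (-30, -20, -10, 10, 20, 30)]
    lin = np.array([10 ** (curves[k] / 20.0) for k in keys])
    return 20 * np.log10(lin.mean(axis=0))
```

The other spinorama curves (early reflections, sound power and the directivity indices) are similar weighted averages over the full set of off-axis responses.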
 
Wouldn't putting the mic on axis for the creation of the diffraction/enclosure model grant way more freedom?
That way one could experiment with placement, if the crossover turns out to be "difficult to get right", by changing the exact x,y positions of the drivers.
It would make way more sense to me to grant myself more freedom to play with all variables during final crossover work.
 
^Yes, except you thought it backwards I think: the coordinate system doesn't move the drivers, only the measurements move. This is why we'd want to measure on the axis of each driver, to be able to move stuff around while preserving the "resolution", even though this distorts the "virtual system structure" a bit. I mean, if the mic was at the center of the DUT while the transducer is not, the mic is at an angle to the transducer, but it doesn't know at which angle; all it does is measure the response. Of course the user in charge knows this and perhaps is able to move and rotate the responses as they see fit in the main program. The important thing is to keep the mental picture of the speaker while playing with the responses and coordinates. In each case the whole response gets measured from a single mic location; the only thing that differs is where the resolution is, the accuracy of the data. For example, if the DUT is rotated with the tweeter as the center point, there won't be any accurate data around the woofer, if you want to inspect the system from that angle for some reason. Usually the important data is just the x and y plane at the design axis, so there won't be too much error unless the drivers are scattered around :)

Note: as only the responses move when changing the coordinates, not the drivers or the structure, playing around with the coordinates in the main program will introduce error, because the structure is now distorted (there is no structure, only the response of it). But if one is clever with the structure then this is not a problem (like a symmetrical minimal baffle connected with adjustable rods ;) ). Of course one can play with the distortion baked in, because the baffle diffraction effect can be minuscule in comparison to how the various drivers interact with each other as a system. With this in mind, one can iterate the process. First: make some initial best-guess set of measurements with the diffraction tool and play around with coordinates/crossover in the main window. If you end up with different coordinates than the initial measurements were made from, you can now make new measurements with the adjusted baffle and driver positions in the diffraction tool to get a bit more accurate simulated response. Rinse and repeat. Just remember that none of these methods will yield real-world results.

Do this exact process with a real DUT and real measurements and you end up with a very good system.
 
Well, there are several x,y,z positions available :) — the mic in relation to the speaker, but also the x,y,z position of the driver itself.
But changing this last one will change the diffraction effects in some cases as well, so one needs to be careful to choose an option and stick with it.

In a whole speaker-building process it would be a good idea to build a prototype and measure the simulated stuff anyway.
Unless you are the writer of the software and know exactly what to expect. ;)
 
Yes, it might be confusing; it takes some brain massage to get it visualized in the cortex :D

Basically one should forget about positioning drivers; only the responses you've measured get positioned, and the driver can be anywhere within the space captured by the measurements, depending on how you did the measurements. The measured responses, taken by rotating the DUT around the rotation axis in the Y and X planes, basically all point to a single location which the user is free to choose, and the driver center is the logical center point for a set of measurements.

Things are easiest to handle (and visualize in the brain) if the mic is at the driver center, the rotation axis is in the baffle plane through the driver center, and the mic is exactly as far from the baffle (rotation axis) for each driver (1 m for example), as explained in the manual. If the measurements are done like this, then the coordinates can be thought of as the positions of the drivers on the baffle in relation to the listening axis (a point on the baffle). Then it is easy to keep track of what's going on and play with the coordinates without losing the mental picture of the system, while maintaining some kind of accuracy.

In addition to this X and Y coordinate hustle, I've seen some confusion with the Z coordinate regarding the driver acoustic center. Z is not needed if things are done as in the manual. The acoustic centers are contained in the measurement data if the IR to FR conversion was made as per the manual, using the same window settings for all measurements and a constant mic distance from the rotation plane (baffle). Then the FR already includes the real distance to the driver's acoustic center and there is no need to worry about it.
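Here is a sketch of why Z then takes care of itself (made-up function, not taken from any particular measurement tool): if every impulse response is gated starting from the same absolute time reference and the mic distance is constant, the differences in arrival time between drivers, i.e. their acoustic-centre offsets, survive into the phase of the exported frequency response.

```python
import numpy as np

def ir_to_fr(ir, fs, t_start, t_len, nfft=8192):
    """Convert an impulse response to a complex frequency response using a
    gate that starts at the SAME absolute time t_start for every driver.

    Because the gate is not re-referenced to each driver's own impulse peak,
    the relative arrival times (acoustic centre offsets) stay encoded in the
    phase, so no separate Z coordinate is needed in the simulator.
    """
    i0 = int(round(t_start * fs))
    n = int(round(t_len * fs))
    gated = ir[i0:i0 + n] * np.hanning(2 * n)[n:]  # simple decaying half-window
    spec = np.fft.rfft(gated, n=nfft)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, spec
```

Real tools use nicer window shapes; the point here is only that the gate start must be common to all drivers rather than re-aligned per driver.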
 
Yes! But with the final end result in mind, a functioning loudspeaker system, playing with the diffraction tool is just an academic exercise. It can give insight into how stuff plays out and interacts, so one is able to form these mental pictures of speakers and their acoustic output and imagine a good system to build and improve :D Here absolute accuracy is not critical because it is not the end product. Even so, one has to carefully try to understand what is going on, what the data means and represents, and how to interpret it.

Basically, all there is to it is to know and be confident in what you are doing, and to use it as you like. Hopefully my ramblings help everyone!
 
Give this video a try
Yes I did. They mention different things than what is said here.

They put the distance to maximum.
They don't put the mic in the centre of the driver.
They just said to adjust the location of the driver and the microphone... The location of the driver is evident, but where to put the mic?
The listening axis, I assume, as they don't put it in the centre of the driver.
For the tweeter they put it off axis, but in the driver screen they only adjust the Y axis of the mic position. Why not the X axis too?
....

But this is the official help guide... so it should be done as they said, I assume.
 
It is also not clear where the real measured or traced data comes into the picture.

They start with the enclosure tool and create FRD and ZMA files based on the library parameters of the driver.
Then they use this FRD in the diffraction tool to create off-axis responses and put the outcome into the driver FR screen.

Where do they use the measured/traced data as input?
 
The enclosure tool can be used to get a bass box alignment that you then build. Bass boxes are measured nearfield, and the diffraction tool is used to add the effect of possible baffle step and to get some rough off-axis data. In addition to this you might measure the port output if it is a ported bass. This can be added with the merge tool.

This is done this way because the wavelengths are so long that bass boxes are almost impossible to measure at home, unless you have access to a large hall or can measure outside. Check out the VituixCAD manual. The point is, you need some kind of data for the bass, real or faked :) After you are done, the responses are loaded into the main program, into a bass driver.
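For what the merge step does conceptually, here is a minimal sketch (made-up function, not VituixCAD's merger tool) of splicing a nearfield bass measurement onto farfield/diffraction-tool data at a chosen frequency, with the levels matched at the splice point:

```python
import numpy as np

def merge_near_far(freqs, near_db, far_db, f_splice=300.0):
    """Splice nearfield (low frequency) data onto farfield data above f_splice.

    Levels are matched at the splice frequency by shifting the nearfield curve.
    A real merge would also blend phase and apply the baffle-step/diffraction
    correction to the nearfield part, which is what the diffraction tool's
    response is used for.
    """
    i = np.argmin(np.abs(freqs - f_splice))
    offset = far_db[i] - near_db[i]          # align levels at the splice point
    merged = np.where(freqs < f_splice, near_db + offset, far_db)
    return merged
```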
 
Nice!

I don't understand the value of the traced driver data, perhaps good practice material for a passive xo? But the result is not accurate for any real project, so it needs to be done again with real measured data eventually. One can just assume that a perfected crossover for a given set of measurements is only a matter of some time spent fiddling around, and use the ideal drivers instead while waiting for the measurement gear, power tools and sheet stock to arrive :)
 
I don't understand the value of the traced driver data, perhaps good practice material for a passive xo?
It's another way to try and include some more information into a preliminary simulation to explore possible options. The only time this is a problem is if someone doesn't understand the limitations of the method.

If a driver has a significantly non-flat response, it could be useful to help estimate the sort of slopes or crossover parts that might be needed; beyond that, idealized radiators work just as well for exploring directivity and positioning.
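For reference, the idealized radiator that such explorations usually assume is a flat circular piston in an infinite baffle; here is a small sketch of its off-axis level (function name and defaults are mine):

```python
import numpy as np
from scipy.special import j1

def piston_off_axis_db(freq, angle_deg, radius, c=343.0):
    """Relative off-axis level of an ideal flat circular piston in an
    infinite baffle: D(theta) = 2*J1(ka*sin(theta)) / (ka*sin(theta)),
    with k = 2*pi*f/c and a = piston radius in metres."""
    k = 2 * np.pi * freq / c
    x = k * radius * np.sin(np.radians(angle_deg))
    xs = np.where(np.abs(x) < 1e-9, 1e-9, x)   # avoid division by zero on axis
    d = np.where(np.abs(x) < 1e-9, 1.0, 2 * j1(xs) / xs)
    return 20 * np.log10(np.abs(d))
```

Comparing this to a measured driver shows how far real cones, domes and ring radiators deviate from the ideal, which is the limitation to keep in mind.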
 
A written guide here that might be easier to follow step by step

That is a good post. We were all new at this at one time... My experience with VituixCAD is about 1 year... This should be helpful for people who are starting out.

I don't understand the value of the traced driver data, perhaps good practice material for a passive xo?

Yes, it is good practice. But it is also very helpful in doing feasibility studies. Before spending $$$ on drivers, it is comforting to see that a workable design is at least plausible. I think of it as building a virtual prototype.

Jim
 
I haven't even tried to learn VCAD, but I see T/S data as key to a driver's low end, sensitivity and max SPL performance. The directivity of a cone or dome always has many deviations from an ideal driver (flat plate) or even from an ABEC sim. But VCAD has an enormous database of measured responses!

I do Edge sims almost daily. Basta is even more helpful, but needs a bit more (=too much) brainwork.
https://www.tolvan.com/index.php?page=/main/home.php
 
I've noticed many irregularities with ring radiators, recessed domes and deep-profile cones. But yes, basically the off-axis behaviour is roughly right in sims. Sort of "normalized to the on-axis response".
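A trivial sketch of what "normalized to the on-axis response" means in practice (made-up helper, dB data assumed): subtract the on-axis curve from each off-axis curve so only the directivity pattern remains.

```python
def normalize_to_on_axis(curves_db, on_axis_db):
    """Return directivity-only curves: off-axis dB minus on-axis dB.
    On-axis wiggles common to all angles cancel, leaving the radiation pattern."""
    return {angle: db - on_axis_db for angle, db in curves_db.items()}
```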

Examples of unorthodox directivity from HifiCompass

[attached: two directivity measurement plots from HifiCompass]