CamillaDSP - Cross-platform IIR and FIR engine for crossovers, room correction etc.

A question regarding levels (digital) and headroom with CamillaDSP:

My current setup is a 3-way system where:
  • I have an Intel-based PC running Ubuntu, with CamillaDSP running as a service, used as a media and streaming device.
  • The sound signal goes via HDMI to a 7.1 receiver.
  • The receiver acts as DAC and amplifier in PCM mode.
  • The config is more or less the 2-way example that Henrik published, modified to 3-way:
    • Gain for tweeter: -6
    • Gain for mid: -4
    • Gain for bass: 0
    • To level-match the drivers.

As there are several points where the level can be adjusted, I use the following settings:
  • In the streaming apps (Spotify, Netflix, YouTube etc.) the volume is set to max.
  • The system level (Ubuntu) is controlled by a Bluetooth device, typically 25-100% depending on where I am in the house.
  • The receiver is set to a fixed -20 dB under normal conditions (-15 dB in party mode).

My questions:
How will filters such as peaking and shelving behave when I change the system volume from 50% to 100%? Is there enough headroom in the digital signal to the receiver for a +6 dB peaking filter, or will the signal clip as I approach 100%?
The same question applies if I use a Linkwitz transform for the bass; in theory the gain can be +12 dB at low frequencies. When does the digital signal become a bottleneck?
Is there some kind of self-leveling in CamillaDSP to avoid digital clipping from filters that add gain?

If this is an issue:
What can be done to prevent clipping and bottlenecks?
- Set the gain lower in the config and then compensate at the receiver?
When I set the receiver to a higher output (-15 dB or above), I get clicking sounds in the speakers when switching apps etc.
- Not wanted; what can be done about it?
 
Is it now perhaps possible to make decimal-dB adjustments per channel in the mixer settings?
Yes, that should work.

Mixer tab - would it be possible to have a description per destination, one that carries over to the Pipeline? E.g. for a 2-way active setup with 4 destinations (0..3):

Hi-Left
Low-Left
Hi-Right
Low-Right
I would like to add something like that, but it will probably take a while; it's not at the top of my to-do list.
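In the meantime, YAML comments in the config can act as makeshift per-destination labels, and the gain values also show the decimal-dB adjustment mentioned above. A minimal sketch (not a tested config) of a 2-in/4-out mixer for the example above, with placeholder names and gains:

mixers:
  to_4ch:
    channels:
      in: 2
      out: 4
    mapping:
      - dest: 0          # Hi-Left
        sources:
          - channel: 0
            gain: -6.5   # decimal dB is accepted here
      - dest: 1          # Low-Left
        sources:
          - channel: 0
            gain: 0
      - dest: 2          # Hi-Right
        sources:
          - channel: 1
            gain: -6.5
      - dest: 3          # Low-Right
        sources:
          - channel: 1
            gain: 0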
 
A question regarding levels (digital) and headroom with CamillaDSP:

My current setup is a 3-way system where...

My questions:
How will filters such as peaking and shelving behave when I change the system volume from 50% to 100%? Is there enough headroom in the digital signal to the receiver for a +6 dB peaking filter, or will the signal clip as I approach 100%?
The same question applies if I use a Linkwitz transform for the bass; in theory the gain can be +12 dB at low frequencies. When does the digital signal become a bottleneck?
Is there some kind of self-leveling in CamillaDSP to avoid digital clipping from filters that add gain?

If this is an issue:
What can be done to prevent clipping and bottlenecks?
- Set the gain lower in the config and then compensate at the receiver?
I will let @HenrikEnquist confirm this, but computer DSP processing typically has virtually unlimited headroom. This is quite different from hardware DSP or analog systems. The reason is that the audio signal has a magnitude on the order of 1 and is typically represented by a 32-bit floating-point number (sometimes 64 bits), which can take on a much, much larger range of values before round-off error in the floating-point representation becomes a problem. So gain peaks inside the filters really don't matter. To test this theory, you can put two processing blocks in series: the first with some huge (by audio standards) amount of gain, like +100 dB, and the second with the reverse, e.g. -100 dB. The combination of the two gain changes should not affect the audio signal (see the sketch below).

From my experience, the signal level only matters once the signal is handed over to ALSA and rendered by the DAC, and for computer audio it is usually required to be within the range -1.0 to 1.0. During the earlier processing steps, unless the software (e.g. CamillaDSP) imposes some internal limit, there is no limit on the signal amplitude.
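A minimal sketch of that test as a CamillaDSP config fragment (the filter names are made up for illustration): two Gain filters in series whose effects should cancel exactly.

filters:
  boost_100:
    type: Gain
    parameters:
      gain: 100.0    # huge boost, far beyond any audio signal
  cut_100:
    type: Gain
    parameters:
      gain: -100.0   # undo the boost before the output stage
pipeline:
  - type: Filter
    channel: 0
    names:
      - boost_100
      - cut_100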
 
How will filters such as peaking and shelving behave when I change the system volume from 50% to 100%? Is there enough headroom in the digital signal to the receiver for a +6 dB peaking filter, or will the signal clip as I approach 100%?
The same question applies if I use a Linkwitz transform for the bass; in theory the gain can be +12 dB at low frequencies.
If the level into CamillaDSP can be full scale, then you need to make sure that you don't have more than 0 dB of total gain at any frequency. Ideally you should have a margin of a few dB; 3 dB is a good start.
So a +6 dB peaking filter should be accompanied by a -9 dB gain (see the sketch below). Without that it probably won't clip very often, but it will happen from time to time. How often depends on what music you listen to and at what frequency the peak sits. It may be fine; a small number of clipped samples is mostly inaudible. Check the CamillaDSP logs or the GUI, they tell you when it clips.
Linkwitz transforms tend to clip quite badly if you don't attenuate. There are often large-amplitude signals at low frequencies, so you will get severe clipping that isn't fun to listen to.
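A minimal sketch of that pairing as a config fragment (frequency, Q and names are placeholders):

filters:
  peak_boost:
    type: Biquad
    parameters:
      type: Peaking
      freq: 60
      q: 1.0
      gain: 6.0    # the boost that eats into the headroom
  headroom:
    type: Gain
    parameters:
      gain: -9.0   # +6 dB peak plus a 3 dB margin
pipeline:
  - type: Filter
    channel: 0
    names:
      - peak_boost
      - headroom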

No idea about the pops when switching apps.

Edit after seeing the reply by @CharlieLaub:
CamillaDSP will never clip internally, only at the output when samples are converted to the output format. Inside the pipeline it's just as Charlie describes: you can let the level go as high as you want, as long as it's back to a reasonable level at the end.
 
It looks like compiling with "codegen-units" set to 1 avoids the LLVM bug. This means the compiler doesn't split the code into smaller units that are compiled in parallel. The downside is that it increases build times, but the advantage is that it enables some more optimization. I hadn't compared before since I didn't expect much difference, but it actually gives a meaningful improvement of about 10%. I checked by measuring how long it takes to process a given number of samples from /dev/urandom to /dev/null.
With the default setting (16 codegen units) it takes 3.05 seconds on average.
With 1 codegen unit it drops to 2.65 seconds.

Use this command to build:
RUSTFLAGS="-C codegen-units=1" cargo build --release

I think that 10% of extra speed is worth the somewhat longer build (1 min 48 s vs 1 min 20 s on my old Ryzen laptop). I will add this to the cargo configuration to make it the default.
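For reference, the same setting can be made the default via the release profile in Cargo.toml; presumably the cargo configuration change would look something like this:

[profile.release]
codegen-units = 1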
 
@HenrikEnquist Found a bug. I doubt many people will use dither though...
 

Attachment: 2023-10-5 1-39-14.png (screenshot of the bug)
IIUC the pause just results in no chunks being delivered => no play_buffer() calls; no explicit pause of the ALSA device is issued. Then IMO it's logical that upon resuming the delivery of chunks the initial status of the device is xrun - the already-delivered samples have been consumed and no new samples arrived.

The IMO interesting outcome is when no xrun occurs after the pause. How is it that the samples were not consumed by the running soundcard during the pause? If that is really the case, maybe the USB packet size vs. URB count (packets filled in one ALSA period) plays a role.

But this seems to be only for the sake of completely understanding why it happens (which I find useful for myself); the practical effect is likely none.

Slight update to this.

The Okto is definitely doing something weird related to the PAUSED state. If I remove the silence timeout, the CamillaDSP-reported buffer level stays roughly equal to the chunk size. However, if the silence timeout is used, the buffer level drops greatly when it goes to PAUSED, down to 100-200, and it seems to stay at this level after future PAUSED/RUNNING transitions. Obviously this has a pretty big impact on latency. In either case I get no reported buffer underruns.

Michael
 
I'm having an issue that I think is probably something obvious that I'm missing. I have CamillaDSP 1.0.3 up and running on Windows, with the GUI backend and frontend. The system is Windows 10 on a Ryzen 5800X3D, output to a VB-Audio virtual cable, into CamillaDSP, and out to a Focusrite USB interface. Basic functionality seems to work great (e.g. apply a 100 Hz low-pass filter and watch/listen to something; it obviously works). But when I apply a (supposedly) minimum-phase convolution filter (.wav file), I get a delay in the audio output that seems roughly equal to half the filter length. I've tried a few filter .wav files and they all have the same effect, for example a low-pass Butterworth filter generated in rePhase on the "Minimum Phase Filters" tab, with the resulting .wav file used in CamillaDSP. Maybe this is a question for the rePhase thread?
 
If I remove the silence timeout, the CamillaDSP-reported buffer level stays roughly equal to the chunk size. However, if the silence timeout is used, the buffer level drops greatly when it goes to PAUSED
There is some logic in CamillaDSP that tries to keep the delay constant. It basically makes an estimate of how long it needs to sleep before starting the playback device in order to get the wanted delay. That makes some assumptions about how devices behave, so it won't always work.
You could try setting the target buffer level lower; that could maybe help in making it more consistent (see the fragment below).
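The target buffer level is set in the devices section of the config; a minimal fragment with placeholder values (I believe it defaults to the chunksize when not set):

devices:
  samplerate: 96000
  chunksize: 2048
  target_level: 1024   # try something below the chunksize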
 
Yep, here's a 200 Hz low-pass generated in rePhase; the impulse sits exactly in the middle of the file. What should I do?
Can you ask rePhase to trim the IR? A full second is very, very long, and all the zeros before that bump only add delay (and eat CPU time).
You can also trim it manually in Audacity. Whatever you do, just make sure to trim all filters equally.
 