USB DSP-DAC for Daphile
Hi,
has anyone been able to test it with Daphile? I would appreciate a short comment. Thanks!
I have posted my initial release of the scripts used to stream audio from the host PC to the Pi, along with an installation/directions/HOW-TO. These documents are attached.
It would be great for someone to try this out and then post about it here. If any problems arise, please let me know.
Hello Charlie,
Many thanks for your script, it has helped me a lot!
I've been trying your script out with a Raspberry Pi Zero coupled with a HifiBerry MiniAmp. I'm using an old iMac's speakers, which produce pretty good sound given their size. I'm also powering the whole thing directly from my Windows computer's USB port and... it works very well!
I have two questions:
-> Would you allow me to publish your (slightly modified) scripts and HOW-TO for the Raspberry Pi Zero in a GitHub repo? I would of course credit you however you want for the scripts.
Sadly, I ran into a wall with the MiniAmp: I don't know yet how to set the volume. The MiniAmp doesn't offer software volume control, so I added it following this post, but of course it doesn't work.
-> With your gstreamer wisdom, do you know how I could set the volume in gstreamer dynamically, say with a little rotary controller like this one?
-> Would you allow me to publish your (slightly modified) scripts and HOW-TO for the Raspberry Pi Zero in a GitHub repo? I would of course credit you however you want for the scripts.
Yes, of course, please do! I'm glad that you like it enough to share it with others. That's what DIY is all about.
Sadly, I ran into a wall with the MiniAmp: I don't know yet how to set the volume. The MiniAmp doesn't offer software volume control, so I added it following this post, but of course it doesn't work.
-> With your gstreamer wisdom, do you know how I could set the volume in gstreamer dynamically, say with a little rotary controller like this one?
You cannot change anything about Gstreamer dynamically when using the command line invocation of Gstreamer. Instead, what I do is create a new ALSA softvol control for the input or output of your soundcard (if there is not one already) and then control that from the command line using amixer. You could write a script that polls the rotary encoder for state changes and then sets the volume in a similar way.
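A minimal sketch of such a polling script follows. This is only an illustration, not part of my scripts: the pin-reading function is left abstract (on a Pi you would read two GPIO pins), the quadrature decode table is the standard one, and the control name "Gain" and card name "Audio" are placeholders that you would match to your own softvol setup.

```python
import subprocess
import time

# Quadrature decode table: each encoder state is a 2-bit value (A << 1 | B).
# Valid Gray-code transitions map to +1 (one direction) or -1 (the other);
# anything else (contact bounce, missed step) is ignored as 0.
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode_step(prev_state, new_state):
    """Return +1, -1, or 0 for a single encoder state change."""
    return TRANSITIONS.get((prev_state, new_state), 0)

def nudge_volume(direction, card="Audio", control="Gain", step_db=1):
    """Move the ALSA control up or down by step_db using amixer."""
    value = "%ddB%s" % (step_db, "+" if direction > 0 else "-")
    subprocess.run(["amixer", "-D", "hw:CARD=%s" % card,
                    "--", "sset", control, value], check=True)

def poll_encoder(read_pins, poll_interval=0.001):
    """Poll read_pins() (returns the 2-bit A/B state) and adjust volume."""
    prev = read_pins()
    while True:
        state = read_pins()
        if state != prev:
            direction = decode_step(prev, state)
            if direction:
                nudge_volume(direction)
            prev = state
        time.sleep(poll_interval)
```

With 1 dB steps the amixer "NdB+" / "NdB-" relative syntax maps one encoder detent to one mixer step.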
Here is how to create the softvol:
This example creates a new control for capture only (See NOTES below):
Code:
pcm.mysoftvol {
    type asym
    firstone.pcm {
        type softvol
        slave.pcm "hw:CARD=Audio,DEV=0"
        control.name "Gain Capture Volume"
        control.card Audio
        min_dB -50.0
        max_dB 0.0
        resolution 51
    }
}
NOTES:
- I am adding a volume control to the ALSA soundcard with the name "Audio". You should substitute the name of the card on your system in these lines.
- Set resolution = 1 + max_dB - min_dB to get 1dB steps
- The control will appear in alsamixer and amixer with the name "Gain". Appending "Capture Volume" to the name you wish to see restricts the control to be for audio capture only. Appending "Playback Volume" to the name restricts that control to audio playback only. With no appended string, the control will act on both capture and playback simultaneously (you don't want that). If you want to have separate and independent capture AND playback softvol controls, copy everything under firstone.pcm, paste it back in under pcm.mysoftvol, and then change firstone.pcm to secondone.pcm in the newly pasted section.
- In order for the softvol control to appear in alsamixer or amixer, you have to force ALSA to use it first. I do this using speaker-test:
Code:
speaker-test -D hw:CARD=mysoftvol -c 2 -r 44100 -f S16_LE
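As a quick sanity check of the resolution rule above: mixer step i maps to min_dB plus i evenly spaced increments, and with resolution = 1 + max_dB - min_dB each increment is exactly 1 dB. A small sketch using the values from the example config (not part of the scripts, just arithmetic):

```python
def softvol_db_for_step(step, min_db=-50.0, max_db=0.0, resolution=51):
    """dB value of an integer mixer step of a softvol control.
    Steps run 0 .. resolution-1; with resolution = 1 + max_db - min_db
    each step is exactly 1 dB."""
    return min_db + step * (max_db - min_db) / (resolution - 1)

print(softvol_db_for_step(0))    # lowest step: -50.0 dB
print(softvol_db_for_step(50))   # highest step: 0.0 dB
print(softvol_db_for_step(25))   # midpoint: -25.0 dB
```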
To set the volume of a control using amixer I use this syntax:
Code:
amixer -D hw:CARD=Audio -- sset Gain XXX
Code:
charlie@ApolloLake-1:~$ amixer -D hw:CARD=PCH -- sset Master 45
Simple mixer control 'Master',0
Capabilities: pvolume pvolume-joined pswitch pswitch-joined
Playback channels: Mono
Limits: Playback 0 - 64
Mono: Playback 45 [70%] [-19.00dB] [on]
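To decode that amixer readout: with limits 0-64, the raw value 45 is 70% of the range, and since this particular control evidently uses 1 dB steps with 0 dB at the top, 45 corresponds to 45 - 64 = -19 dB. A tiny sketch of the arithmetic (the 1 dB step size is an assumption that matches the printed -19.00dB; other cards use other mappings):

```python
def mixer_percent(value, vmin, vmax):
    """Integer percent of the raw range, as amixer reports it (rounded)."""
    return round(100 * (value - vmin) / (vmax - vmin))

def mixer_db(value, vmax, step_db=1.0):
    """dB readout assuming uniform step_db per integer step,
    with the top of the range at 0 dB."""
    return (value - vmax) * step_db

print(mixer_percent(45, 0, 64))  # 70, matching "45 [70%]"
print(mixer_db(45, 64))          # -19.0, matching "[-19.00dB]"
```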
Any problems or questions, just post again here or PM me for help.
Have fun!
Alternatively, you can run the gstreamer pipeline from Python. This is a simple (crude) example: the pipeline is started with the volume element given a specific name, the pipeline is passed as a parameter to a new thread, and the thread modifies the volume by finding the element in the pipeline and changing its volume property. The thread could easily wait for encoder changes and increment/decrement the volume as needed.
Many other things can be done quite easily as well (reconfiguring the pipeline, catching messages and events, etc.). I really recommend trying the dynamic gst way: an existing pipeline string can be used with Gst.parse_launch, and individual elements are accessed via pipeline.get_by_name(). Catching gstreamer messages and events is all surprisingly easy to debug, e.g. in PyCharm with breakpoints, watches, and code evaluation.
Code:
from threading import Thread
from time import sleep

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

VOL_ELEM_NAME = 'volume_element'
VOLUME_PROP = 'volume'


def thread_function(pipeline):
    global stop_threads
    while True:
        if stop_threads:
            break
        volume_elem = pipeline.get_by_name(VOL_ELEM_NAME)
        volume = volume_elem.get_property(VOLUME_PROP)
        print("Current volume: %f" % volume)
        volume += 0.1
        volume_elem.set_property(VOLUME_PROP, volume)
        sleep(1)


Gst.init(None)
# build the pipeline
pipeline = Gst.parse_launch(
    "audiotestsrc ! volume name=%s volume=0.0 ! level ! fakesink silent=TRUE"
    % VOL_ELEM_NAME)
# start playing
pipeline.set_state(Gst.State.PLAYING)

stop_threads = False
thread = Thread(target=thread_function, args=(pipeline,))
thread.start()

# wait until EOS or error
bus = pipeline.get_bus()
bus.add_signal_watch()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE,
    Gst.MessageType.ERROR | Gst.MessageType.EOS
)
if msg:
    t = msg.type
    if t == Gst.MessageType.ERROR:
        err, dbg = msg.parse_error()
        print("ERROR:", msg.src.get_name(), " ", err.message)
        if dbg:
            print("debugging info:", dbg)
    elif t == Gst.MessageType.EOS:
        print("End-Of-Stream reached")
    else:
        # this should not happen. we only asked for ERROR and EOS
        print("ERROR: Unexpected message received.")

# free resources
pipeline.set_state(Gst.State.NULL)
stop_threads = True
thread.join()
Here is another example: a simple code snippet which tracks mouse clicks inside a video shown from a camera. A probe is added to one of the element pads, filtering upstream events; the event info is passed to the on_event function, which filters NAVIGATION events and passes the mouse click coordinates to the mouse_clicked function for sending zoom-in and zoom-out HTTP commands to the camera. It took me a while to figure out the structures needed to get at the event, but once the internals are revealed, the actual control of gstreamer is trivial.
Code:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pydevd  # only needed for debugging the probe callback in PyCharm


def on_event(pad, info):
    # this is to enable pycharm breakpoints in a thread started from C
    pydevd.settrace(suspend=False, trace_only_current_thread=True)
    event = info.get_event()
    if event.type == Gst.EventType.NAVIGATION:
        struct = event.get_structure()
        if struct.get_string('event') == 'mouse-button-press':
            debug(struct)
            mouse_clicked(struct.get_double("pointer_x").value,
                          struct.get_double("pointer_y").value)
    return Gst.PadProbeReturn.OK


# USER, PASSWD, CAM_ADDR, EVENT_BIN_NAME, debug() and mouse_clicked()
# are defined elsewhere in the script
Gst.init(None)
# build the pipeline
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://%s:%s@%s:554?channel=0 latency=1 ! rtph264depay ! "
    "h264parse ! vaapih264dec low-latency=1 name=%s ! vaapisink fullscreen=1" % (
        USER, PASSWD, CAM_ADDR, EVENT_BIN_NAME)
)
# start playing
pipeline.set_state(Gst.State.PLAYING)

# for some reason no events come from the vaapisink bin (first in the list),
# but the second bin (vaapih264dec) works OK
bin = pipeline.get_by_name(EVENT_BIN_NAME)
# sink = 0, src = 1
pad = bin.pads[0]
pad.add_probe(Gst.PadProbeType.EVENT_UPSTREAM, on_event)

# wait until EOS or error
bus = pipeline.get_bus()
bus.add_signal_watch()
msg = bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE,
    Gst.MessageType.ERROR | Gst.MessageType.EOS
)
if msg:
    t = msg.type
    if t == Gst.MessageType.ERROR:
        err, dbg = msg.parse_error()
        print("ERROR:", msg.src.get_name(), " ", err.message)
        if dbg:
            print("debugging info:", dbg)
    elif t == Gst.MessageType.EOS:
        print("End-Of-Stream reached")
    else:
        # this should not happen. we only asked for ERROR and EOS
        print("ERROR: Unexpected message received.")

# free resources
pipeline.set_state(Gst.State.NULL)
Thank you very much for your long and quick answers!
Charlie, your solution helped me a lot, and allowed me to understand ALSA and softvol a little better. I now have softvol working and will start working on the code to control volume from the rotary controller.
phofman, although I like your solution very much (and I'm a Python dev), since I'm on a Raspberry Pi Zero I'm afraid of going in this direction only to find out it's not powerful enough to run all the code at the same time.
Currently the CPU is already overloaded when I run Gstreamer while accessing the Pi via SSH, so I know there is not much headroom.
Gstreamer takes about 75-90% CPU when I send it some data.
I'll continue and keep you posted.
Regards
The volume Python code has almost no overhead; the only extra work is catching the EOS and error messages, which is not compulsory if you can exit the script in some other way.
But gstreamer, just like any other audio player, should not take any significant CPU unless it is resampling, which should be avoided. IMO the cause of your CPU load should be investigated and fixed.
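One quick way to check whether resampling or format conversion is happening is to compare your source parameters with what actually reaches the driver: while a stream is playing, ALSA exposes the negotiated parameters in /proc/asound/cardX/pcmYp/sub0/hw_params (the file just reads "closed" when the device is idle). A small parser sketch; the sample contents below are illustrative, not from your system:

```python
def parse_hw_params(text):
    """Parse the contents of /proc/asound/cardX/pcmYp/sub0/hw_params
    (only populated while the device is open) into a dict."""
    params = {}
    for line in text.splitlines():
        if ':' in line:
            key, _, value = line.partition(':')
            params[key.strip()] = value.strip()
    return params

# Illustrative contents while a 44.1 kHz, 32-bit, stereo stream is playing:
sample = """access: RW_INTERLEAVED
format: S32_LE
channels: 2
rate: 44100 (44100/1)
period_size: 4410
buffer_size: 22050"""

p = parse_hw_params(sample)
print(p['format'], p['rate'])  # S32_LE 44100 (44100/1)
# If 'rate' here differs from your source rate, resampling is happening
# somewhere before the driver.
```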
Very happy with the script, but I don't understand why my Pi only accepts "plughw" instead of the advised "hw".
I reckon it's something to do with resampling but I can't understand why.
Any help is more than welcome
hw:X is directly the soundcard driver, which supports only a limited set of combinations of channel count, sample size, and sample rate. An unsupported combination is refused with an error. The alsa plug plugin (inserted into the chain by using the plughw:X device name) does all the minimum necessary conversions to convert the requested parameters of the stream to the accepted parameters of the soundcard driver.
You can list your accepted soundcard params e.g. by running
Code:
aplay --dump-hw-params -D hw:X /dev/zero
The necessary conversions performed by the plug plugin to convert parameters of your.wav to those supported by the driver are listed in verbose mode of aplay (param -v):
Code:
aplay -v -D plughw:X your.wav
Thank you phofman for your help.
Here is what I got:
Code:
aplay --dump-hw-params -D hw:0 /dev/zero
Playing raw data '/dev/zero' : Unsigned 8 bit, Rate 8000 Hz, Mono
HW Params of device "hw:0":
--------------------
ACCESS: MMAP_INTERLEAVED RW_INTERLEAVED
FORMAT: S32_LE DSD_U32_LE
SUBFORMAT: STD
SAMPLE_BITS: 32
FRAME_BITS: 64
CHANNELS: 2
RATE: [44100 768000]
PERIOD_TIME: (41 743039)
PERIOD_SIZE: [32 32768]
PERIOD_BYTES: [256 262144]
PERIODS: [2 2048]
BUFFER_TIME: (83 1486078)
BUFFER_SIZE: [64 65536]
BUFFER_BYTES: [512 524288]
TICK_TIME: ALL
--------------------
aplay: set_params:1339: Sample format non available
Available formats:
- S32_LE
- DSD_U32_LE
If I understand correctly, I should use S32_LE with the hw:X device to play directly without resampling.
Trouble is that VB-Audio Cable is limited to 24 bits and can't play 32-bit.
Any idea?
VB Audio Cable is a Windows app; how is it related to your Linux playback?
If the plug is just changing the bit depth from 32 to 24 or vice versa, I would not really worry about it too much.
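To see why the 24/32-bit conversion is harmless: a 24-bit sample placed in the top three bytes of a 32-bit word round-trips exactly, so no audio information is gained or lost either way. A quick demonstration (pure illustration, not part of the scripts):

```python
def s24_to_s32(sample_24):
    """Pack a signed 24-bit sample into S32 by shifting it into the top
    three bytes; the low byte is zero, so nothing is added or lost."""
    assert -(1 << 23) <= sample_24 < (1 << 23)
    return sample_24 << 8

def s32_to_s24(sample_32):
    """Recover the 24-bit sample (arithmetic shift keeps the sign)."""
    return sample_32 >> 8

for s in (0, 1, -1, (1 << 23) - 1, -(1 << 23)):
    assert s32_to_s24(s24_to_s32(s)) == s  # round-trips exactly
print("24-bit samples survive the trip through S32 unchanged")
```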
You could look for another way to loopback under Windows that can support 32 bits. Examples:
LoopBeAudio - A Virtual Audio Loopback Device
Tutorial - Recording Computer Playback on Windows - Audacity Manual
Hi CharlieLaub,
Thanks for your help.
I tried LoopBeAudio at 32 bit, but the script doesn't start on either the Windows machine or the Pi.
Modified script on the Pi:
Code:
#program parameters. user may edit these.
IP_address=192.168.0.11
bit_depth=32
sample_rate=44100
output_audio_format=S32LE
ALSA_output_device=plughw:0,0
When launched on the Pi:
Program execution began at: 01/04/21 16:59:25
01/04/21 16:59:26 new status:
IP: up, receiving audio data, gstreamer not running
action: gstreamer pipeline launched at 01/04/21 16:59:26
action: gstreamer pipeline launched at 01/04/21 16:59:29
action: gstreamer pipeline launched at 01/04/21 16:59:33
The gstreamer pipeline doesn't start.
On my Windows machine, the modified script:
Code:
REM the bit depth of the audio stream (value must be 16 or 24):
SET bit_depth=32
REM the IP address to which the audio data will be streamed:
SET destination=192.168.0.16
REM set to NUL if no output file is desired
SET send_gstreamer_output_to=NUL
REM duration of output buffer in milliseconds
SET output_buffering=60
On the Windows machine's screen:
Program started at 04/01/2021 18:16:57.65
-------------------------------------------------------------
destination IP 192.168.0.16 can be reached. Launching gstreamer at 04/01/2021 18:17:00.05
destination IP 192.168.0.16 can be reached. Launching gstreamer at 04/01/2021 18:17:04.91
destination IP 192.168.0.16 can be reached. Launching gstreamer at 04/01/2021 18:17:23.76
destination IP 192.168.0.16 can be reached. Launching gstreamer at 04/01/2021 18:17:27.79
Gstreamer doesn't want to start.
Not a great success so far
Regarding 24 vs. 32 bits: it really does not matter whether Windows sends 24 or 32 bits when 24 bits is the maximum available content/information. No need to struggle for 32 bits on the Windows side.
What counts is the 32bit format entering the driver - easily provided by the alsa sink config at the very last stage. Very likely the pipeline itself uses a fixed format internally, changing the incoming format as required.
IMO just changing the output format definition in the script from S24LE to S32LE for this particular soundcard would do:
Code:
output_audio_format=S32LE
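For illustration, the stage that setting affects could be written out as a gstreamer pipeline fragment: audioconvert (plus audioresample, if rates ever differed) adapts whatever arrives to the caps forced right before alsasink. The helper below is just a sketch; the element order is standard gstreamer, the device name is a placeholder, and the actual script's pipeline may differ:

```python
def receiver_tail(fmt="S32LE", rate=44100, channels=2, device="hw:0,0"):
    """Build the output end of a receiving pipeline: convert the incoming
    stream to the card's native format just before the ALSA sink."""
    caps = "audio/x-raw,format=%s,rate=%d,channels=%d" % (fmt, rate, channels)
    return "audioconvert ! audioresample ! %s ! alsasink device=%s" % (caps, device)

print(receiver_tail())
# audioconvert ! audioresample ! audio/x-raw,format=S32LE,rate=44100,channels=2 ! alsasink device=hw:0,0
```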