Let's consider a BeagleBone Black instead.
Beagleboard:BeagleBoneBlack - eLinux.org
Let's consider USB Audio. The BeagleBone Black can be a very good audio source, this time over USB, on condition that the BeagleBone Black runs the proper software driver, taking advantage of the USB2 asynchronous audio mode.
The Asus Xonar U7 soundcard is very good at this. It supports the USB2 asynchronous audio mode. Inside, there is a quartz oscillator exploited as a high-quality audio clock master. Everybody should rely on such a setup. And this is an 8-channel soundcard. Which means that the day the BeagleBone Black can run JRiver Media Center for Linux, and persuade JRiver Media Center to output the audio to an emulated VST, such a VST can act as a stereo 4-way crossover, sending the 8-channel audio over USB to the Asus Xonar U7 soundcard.
Look how superior the BeagleBone Black is.
Schematic is here
1 GHz single-core CPU
512 MB RAM
USB client for power & communications
USB host
Ethernet not squatting the USB
2x 46-pin headers
$60, maybe including shipping
Wait a moment: from where is JRiver Media Center supposed to take the audio? Please tell me ...
From I2S?
From an SPDIF-to-I2S converter?
Wait a moment: we know that the Asus Xonar U7 is going to operate as the audio clock master, regularly requesting audio samples.
That's plainly incompatible with an SPDIF source, converted to an I2S source, connected on the expansion headers.
The two audio clock domains are uncorrelated.
We are kaput.
From Ethernet or WiFi perhaps, grabbing streamed audio coming from the internet (Spotify, a NAS, etc.)?
Yes, this is a recommended method, as the BeagleBone Black Ethernet is properly implemented, not squatting the USB.
Another solution is to grab audio from the Asus Xonar U7 analog Line-In. Quite a shame.
Albeit not stupid, if the aim is to play vinyl in real time.
It requires a high-quality turntable, a high-quality cartridge, and a high-quality preamp.
It will be exposed to acoustic feedback.
Quite doubtful, thus.
Now look, there is a way to use the BeagleBone Black I2S that has not yet been investigated much.
It consists of taking the BeagleBone Black for what it is: a CPU that's quite fast, but unable to produce delicate things like a high-quality MCLK, or a high-quality frame sync operating at 44.1 kHz and 48 kHz.
Consider configuring the BeagleBone Black I2S as slave.
Not even dealing with an MCLK.
Say there is audio coming from a CD/DVD/Blu-ray player, through SPDIF.
Say you connect an SPDIF-to-I2S converter.
It will output the I2S frame sync, bit clock, and audio data.
The SPDIF-to-I2S converter thus acts as I2S master, and audio clock master.
The BeagleBone Black reads such audio, as I2S slave.
The BeagleBone Black returns processed audio, still as I2S slave.
And the DAC (possibly an assembled PCM5102A DAC) reads such audio, also as slave.
The whole chain thus gets sequenced by a sole device: the SPDIF-to-I2S converter.
The end result will only depend on the quality of the frame sync delivered by the SPDIF-to-I2S converter.
And now, here is the good news: the BeagleBone Black CPU features several I2S ports. Actually they are McASPs (Multichannel Audio Serial Ports), supporting I2S and TDM, the latter being a kind of 8-channel I2S. We can thus implement a stereo 4-way crossover ... provided the BeagleBone Black PCB routes all McASP signal lines to the 2x 46-pin headers.
Another conceivable arrangement is to have the same ambition as JMF11: transforming the BeagleBone Black into a high-quality USB soundcard. It should emulate some reputable USB2 async soundcard, say the miniDSP miniStreamer.
There shall be a quartz oscillator as audio clock master, generating MCLK = 256 x Fs.
There shall be a DAC reading such MCLK.
Such DAC to operate as I2S master.
The BeagleBone Black operating as I2S slave, not even touching MCLK, but receiving the frame sync and the bit clock.
The BeagleBone Black requesting audio packets through USB.
The BeagleBone Black outputting stereo audio as I2S slave.
Okay, this can work, provided one succeeds in emulating a miniDSP miniStreamer. There is a big programming effort required.
Provided the BeagleBone Black PCB routes all McASP signal lines to the 2x 46-pin headers, the BeagleBone Black thus appears much better than the Raspberry Pi 2 as regards multichannel audio DSP.
Fortunately, there is the Sitara_Linux_Audio_DAC_Example from T.I.
Unfortunately, that T.I. example is based on Ubuntu 12.04, and borrows heavily from a driver called the ALSA SoC SPDIF DIT driver.
In such an arrangement, the McASP is responsible for generating the bit clock and frame sync.
That's not a qualitative approach. Where is the 256 x Fs "rock solid" clock? Can the T.I. example cope with both 44.1 kHz and 48 kHz audio?
I thus fully support JMF11 in the search for an answer to his query: can low jitter be achieved with an STM32 microcontroller?
You may not have noticed: the STM32 lets us program it bare-metal in C, and possibly in assembler in some critical code sections, enabling us to escape the Linux and ALSA bloat and artificial limitations. This is fantastic value. It is worth making the effort. And when the result shows, it will immediately become a de-facto standard for qualitative audio DSP, from A to Z.
Regards,
Steph
Hi,
as you said, the RPi2 can work with I2S, which is good.
The RPi2 can work as I2S slave, which may reduce jitter.
To reduce the jitter, maybe another idea around packet sending and buffering:
It should be possible to send a packet of, say, 64 PCM values through the I2S port, aligned with the sample clock for the start of the send, but faster than the sample rate for sending the values (say around 10% faster). So there is a "fast" send, then a waiting time, a send, a waiting time, etc.
It should be possible to work with three kinds of working buffers in the STM32:
1/ read buffers. These are not synced per sample value, but per sample packet. It should be possible to manage several input packet buffers inside the STM32 to minimize the delay in ms, or to manage a single packet buffer aligned with the DSP code for simplicity (but with more delay).
2/ 'working' buffers (FIR, ...)
3/ a sending buffer, aligned with the correct output clock.
With these 3 big types of buffers, I think the delay may be (1/Fs) x 2 x packet size (without DSP functions) => about 3 ms at 44.1 kHz.
But with real-time DSP, it should not add delay to the DSP functions if the frames are bigger than the input packet size.
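As a sanity check on the delay estimate above, here is a small hosted C sketch; the 64-sample packet size is just the example value from this post, and the function name is mine:

```c
/* Minimum path delay of the packet-buffered scheme described above:
 * one full input packet must be received before processing, and one
 * full output packet must be sent, hence 2 x packet_size sample
 * periods at sample rate fs_hz. */
double packet_delay_ms(double fs_hz, int packet_samples)
{
    return 2.0 * packet_samples * 1000.0 / fs_hz;
}
```

For 64-sample packets at 44.1 kHz, this gives about 2.9 ms, which is the "3 ms" figure quoted above.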
Agree. Now look: the RPi features only one full-duplex I2S.
Now consider the STM32 Nucleo boards. The STM32F446RE Nucleo, STM32F446ZE Nucleo, and STM32F746ZG Nucleo feature 3x I2S and 2x SAI. They are thus superior as audio DSP hardware platforms. On top of this, it is feasible to program them bare-metal, possibly in assembler.
This being said, I deeply regret the messy example software in C that ST has provided. Thanks to the STM32CubeMX configuration utility, the most recent ST examples tend to look better. Allow another 6 months, and we'll be able to build on decent, structured code, optimally exploiting the STM32CubeMX configuration utility.
A BASIC interpreter may help in communicating with an STM32F7 Nucleo board, making it behave like an old-school BASIC computer interacting with you through a serial console. In such a context, the audio DSP routines shall not be interpreted by the on-board BASIC. Instead, they shall be treated as pre-compiled software payloads, some to sit after the chip initialization, some to sit as SPI or SAI interrupt service routines.
I'm talking about a BASIC interpreter, but as a first step, one can simplify it to the max, only implementing the PEEK and POKE instructions.
This way, you can edit the FIR filter coefficients and the IIR biquad filter coefficients without needing a full-blown integrated development environment like System Workbench for STM32, the free STM32 IDE. Instead, you operate using the serial console, possibly wirelessly using a Serial-BLE adapter (https://www.sparkfun.com/products/13729) or a Serial-WiFi adapter (https://www.sparkfun.com/products/13678).
Can somebody help implement the very beginning of an STM32F7 BASIC interpreter, only featuring the PEEK and POKE instructions? Kind of like ARMbasic.
Just to clarify: in my understanding, each SAI can manage 2 serial lines = 2x I2S, or SPDIF, or TDM...
So all this could add up to 7 I2S ports... Not too bad.
JMF
Maybe I'll find time for that. I hope next week.
In all cases, you need a client and a server.
I can propose 2 logics:
1/ the server communicates in 'English': function names, parameter names, values, etc. It is a good way to have less work with versioning; if you improve the code, it is more transparent.
2/ the server communicates with codes, e.g. 1 is the code for the first function, etc. This uses less memory, but sometimes you cannot upgrade the server code without the client.
Which is your favorite option?
You can see what byte is at a given address using the function PEEK. For example, this program prints out the first 21 bytes in the ROM (& their addresses).
10 PRINT "ADDRESS";TAB 8;"BYTE"
20 FOR A = 0 TO 20
30 PRINT A;TAB 8;PEEK A
40 NEXT A
Type
POKE 17300,57
This makes the byte at address 17300 have the value 57. If you now type
PRINT PEEK 17300
you get your number 57 back. (Try poking in other values, to prove that there's no cheating.)
Note that the address has to be between 0 & 65535; & most of these will refer to bytes in ROM or nowhere at all, & so have no effect. The value must be between -255 & +255, & if it is negative it gets 256 added to it.
The ability to poke gives you immense power over the computer if you know how to use it; however, the necessary knowledge is rather more than can be imparted in an introductory manual like this.
This is from Sinclair ZX81 BASIC Programming, by Steven Vickers (Second Edition 1981).
Imagine an STM32 having such a BASIC interpreter, communicating through a serial console.
The other machine can be a Windows PC executing the well-known "Windows Terminal" program.
The interpreted BASIC running inside the STM32 machine only needs the following features:
LINE NUMBERS
PRINT
TAB
PEEK
POKE
TEXT in ASCII delimited by QUOTE
TEXT as variable in ASCII separated by SEMICOLON
INTEGER variable (32 bits) expressed in DECIMAL or in HEX
FOR ... TO ... NEXT
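To give the discussion something concrete, here is a minimal sketch of such a PEEK/POKE command core, written in hosted C so it can be tried on a PC first. The function names are mine; on a real STM32, the `mem` array would be replaced by direct (volatile) access to the chip's address space:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy PEEK/POKE console core. On a real STM32 build, peek()/poke()
 * would cast the address to a volatile uint8_t pointer; here they
 * index a small array so the sketch stays hosted and safe. */
#define MEM_SIZE 65536u
static uint8_t mem[MEM_SIZE];

static uint8_t peek(uint32_t addr)              { return mem[addr % MEM_SIZE]; }
static void    poke(uint32_t addr, uint8_t val) { mem[addr % MEM_SIZE] = val; }

/* Interpret one console line: "POKE <addr>,<val>" writes a byte,
 * "PRINT PEEK <addr>" prints and returns it. Returns -1 otherwise. */
int interpret(const char *line)
{
    unsigned long addr, val;
    if (sscanf(line, "POKE %lu,%lu", &addr, &val) == 2) {
        poke((uint32_t)addr, (uint8_t)val);
        return -1;
    }
    if (sscanf(line, "PRINT PEEK %lu", &addr) == 1) {
        int v = peek((uint32_t)addr);
        printf("%d\n", v);
        return v;
    }
    return -1;
}
```

The line numbers, FOR loops and variables of the full feature list would sit on top of this core.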
Do you see the beauty of implementing this on an STM32 microcontroller?
Now watch this : https://www.youtube.com/watch?v=SiVKGx_VU6c
And this : https://www.youtube.com/watch?v=fgVgvxaueiY
Regards,
Steph
Well.... Basically, I think it is possible to do something more open.
PS: I know the ZX81... I know old CPUs... I made this series of games when I worked on old-school CPUs when I was young 🙂
https://www.youtube.com/watch?v=iiFZyScbwJg
https://www.youtube.com/watch?v=Sai8ptjJN1s
https://www.youtube.com/watch?v=YOzrBGZaJkE
Why did BASIC use line numbers? Many times when working with BASIC, you were actually working in BASIC itself. In particular, a given string was either a line number plus BASIC instructions, or a command to the BASIC interpreter such as RUN or LIST. The use of line numbers made it easy to distinguish code from commands: all code starts with numbers. This was the beauty of Sinclair ZX81 BASIC and Sinclair ZX Spectrum BASIC.
I'm not talking about brute performance. I'm not talking about graphics capabilities. I'm talking about the ease of exchanging raw information with a computer, including the possibility of educating it to process some raw information before actually presenting it on the serial console, in some ordered manner.
Take some time to think about this. Educate yourself. Resist the software bloat that's plaguing IT in general. Know the fundamentals, so you can recreate them.
A refreshing experience consists in reading the Sinclair ZX80 User Manual, originally entitled "A COURSE IN BASIC PROGRAMMING", by Hugo Davenport.
Retro Isle - Sinclair ZX80 Original Documents
No client, no server, no compiler.
Instead, a man working on a serial console, and a machine having a serial console interface.
Man and machine, speaking and interpreting a common language, which is BASIC in our case.
Now that we can rely on inexpensive 100 MHz microcontrollers featuring generous on-board Flash and RAM, we can try generalizing such a method.
I don't say it helps execute the audio DSP. I say it enables easy loading, changing and reading of the audio DSP parameters.
I think a modern way is to have an HMI with a workflow like SigmaStudio.
It is a nice language as well, and easier to understand.
Steph - I agree an interpreted language running on a Cortex-M4 would be totally cool; I've had this idea in my head for quite some time. But I'd not want it to be particularly BASIC-like. One way to handle it would be to use NXP's dual-core offering, as then the whole of the M4 core would be available to run DSP whilst the M0 would run the interpreter and user interface. The original dual-core parts from NXP didn't have I2S, but that has now changed with the LPC5411x range, which has two I2S available. These parts are, as far as I know, available on LPCXpresso boards, but I'm unclear whether the individual chips can be bought through distribution yet.
http://cache.nxp.com/documents/data_sheet/LPC5411X.pdf
Let me guess. Being "more open" would consist of allowing the user to add new features, like adding new BASIC keywords. Is that what you mean?
I don't care about execution speed. I prefer the remarkable features of a dumb line-by-line BASIC interpreter, albeit slow. If the user-defined BASIC keywords (to be interpreted) rely on already existing BASIC keywords (also to be interpreted), resulting in a very slow execution speed, that's not an issue for me.
Later on, one may think about speeding up the execution by adding a JIT compiler.
Later on, one may think about coupling some carefully hand-coded assembly to the new BASIC keywords we want to speed up to the max.
Graphics performance plays no role at this stage, as most of the time the audio DSP applications I'm targeting don't rely on a GUI. Most of the time, it is about sending new FIR filter coefficients and new IIR biquad filter coefficients. So, instead of sending a POKE instruction followed by a start address, followed by 20 to 200 32-bit coefficients, one may send a FIR_FILTER_LEFT instruction followed by the same 20 to 200 32-bit coefficients, without worrying about the start address. Why? Because the system would register the start address and associate it with a variable known as FIR_FILTER_LEFT. Just an example. The serial console wants a FIR filter somewhere in the DSP, and the STM32 deals with the housekeeping, like determining the address of the coefficients in the STM32 Flash memory, and determining the address of the audio data storage in STM32 RAM. That's the beginning of some intelligence. Getting symbolic handles, as one would say in Computer Science.
I like the idea of a graphical DSP compiler like SigmaStudio, DSP Concepts (from Paul Beckmann), or the Teensy Audio Library (from Paul Stoffregen, based on Node-RED). At the end of the day, you realize that the REAL INTELLIGENCE is the way the serial console "language" gets organized for describing a kind of netlist interconnecting processing blocks having inputs and outputs. There is no real intelligence in a GUI. A GUI is a convenient way to add bells and whistles. The interpretation of the "language" in use through the serial console is the REAL INTELLIGENCE of a system. I want such a "language" to be interpreted BASIC, and from there you'll see the evolution path, clear and painless. And still no GUI required. There may be a GUI, as a bonus, and that's what Paul Stoffregen has done, structuring the serial console to make it compatible with the IBM Node-RED specification (and tweaking it to disallow multiple flows converging on a same input port). There is an alternate specification you can build on, which is the LTspice netlist; no XML required there.
Symbolic handles = pointers, in plain language. The console doesn't need to know the pointer, only the symbol associated with it. It can be a NET_NAME or NET_NUMBER, part of a netlist.
As you can see, the interpreted BASIC would only be there for the setup, not for the audio DSP multiply-accumulates, of course.
The audio DSP will thus remain a precompiled payload to be installed at a certain address (code in STM32 Flash), dealing with certain audio data (data in STM32 RAM) known as a NET_NAME or NET_NUMBER (a pointer, for the STM32), delivering filtered audio on another NET_NAME or NET_NUMBER (another pointer, for the STM32).
Because of the native re-entrance support of a plain, simple, line-by-line interpreted BASIC initially dealing with PEEK and POKE in a simple system, the same interpreted BASIC (equipped, of course, with the required specific extensions) can supervise the setup of an evolutive system, provided the netlist and input/output blocks of such a system get properly described and structured. Such is my intuition. Do you follow me?
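As an illustration of such symbolic handles, here is a sketch of a tiny name-to-address table in C; the entries and addresses are invented for the example, not a real STM32 memory map:

```c
#include <stdint.h>
#include <string.h>

/* Minimal symbol table: the console speaks names such as
 * "FIR_FILTER_LEFT"; the firmware resolves each name to the pointer
 * (Flash or RAM address) it stands for. All entries are illustrative. */
typedef struct {
    const char *name;
    uint32_t    addr;
} symbol_t;

static const symbol_t symtab[] = {
    { "FIR_FILTER_LEFT",  0x08040000u },
    { "FIR_FILTER_RIGHT", 0x08041000u },
    { "ENV_IIR_CLIST",    0x08042000u },
};

/* Resolve a console symbol; returns 0 when the name is unknown. */
uint32_t resolve(const char *name)
{
    for (size_t i = 0; i < sizeof symtab / sizeof symtab[0]; i++)
        if (strcmp(symtab[i].name, name) == 0)
            return symtab[i].addr;
    return 0;
}
```

The console then says "FIR_FILTER_LEFT" followed by coefficients, and the firmware does the housekeeping of where they land.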
Good news on the async USB side: the proof of concept is now starting to work.
I can stream music from a Linux or Windows 10 machine to the STM32F4 Discovery without buffer overruns or underruns.
I monitor the length of my buffer, and have 4 thresholds: LowLow, Low, High, HighHigh.
If I reach High, I ask for a bit fewer samples per frame. If I reach Low, I ask for a bit more samples per frame. It roughly goes from one threshold to the other in 4 s.
With this simple strategy, the LowLow and HighHigh thresholds are never hit, which means that there are no buffer overruns or underruns, by a large margin.
Cool !!!
Now I have to set back the DSP feature, and solve my issue that it works with one channel, but not both. Bug somewhere...
JMF
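JMF11's four-threshold scheme above can be sketched in a few lines of C; the threshold values and function names below are invented for illustration, not taken from his firmware:

```c
/* Four-threshold feedback on the ring-buffer fill level, as described
 * above: when the buffer runs high, request fewer samples per USB
 * frame; when it runs low, request more. LowLow/HighHigh would flag
 * an underrun/overrun. All numeric values are illustrative. */
enum { LOWLOW = 16, LOW = 64, HIGH = 192, HIGHHIGH = 240 };

int adjust_samples_per_frame(int fill_level, int nominal)
{
    if (fill_level >= HIGH) return nominal - 1; /* draining the excess  */
    if (fill_level <= LOW)  return nominal + 1; /* refilling the buffer */
    return nominal;                             /* inside the dead band */
}

int buffer_alarm(int fill_level)
{
    return fill_level <= LOWLOW || fill_level >= HIGHHIGH;
}
```

A one-sample nudge per frame is what makes the fill level drift slowly (seconds) between thresholds, as reported above.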
And I have now applied a cascade of 12 biquads to the stereo signal, without any issue.
The music has been running non-stop for a few hours now, and all looks OK.
JMF
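Such a biquad cascade can be sketched as below, as a plain Direct Form I implementation in float; JMF11's actual STM32 code and coefficient format may differ:

```c
#include <stddef.h>

/* One Direct Form I biquad section, with a0 normalised to 1:
 *   y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]  */
typedef struct {
    float b0, b1, b2, a1, a2;  /* coefficients */
    float x1, x2, y1, y2;      /* state; zero-initialise before use */
} biquad_t;

float biquad_run(biquad_t *s, float x)
{
    float y = s->b0 * x + s->b1 * s->x1 + s->b2 * s->x2
            - s->a1 * s->y1 - s->a2 * s->y2;
    s->x2 = s->x1; s->x1 = x;
    s->y2 = s->y1; s->y1 = y;
    return y;
}

/* A cascade (e.g. 12 sections per channel) is just sections in series. */
float cascade_run(biquad_t *sect, size_t n, float x)
{
    for (size_t i = 0; i < n; i++)
        x = biquad_run(&sect[i], x);
    return x;
}
```

Each stereo channel gets its own array of sections, called once per sample.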
It is possible, for instance, to use Espruino with a nice remote GUI interface.
Lua and Python interfaces are available as well.
Let's be practical. Consider the above 12 IIR biquad filters in stereo. Is it feasible to implement a serial console on the STM32, enabling editing of all the IIR biquad filter coefficients while the USB async audio is running, and while the audio DSP is running?
About Javascript - Espruino
Here one can find Espruino, like you said: Developing in JavaScript for STM32 (JeeLabs).
I like their "BASIC, Reloaded" idea. But isn't Espruino overkill and overcomplex? At the end of the journey, can the STM32 grab user-entered data from the serial console, and can the STM32 write into Flash memory at the specific addresses that the audio DSP requires? Don't you think that the "abstraction search" obsession of modern languages prevents them from executing simple things like writing data into specific STM32 registers, writing data into specific STM32 Flash addresses, or toggling some STM32 GPIO? The wiki presentation, saying that Javascript got developed in ten days in May 1995 by Brendan Eich for Netscape for easing the creation of webpages, makes me wonder how one could exploit an STM32 Javascript in 2016 for interfacing with the real world, writing user-specified data into the STM32 Flash and RAM, or toggling some STM32 GPIOs. What's the footprint? Are there practical examples?
About Lua
Here one can find Lua like you said : Developing in Lua for STM32 JeeLabs.
The description of Lua makes me enthusiastic.
To JMF11:
- where do you store the IIR biquad coefficients? In Flash or in RAM?
- will you allow some external process, like Lua, to edit the IIR biquad coefficients?
To Camelator:
- how to add the Lua interpreter and serial console on the STM32 in such a way that it doesn't ruin JMF11's software, which is realtime stuff?
JMF11 could add a text section in the STM32 Flash memory, to be read by Lua and displayed on the serial console when the user asks Lua to describe the environment, aka "process.env" in Javascript.
Say the user enters the ENV command.
Lua executes the ENV command.
The Lua console would then reply:
Audio Signal Flow :
I2S1 left in -> IIR01 - IIR02 - IIR03 -> I2S1 left out -> DAC1 left out -> left speaker woofer
I2S1 left in -> IIR04 - IIR05 - IIR06 -> I2S1 right out -> DAC1 right out -> left speaker tweeter
I2S1 right in -> IIR07 - IIR08 - IIR09 -> I2S2 left out -> DAC2 left out -> right speaker woofer
I2S1 right in -> IIR10 - IIR11 - IIR12 -> I2S2 right out -> DAC2 right out -> right speaker tweeter
Coefficient sequence: a1, a2, b0, b1, b2, following the Digital_biquad_filter Wikipedia definition
Coefficient format: 32-bit (details to be provided here by JMF11)
ENV_IIR_CLIST: address in STM32 Flash memory where to find the start of the list of IIR biquad filter coefficients.
ENV_IIR_CLIST: this symbol is known by Lua as 000034h
Lua Console environment is now ready !
Lua Console GPIO one second toggle example :
pio.pin.setdir(pio.OUTPUT, pio.PB_0)
repeat
pio.pin.setval(1-pio.pin.getval(pio.PB_0), pio.PB_0)
tmr.delay(nil, 1000000)
until false
Lua Console IIR01, IIR04, IIR12 filters re-definition examples:
@((env_iir_clist)+00).setval(0.1258, 1.98536, 0.0036, 0.0, -0.0036)
@((env_iir_clist)+25).setval(0.1258, 1.98536, 0.0036, 0.0, -0.0036)
@((env_iir_clist)+55).setval(0.1258, 1.98536, 0.0036, 0.0, -0.0036)
This may help beginners experimenting with JMF11 software.
Please note, I have not taken the time to input meaningful data regarding the IIR filters re-definition. The resulting filters may be unstable.
I am including the "GPIO one second toggle" example for having a discussion :
- will it interfere and possibly block the audio DSP?
- how to kill the "GPIO one second toggle" once it got started?
Sooner or later, one can write some user-friendly software for a PC, tablet or smartphone, computing the IIR Biquad filter coefficients as a function of the user requirements. Such software would output the data just as a user would type it on the Serial Console. This can happen wirelessly, using a BLE-to-Serial or WiFi-to-Serial adapter.
Currently the biquad coefficients are in RAM, in a table as defined per the CMSIS spec.
The usage of the filters is in two steps: there is an init of the filter, then we can use it.
I imagine that modifying the coefficients needs a re-init of the filter (to be confirmed).
As of now, my plan is to clean a bit the code (will still be prototyping level) and try to push it on Github. Then people could see the code, branch, and so on.
Limiting factor is that I don't know Github and eGit.
JMF
Currently the biquad coefficients are in Ram in a table, as defined per CMSIS spec.
Should be in Flash instead of RAM. What happens when power-cycling the board?
Self-modifying code would be best, as with it you can avoid the pipeline refill cycles (an extra 1-3 cycles on the M4); also, one way to optimize the load and store instructions used with RAM tables is to pipeline them. DSP Concepts suggests calculating 4 output values in parallel and unrolling loops by a factor of 4 (I guess to optimize the pipeline), see http://www.dspconcepts.com/sites/default/files/white-papers/2011 AES - DSP vs Micro rev 2.pdf
Of course you want to persist the coeffs somewhere (doesn't matter where: Flash, SD card or something), but it's better to load them into RAM for faster processing, or to modify the code so that the coeffs are used as immediates in assembly instructions.