@tedyapo @poleguy @attie @rrmutt (This project is on my travel laptop, so I'm only working on it sporadically. D'oh.)
I did the CIC thing. I created samples where pulse present = 1 and absent = 0. Integrated, decimated, combed. The problem is that that's a DC signal. Integrators overflow on DC. I want to measure the DC bias (pulse density), so I can't just decay the integrators.
Suggestions welcome. (cont'd)
12/N
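For reference, a minimal sketch of the integrate → decimate → comb chain described above, in Python/NumPy; the rates, register width, and pulse stream are made-up assumptions. The usual CIC trick, as I understand it, is to let the integrators wrap in fixed-width two's-complement registers: with at least N·log2(R) + input bits of width, the comb stages cancel the wraparound, so the decimated output still tracks the average pulse density even though the input is essentially DC.

```python
import numpy as np

def cic_decimate(pulses, R=64, N=2, width=32):
    """Sketch of an N-stage CIC decimator on a 0/1 pulse stream.

    Registers wrap modulo 2**width, emulating fixed-width two's-complement
    hardware. With width >= N*log2(R) + input bits, the wraparound caused by
    the integrators on a DC-heavy input is cancelled by the comb stages.
    """
    mask = (1 << width) - 1
    x = np.asarray(pulses, dtype=np.int64)

    for _ in range(N):                  # N cascaded integrators (high rate)
        x = np.cumsum(x) & mask
    y = x[R - 1::R]                     # decimate by R
    for _ in range(N):                  # N cascaded combs (low rate)
        y = np.diff(y, prepend=0) & mask

    return y / float(R ** N)            # DC gain is R**N; divide -> pulse density

# quick check: a 30% pulse density in should come out as roughly 0.3
rng = np.random.default_rng(0)
pulses = (rng.random(64 * 200) < 0.3).astype(int)
print(cic_decimate(pulses)[-5:])
```

Viewed another way, this is just an N-fold moving average of length R, which is why the DC term (the pulse density) survives the decimation.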
ADC25 Call for Speakers Deadline Extended to July 6
There’s been a last-minute rush of registrations to the Talk Submissions Portal. We don’t want you to rush your proposals, so we’ve extended the deadline to July 6th.
audio.dev
Join Us In-person (Bristol UK) & Online 10-12 Nov
Submit your talk idea for this year's conference:
https://audio.dev/call-for-speakers/
Talks can be in-person (Bristol UK) or online. Only the talk title & abstract are needed to propose a talk.
Join Us 10-12 Nov 2025
I can recommend this talk: https://media.ccc.de/v/gpn23-302-sound-chip-whisper-me-your-secrets- (lots of #retrocomputing and #vintagecomputing in the #dsp audio-generation world)
I am happy to announce the release of a longwave software defined radio which I designed at work for experiments with #DSP algorithms, running on the #ULX3S #FPGA board. The user interface is based on #Mecrisp #Forth running on the #FemtoRV, and the signal chain contains a pipelined FFT designed by Dan Gisselquist. Many thanks to Ulixxe for their USB-CDC implementation!
https://github.com/mb-sat/ulx3s-longwave-sdr
https://codeberg.org/Mecrisp/ulx3s-longwave-sdr
can't believe this thing actually works...
There is a Markov chain in there somewhere and like 1 billion modulo calls per step.
Sometimes, #deconvolution is used by #CS folks to mitigate noise and distortion in an image, provided the response of the blurring or interference source (e.g. its point-spread function) can be measured (or modelled).
I wonder if #radar #EE folks have tried deconvolving the reflected signal with a measured (or modelled) topography of the operating area, so as to cure the ills caused by the ground clutter.
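For the imaging half of that thought, here is a toy Wiener-style frequency-domain deconvolution sketch in Python/NumPy. It only illustrates the general idea in 1-D, and makes no claim about how radar clutter suppression is actually done; the scene x, the kernel h, and the SNR value are all invented for the example.

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    """Estimate x from y = x * h + noise (1-D, circular convolution).

    Naive division Y/H blows up wherever H is small; the 1/snr term
    regularises those bins (a basic Wiener inverse filter).
    """
    n = len(y)
    H = np.fft.rfft(h, n)
    Y = np.fft.rfft(y, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(G * Y, n)

# toy example: a sparse "scene" blurred by a known (measured/modelled) response
rng = np.random.default_rng(1)
x = np.zeros(256)
x[[40, 90, 91, 200]] = [1.0, 0.7, 0.7, 0.5]
h = np.exp(-np.arange(16) / 4.0)                     # assumed smearing response
y = np.convolve(x, h, mode="full")[:256] + 0.01 * rng.standard_normal(256)

x_hat = wiener_deconvolve(y, h, snr=1e3)
print(np.round(x_hat[[40, 90, 91, 200]], 2))         # roughly recovers the amplitudes
```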
Today, some IT coders contend that modern software like MATLAB, Python, Julia, etc. makes an ordinary IT coder a DSP expert, especially when assisted by AI code generators. Others believe DSP is just an API comprising a handful of functions. Some go so far as to claim that modern AI has rendered #DSP—that which #EEs have practised for almost 70 years—all but obsolete.
I do wonder, though, if any of these code cutters has ever looked at a typical #DSP library and said, "Yeah, piece of cake; I got this":
• MATLAB DSP Toolbox
https://www.mathworks.com/help/signal/referencelist.html?type=function&s_tid=CRUX_topnav
• TMS320 DSP
https://www.ti.com/lit/ug/spru422j/spru422j.pdf?ts=1749352189606
• STM32 CMSIS-DSP
https://arm-software.github.io/CMSIS_5/DSP/html/index.html
• ESP32 DSP
https://docs.espressif.com/projects/esp-dsp/en/latest/esp32/esp-dsp-apis.html
Today I'm discussing how algorithmic reverbs work using the popular “Freeverb.” I give details on feedforward/feedback delays and allpass filters, and I include a Max/MSP patch to play with.
Hope it's helpful/interesting to someone!
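To make the two building blocks mentioned above concrete, here is a rough Python/NumPy sketch of a feedback comb (with a lowpass in the loop) and a Schroeder allpass. The delay lengths and gains below are placeholders, not the published Freeverb tunings; see the post and its Max/MSP patch for the real structure and values.

```python
import numpy as np

def feedback_comb(x, delay, feedback=0.84, damp=0.2):
    """Feedback comb with a one-pole lowpass in the loop (builds the tail)."""
    buf = np.zeros(delay)
    lp = 0.0
    y = np.zeros_like(x)
    for n in range(len(x)):
        out = buf[n % delay]                # signal delayed by `delay` samples
        lp = (1 - damp) * out + damp * lp   # damping: lowpass inside the loop
        buf[n % delay] = x[n] + feedback * lp
        y[n] = out
    return y

def schroeder_allpass(x, delay, g=0.5):
    """Schroeder allpass: smears echoes in time without colouring the spectrum."""
    buf = np.zeros(delay)
    y = np.zeros_like(x)
    for n in range(len(x)):
        d = buf[n % delay]                  # w[n - delay]
        w = x[n] + g * d
        y[n] = -g * w + d
        buf[n % delay] = w
    return y

# toy mono tail: parallel combs summed, then allpasses in series
# (delay lengths are placeholders, NOT the published Freeverb tunings)
fs = 44100
x = np.zeros(fs)
x[0] = 1.0                                  # impulse in
wet = sum(feedback_comb(x, d) for d in (1097, 1171, 1259, 1327)) / 4
for d in (547, 433):
    wet = schroeder_allpass(wet, d)
# RMS per 100 ms block: shows the decaying reverb tail
print(np.round([np.sqrt(np.mean(wet[i:i + 4410] ** 2)) for i in range(0, fs, 4410)], 4))
```

The combs set the decay time and the basic echo pattern; the allpasses multiply the echo density without (much) changing the overall spectrum.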
#audio #LiveCoding #syntax #zine #OpenCall
> Cyberflemme and the Cookie Collective join forces to create a collaborative fanzine dedicated to the syntax of audio livecoding.
> A simple track has been composed.
> Your mission: reproduce it using your favorite livecoding software.
The deadline got extended by a week (until 9th June), so I submitted an attempt with #Clive #C #DSP, which came out at a little over 100 lines of code.
I think I'm a code-jay
I like sitting at home jamming away, spending time tweaking things, able to think straight.
From-scratch is great and I'll always attempt that on occasion, but I'm happy to regurgitate code from recent sessions and not really write much on stage.
I love performance, but for now there is still too much adrenaline to be able to function clearly and keep track of what I'm doing.
Maybe these two things will meet in the middle. Also, I'm very much against the idea of 'polished' code.
For me this is analogous to how I would write music for guitar and play it back. Or program an Akai MPC and mix the loops live with a dub siren.
I guess that would be live-arranging?
Interestingly, I made Syntə with the intention of it being a 'live only' platform, and I love the idea of that. I just find it hard to write code when I'm excited or at various levels of exhaustion.
The ADC25 Talk Submissions Portal is Now Open!
Deadline for submitting your ADC25 talk proposal is June 29th! Accepted talks can be presented in-person (Bristol UK) or online.
Only a talk title and abstract are needed for talk submissions.
For more information:
https://audio.dev/call-for-speakers/
Some good #books on #DSP for #engineering undergrads:
for #EEs
• Digital Signal Processing, Oppenheim—the Bible
• Understanding Digital Signal Processing, Lyons—accessible, comprehensive, and practical
• Introduction to Digital Signal Processing, Kuc—dated but good
• Digital Filters, Hamming—a classic
• Wavelets and Filter Banks, Strang—straight from the Master
• Digital Signal Processing using ARM Cortex-M Microcontrollers, Ünsalan—STM32F4 bare-metal C programming using CMSIS-DSP library
for non-EEs
• The World According to Wavelets, Hubbard—easy overview by a non-STEM writer who interviewed all the big names in wavelet research
• The Scientist and Engineer's Guide to Digital Signal Processing, Smith—the gentlest
Bottom line: the more broadly read the student is, the more accessible the subject becomes.
Btw, here's another 6½-minute live performance from my talk @ #Resonate 2016. Alas, there were some cable issues on stage and I had to record it all afterwards in my hotel room. The setup was the same as in the first video from the previous post (i.e. STM32F746 DISCO + Korg Nanokontrol).
https://soundcloud.com/forthcharlie/stm32f7-live-recording-resonate
Just realised the readme for https://thi.ng/synstack (a C11 & Forth-based softsynth engine for #STM32) had broken links to two live performances/demos I created in early 2016. I've now fixed those and re-uploaded the videos to my Makertube/Peertube:
STM32F746 MIDI synth & Korg Nanokontrol (live recording, 2016-01-31)
https://makertube.net/w/6tYcSLrJdPfev8HNFNHVPj
STM32F746 synth GUI (live recording, 2016-01-28)
https://makertube.net/w/mbeSF3y2rs2xnx1Yv5fL5v
Okay #math and #dsp folks. Is there a noise function that I can initialize to generate a mode between two values?
Say I have a function f(x) which generates either 1 or 0 each time I call it. If I call it N times, sum the ones and zeros, and then divide by N, I want the result to be "P" (a number with 0 <= P <= 1).
A uniform distribution gets me 0.5; I want a distribution that centers around P.
Thoughts? Plan of attack? References?
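If I'm reading this right, it sounds like a plain Bernoulli (biased-coin) draw with parameter P: each call returns 1 with probability P and 0 otherwise, so the mean over N calls converges to P with a spread of roughly sqrt(P·(1−P)/N). A minimal sketch of that interpretation (the function names are mine):

```python
import random

def make_pulse_source(p):
    """Return a 0/1 generator whose long-run average is p (a Bernoulli source)."""
    def f():
        return 1 if random.random() < p else 0
    return f

f = make_pulse_source(0.3)
N = 100_000
print(sum(f() for _ in range(N)) / N)   # ~0.3, within about sqrt(0.3*0.7/N)
```

If a non-uniform underlying noise source is a requirement, thresholding any continuous noise at its P-quantile gives the same 0/1 statistics.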
Guess what, @7nihilate just released this whole-ass #Rust #DSP tool that's meant for cleaning voice audio. Its specialty is making your voice recordings clear enough that they can sound good even if you compress them heavily... but it's also a step toward better and more free hearing aid tech.
https://crates.io/crates/rustic_audio_tool
There's a GUI here:
https://github.com/brettpreston/Rustic_Audio
Crowdsourcing #DSP creativity!
You could help @gnuradio 's block documentation wiki a lot if you could try out our instructions for how to add example flowgraphs to Block Doc pages!
https://wiki.gnuradio.org/index.php?title=How_to_Add_an_Example_Flowgraph_to_a_Block_Doc
The documentation team (and countless contributors) have created 555 block documentation pages to date. Not every one of these has an example flowgraph that illustrates the useful things you can do with that block.
Could you help?
#gnuradio #sdr #docs #docstodon