Want to compress a video to a specific file size? Constrict is a new Linux tool, built in Python and GTK4 and powered by FFmpeg, that can do exactly that.
https://www.omgubuntu.co.uk/2025/07/constrict-linux-video-compressor-ffmpeg-gui-ubuntu
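Constrict's internals aren't shown in the article, but the standard trick such tools use is to derive a video bitrate from the target size and the clip's duration, then run a two-pass encode so the average bitrate actually lands on target. A minimal sketch in Python; the filenames, duration, and audio bitrate are placeholders:

```python
import subprocess

TARGET_MB = 10         # desired output size
DURATION_S = 63.0      # clip length; in practice read it via ffprobe
AUDIO_KBPS = 128       # fixed audio budget

# Split the size budget between audio and video.
total_kbits = TARGET_MB * 8 * 1024
video_kbps = int(total_kbits / DURATION_S) - AUDIO_KBPS

base = ["ffmpeg", "-y", "-i", "input.mp4",
        "-c:v", "libx264", "-b:v", f"{video_kbps}k"]

# Pass 1 only analyzes; pass 2 encodes against the collected statistics.
subprocess.run(base + ["-pass", "1", "-an", "-f", "null", "/dev/null"],
               check=True)
subprocess.run(base + ["-pass", "2", "-c:a", "aac", "-b:a",
                       f"{AUDIO_KBPS}k", "output.mp4"], check=True)
```

Two passes matter because a single-pass average-bitrate encode can overshoot on complex scenes; the first pass gives the encoder a complexity map to budget against.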
Ok, any #video folks out there who know how to do what I want to do? I don't know what words to search for because I don't know what this technique is called. Boosts welcome, suggestions welcome.
I have a pool cleaning robot. Like a roomba, but for the bottom of the pool. We call it poomba. Anyways, I want to shoot an MP4 video with a stationary camera (a GoPro) looking down on the pool while the robot does its work. So I will have this overhead video of like 3-4 hours.
I want to kinda overlay all the frames of the video into a single picture. So the areas where the robot drove will be dark streaks (the robot is black and purple). And any area the robot didn't cover would show the white pool bottom. Areas the robot went over a lot would be darker. Areas it went rarely would be lighter.
I'm just super curious how much coverage I actually get. This thing isn't a roomba. It has no map and it definitely doesn't have an internet connection at the bottom of the pool. (Finally! A place they can't get AI, yet!) It's just using lidar, motion sensors, attitude sensors and some kind of randomizing algorithm.
I think of it like taking every frame of the video and compositing it down with like 0.001 transparency. By the end of the video the things that never changed (the pool itself) would be full brightness and clear. While the robot's paths would be faint, except where it repeated a lot, which would be darker.
I could probably rip it into individual frames using #ffmpeg and then do this compositing with #ImageMagick or something (I'm doing this on #Linux). But 24fps x 3600 seconds/hour x 3 hours == about 260K frames. My laptop will take ages to brute force this. Any more clever ways to do it?
If I knew what this technique/process was called, I'd search for it.
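The effect described is usually called long-exposure stacking, or a temporal blend: a per-pixel mean (or min, i.e. "darken") across frames. ImageMagick can do exactly this over extracted frames with `-evaluate-sequence mean` or `min`, but a more laptop-friendly route is to stream the video and keep running statistics, so the ~260K frames never touch the disk. A rough sketch, assuming OpenCV (`pip install opencv-python`) and a placeholder filename:

```python
import cv2
import numpy as np

STEP = 24            # sample roughly one frame per second of 24 fps footage
cap = cv2.VideoCapture("pool.mp4")   # placeholder filename

acc = None           # running sum for the mean ("how often was it here?")
darkest = None       # running min for the pure path mask
n = 0
i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % STEP == 0:
        f = frame.astype(np.float64)
        acc = f if acc is None else acc + f
        darkest = frame if darkest is None else np.minimum(darkest, frame)
        n += 1
    i += 1
cap.release()

# Mean stack: static pool stays bright, robot paths darken with dwell time.
cv2.imwrite("coverage_mean.png", (acc / n).astype(np.uint8))
# Min stack: every pixel the dark robot ever covered stays dark.
cv2.imwrite("coverage_min.png", darkest)
```

Sampling one frame per second barely changes the result, since the robot moves slowly relative to the frame rate, and it cuts the work by a factor of 24.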
Can we manage per-package compile-from-source configure options with apt? #apt #compiling #ffmpeg #configure
A developer managed to reverse pixelation in video using FFmpeg, GIMP and edge detection - no AI involved.
By analyzing motion and edges across frames, they could reconstruct original content from blurred areas.
It’s a reminder: pixelation is visual, not secure.
Code & demo: https://github.com/KoKuToru/de-pixelate_gaV-O6NPWrI
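This is nothing like the linked project's pipeline, which is far more involved, but a toy 1D sketch of why the attack works at all: when content moves against a fixed pixelation grid, each frame's block averages sample the underlying signal at different offsets, and aligning and stacking the mosaics recovers detail a single frame appears to destroy:

```python
import numpy as np

B = 8                                        # pixelation block size
x = np.sin(np.linspace(0, 8 * np.pi, 256))   # stand-in for one image row

# Simulate a slow pan: the content slides 1 px per frame while the
# pixelation grid stays fixed, so every frame averages different pixels.
frames = []
for s in range(B):
    shifted = np.roll(x, -s)
    blocks = shifted.reshape(-1, B).mean(axis=1)
    frames.append((s, np.repeat(blocks, B)))  # the mosaic a viewer sees

# Undo the motion and stack all the mosaics on top of each other.
recon = np.mean([np.roll(m, s) for s, m in frames], axis=0)

# The stack is no longer blocky: it equals the original blurred by a
# small, exactly known kernel, which deconvolution could then remove.
print("mosaic error :", np.abs(frames[0][1] - x).mean())
print("stacked error:", np.abs(recon - x).mean())
```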
@WeirdWriter
Must do a search on how to use #FFmpeg with Linux!
Finally got #FFmpeg working as a fully functional screen recorder, and podcast recorder too! No more downloading third-party tools for things FFmpeg can do all by itself. Now maybe I can make PeerTube videos more easily.
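The post doesn't include the actual invocation, but on an X11 desktop a typical FFmpeg screen-plus-microphone capture is along these lines (device names and settings are assumptions, and Wayland setups need a different capture path):

```python
import subprocess

# Record the first X11 display plus the default PulseAudio source;
# ffmpeg stops when you press q or interrupt it with Ctrl-C.
subprocess.run([
    "ffmpeg",
    "-f", "x11grab", "-framerate", "30", "-i", ":0.0",  # the screen
    "-f", "pulse", "-i", "default",                     # the microphone
    "-c:v", "libx264", "-preset", "veryfast",
    "-c:a", "aac",
    "screencast.mkv",
])
```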
awesome!
In the long run it might also make the online transcoding tools with the "Start now" [to download malware] button obsolete.
Funnily enough, I talked about this [soon-to-be-solved] problem with my funder @clemensg by phone today.
#transcoding #encoding #video #browser #ffmpeg #webassembly #clientside #videoconverter
just et al. too
Please save us from uploading duplicate files, or journalists from writing alt text twice, with client-side content-ID comparison.
Just a thought, from a knuckle-dragging biology scientist. TL;DR: I believe there is scope to make the hosting of a peertube instance even more lightweight in the future.
I read some time ago of people using #webAssembly to transcode video in a user's web-browser. https://blog.scottlogic.com/2020/11/23/ffmpeg-webassembly.html
Since then, I believe #WebGPU has done/is doing some clever things to improve the browser's access to the device's GPU.
I have not seen any #peertube capability that offloads video transcoding to the user in this way.
I imagine, though, that this would align well with peertube's agenda of lowering the barrier to entry for web-video hosting, so I cannot help but think that this will come in time.
My own interest is seeing a #Piefed (activitypub) instance whose web-pages could #autotranslate posts into the user's own language using the user's own processing power... One day, maybe!
Thank you again for all your hard work; it is an inspiration.
Re: https://okla.social/@johnmoyer/114738149453494692
https://www.rsok.com/~jrm/2025May31_birds_and_cats/video_IMG_3666c_2.mp4
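# Ken Burns-style move over a still photo: scale and crop to 1200x800,
# blow the frame up to 8000 px wide, then let zoompan ratchet the zoom up
# by 0.005 per frame around a point left of and above centre; -t 30 caps
# the clip at 30 seconds of 30 fps H.265.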
ffmpeg -loop 1 -i IMG_3666cs_gimp.JPG -y \
  -filter_complex "[0]scale=1200:-2,setsar=1:1[out];[out]crop=1200:800[out];[out]scale=8000:-1,zoompan=z='zoom+0.005':x=iw/3.125-(iw/zoom/2):y=ih/2.4-(ih/zoom/2):d=3000:s=1200x800:fps=30[out]" \
  -vcodec libx265 -map "[out]" -map 0:a? \
  -pix_fmt yuv420p -r 30 -t 30 video_IMG_3666c_2.mp4
Small thing I noticed today: splitting `.mkv` files using `ffmpeg` via the `-ss` and `-t` options works great, but the resulting `.mkv` file reports the wrong number of frames in its metadata. Evidently the total frame count from the source file gets written in, rather than the count of frames actually left after splitting. Not a deal breaker, and it's easy to fix with a quick run through `mkvmerge`, but still just weird.
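For reference, the split plus fix-up described above amounts to something like this (times and filenames are placeholders):

```python
import subprocess

# Stream-copy a 5-minute clip starting at 10:00, then remux it so the
# container metadata matches the clip's real length.
subprocess.run(["ffmpeg", "-ss", "00:10:00", "-t", "00:05:00",
                "-i", "input.mkv", "-c", "copy", "clip.mkv"], check=True)
subprocess.run(["mkvmerge", "-o", "clip_fixed.mkv", "clip.mkv"], check=True)
```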
My first thought was: if #adverts are louder or more compressed or have background music when the rest of the podcast doesn't, you might be able to spot them in #Audacity just by zooming out and looking at sound levels.
My second thought was: if that can be done by eye then maybe it could be automated. The process would be:
1. Use #ffmpeg to convert the podcast to a raw stream of sound samples in a form that's trivial to read in your favourite programming language. Maybe you'd mix down to mono to make subsequent processing even easier. Maybe you'd reduce the sample rate to save processing cycles.
2. Write a bit of code to detect higher volume or more compressed audio (or whatever) and generate a second ffmpeg command to trim out the adverts, leaving only the programme.
3. Run that command.
4. Throw away the temporary file. (Or use a pipe in steps 1 and 2 so that the temporary file doesn't exist and you can run multiple programs in parallel.)
But this does all depend on how obnoxiously the adverts have been produced. If they're sonically similar to the programme then this obviously won't work.
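A sketch of steps 1 and 2, using a pipe so no temporary audio file is ever written; the filename and the RMS threshold are placeholders, and real adverts would likely need a smarter feature than raw loudness:

```python
import subprocess
import numpy as np

RATE = 8000                # mono, low sample rate: plenty for loudness
WIN = RATE * 5             # 5-second analysis windows

# Step 1: have ffmpeg pipe raw 16-bit mono PCM straight to us.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "podcast.mp3",
     "-ac", "1", "-ar", str(RATE), "-f", "s16le", "-"],
    stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
)

# Step 2: flag windows whose RMS level jumps above a threshold.
loud = []
t = 0
while True:
    buf = proc.stdout.read(WIN * 2)        # 2 bytes per s16 sample
    if len(buf) < WIN * 2:
        break
    x = np.frombuffer(buf, dtype=np.int16).astype(np.float64)
    if np.sqrt(np.mean(x * x)) > 4000:     # placeholder threshold
        loud.append(t)
    t += WIN // RATE

# Merging adjacent flagged windows into -ss/-t spans for the final
# ffmpeg trim command is the easy part left out here.
print("suspect advert windows start at (s):", loud)
```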
Answering my own question: yes, there is a difference.
I converted a track (the #BeachBoys again, yes) from 16-bit #hdcd to 24-bit with #ffmpeg, and the difference is audible even on my laptop's very shitty speakers. By the looks of it, some kind of compressor is being used.
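The poster's exact command isn't shown, but FFmpeg does ship an `hdcd` audio filter that decodes the HDCD peak-extension and gain hints hidden in 16-bit PCM into genuine 24-bit output; a plausible equivalent, with placeholder filenames:

```python
import subprocess

subprocess.run([
    "ffmpeg", "-i", "track_16bit.flac",
    "-af", "hdcd",        # decode HDCD peak extension / gain adjustment
    "-c:a", "flac",
    "track_24bit.flac",
], check=True)
```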
My "couldn't let it go" #ADHD project this week has been optimizing my faux #CUSeeMe webcam view for my #twitch streams and then making it more accurately emulate the original #ConnectixQuickCam:
https://github.com/morgant/CU-SeeMe-OpenBSD
Another project that has helped me better understand #ffmpeg & #lavfi, #mpv, and how to create complex filters.
Ladies and gentlemen, it's finally here...
**ttv** — Play videos directly in the terminal!
It works™ with a naive implementation.
Written in Rust & built with @ratatui_rs
GitHub: https://github.com/nik-rev/ttv
Usually, I'm quite happy with #Linux:
It just works, and I don't have to think about it at all.
But sometimes, I reckon: If it sucks, it sucks badly.
I use the command-line video tool #ffmpeg quite a lot, but yesterday it just stopped working, first producing wrong results and then weird error messages.
Internet search for the error messages gave lots of results from the 2010s and Reddit threads where the issue remained unresolved.
"OK", I thought, "Let's do what everyone does":
1/3