101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats:

544 active users

#ai

440 posts · 317 participants · 24 posts today

When I hear about AI-based programming, I think back several decades to a time when I was dealing with a hairy set of data, and I wrote a pretty complex bit of code generating an even more complex bit of SQL. I don't remember now if it ended up proving useful or not, though I think it did. But that's not the point.

The point was that when I came back to it after a few months ... I couldn't figure it out at all. Neither the generator, nor the generated code.

And I HAD WRITTEN IT. Myself, from scratch, sorting out what I wanted and how to get there.
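(For a sense of the shape of the thing, here's a purely hypothetical sketch of that sort of generator. Every name in it is invented for illustration; the real code is long gone and was considerably hairier.)

    # Hypothetical sketch only: a small function that assembles SQL from a
    # pile of conditionals. All table/column names are made up.
    def build_report_query(filters, group_cols, wide=False):
        select = ", ".join(group_cols + ["COUNT(*) AS n"])
        joins = ""
        if wide:
            # pull in a second (invented) table only for the "wide" report
            joins = " JOIN detail d ON d.case_id = c.id"
            select += ", SUM(d.amount) AS total"
        where = " AND ".join(f"c.{col} = '{val}'" for col, val in filters.items())
        return (
            f"SELECT {select} FROM cases c{joins}"
            + (f" WHERE {where}" if where else "")
            + f" GROUP BY {', '.join(group_cols)}"
        )

    # Months later the question isn't "does it run?" but
    # "why does it emit THIS query for THESE arguments?"
    print(build_report_query({"status": "open"}, ["region", "status"], wide=True))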

There's a principle in programming that debugging and maintenance are far harder than coding. Which means you should never write code that you are too stupid to debug and maintain. Which is precisely where I'd failed in my anecdote.

And of course, Management, in its infinite wisdom, typically puts far greater emphasis on new development than on testing, or Heavens Forefend!!! maintenance. So all the brightest talent (or so perceived, at any rate) goes to New Development.

(There's a great essay from about a decade ago, "In Praise of Maintenance," which you, and by "you" I mean "I", should really (re)read: freakonomics.com/podcast/in-pr).

With AI-based code generation, presuming it works at all, we get code that's like computer-chess or computer-Go (the game, not the lang). It might work, but there's no explanation or clarity to it. Grandmasters are not only stumped but utterly dispirited because they can't grok the strategy.

I can't count the number of times I've heard AI described as "search or solution without explanation", an idea I'd first twigged to in the late 2010s. That is, where scientific knowledge tells us about the causes of things, AI/ML/GD/LLM simply tells us the answer without being able to show its work. Or worse: even if it could show its work, that wouldn't tell us anything meaningful.

(This ... may not be entirely accurate, I'm not working in the field. But the point's been iterated enough times from enough different people at least some of whom should know that I tend to believe it.)

A major cause of technical debt is loss of institutional knowledge about how code works and which parts do what. I've worked enough maintenance jobs that I've seen this in organisations of every size and kind. At another gig, I'd cut the amount of code roughly in half just so I could run it in the interactive environment, which made debugging more viable. I never really fully understood what all of that program did (though I could fix bugs, make changes, and even anticipate some problems which later emerged). Funny thing was when one of the prior Hired Guns who'd worked on the same project before my time there turned up at my front door some years later ... big laughs from both of us...

But this AI-generated code? It's going to be hairballs on hairballs on hairballs. And at some point it's gonna break.

Which leaves us with two possible situations:

  • We won't have an AI smart enough to deal with the mess.
  • Or, maybe, we will. Which, as I think of the possibility whilst typing this, seems potentially even more frightening.

Though my bet's on the first case.

Freakonomics · In Praise of Maintenance (Replay)

What's a bit concerning about the various big #AI models is that, to get more generous usage limits, you have to pay for several different models. #Claude is far better for code, but its interface is less accessible and less intuitive; #ChatGPT knows a lot about me and I really like brainstorming with her (well, for me it's a "her"; for you it might be otherwise) or asking everyday questions, but with code… it's not as good as Claude; and then there are specialized models like v0.dev, or #Suno for music. I don't pay for any of them at the moment, but I feel that if I had spare money, I'd consider paying for several of them.
Or training my own local model(s) for my needs, which I'm also thinking about.

BBC uses AI face-swapping for a fiction-writing course, bringing the late novelist Agatha Christie back to the screen
AI "face-swap" technology has long been controversial, but the BBC recently used similar technology to recreate the voice and likeness of the late detective novelist Agatha Christie for a digital course teaching aspiring writers how to craft the perfect crime novel.
#人工智能 #AI #BBC #Deepfake
unwire.hk/2025/05/04/bbc-agath

Continued thread

Brave open sources Cookiecrumbler, an AI-powered tool to automatically detect and block cookie consent banners, currently running on Brave servers, with plans to move it to the browser:
betanews.com/2025/04/27/brave-

Kdenlive 25.04 released with AI-powered background removal plugin, OpenTimelineIO import/export, refactored audio thumbnail system, ability to change the duration of multiple adjacent clips in a single action and more:
alternativeto.net/news/2025/4/
(That background removal tool is actually a pretty good use of AI, and it runs locally, so no need to give Kdenlive Internet access.)
(The ability to change duration of multiple clips at once is also pretty useful, I might attempt to update Kdenlive to get that feature, and hope it doesn't break lol.)

Joplin 3.3 released with various accessibility improvements (keyboard navigation, screen reader support, higher contrast UI elements etc.), option to collapse or expand all notebook hierarchies with a single button, run multiple independent instances for better workspace management on desktop, new search dialog for quick linking of notes, Markdown auto-replacement in Rich Text Editor, improved focus handling in modals on mobile, support for attaching audio recordings to notes and enhanced voice typing on Android, redesigned "New Note" menu:
alternativeto.net/news/2025/4/

AdGuard's CLI adblocker for Linux reached version 1.0; it includes an app exclusion feature, differential filter updates, and an interactive setup wizard:
betanews.com/2025/04/29/how-to

OpenBSD 7.7 released with performance boosts, expanded hardware support, support for Scalable Vector Extension (SVE) and enabling PAC on hardware with the new QARMA3 cipher, improved support for running the system in QEMU, kernel improvements etc.:
alternativeto.net/news/2025/4/

(more FOSS news in comment)

BetaNews · Brave open sources Cookiecrumbler to make cookie consent blocking smarter
Brave just made a move that should make privacy enthusiasts pretty happy. The company has officially open sourced Cookiecrumbler, a tool designed to automatically detect and help block those obnoxious cookie consent banners you see across the Web. These pop-ups are not only annoying but, according to research, often track users even when they click reject. Cookiecrumbler aims to stop that nonsense while avoiding the headaches that can come with sloppy blocking rules.
#WeeklyNews #News #FOSS
Replied in thread

@davidgerard I've got to take exception to one statement you make:

AI is the last game in the casino.

It may be the last game of which we're presently aware. But I'm reasonably certain another grift will come along.

That's a minor nit, and this is good analysis.

NB: WNYC's On the Media interviewed Ed Zitron this past January. He makes a similar case to yours:

Silicon Valley over the years has leaned towards just growth ideas. What will grow, what can we sell more of? Except, they've chased out all the real innovators. To your original question, they didn't know what they were going to do. They thought that ChatGPT would magically become profitable. When that didn't work, they went, "Well, what if we made it more powerful and bigger? We can get more funding that way," so they did that.

wnycstudios.org/podcasts/otm/a (transcript available).

@peter

WNYC Studios · Brooke Talks AI With Ed Zitron | On the Media
How DeepSeek threatens to burst the American tech AI bubble.

Out of curiosity I switched on „Apple Intelligence“ and had my email list sorted and categorized. Boy, is that bad! And you can't even easily teach it manually (if at all) … 🙄

Why does commercial AI always confirm all these prejudices 🤷🏼‍♂️?
#Apple #AI

#Google #SearchEngines #AI #misinformation

"For over a month now, Google has been spreading lies about us. The text below was created by their generative AI tools and inserted into the first page search results for various searches for 'Clarkesworld' originating in the US.... Numerous people have submitted complaints on our behalf, including some Google employees, but this result continues to display."

neil-clarke.com/google-is-stil

neil-clarke.com · Google is still at it – Neil Clarke