Unlike other #Fediverse servers, we didn't need to "wait and see" before preventing #Meta from using our community's content to train their #LLMs. When corporations show you who they are, believe them.
https://www.dropsitenews.com/p/meta-facebook-tech-copyright-privacy-whistleblower
Geoint-R1: Formalizing multimodal geometric reasoning with dynamic auxiliary constructions. ~ Jingxuan Wei et al. https://arxiv.org/abs/2508.03173v1 #ITP #LeanProver #LLMs
**Meet President Willian H. Brusen from the great state of Onegon**
"_LLMs still struggle with accurate text within graphics_"
https://www.theregister.com/2025/08/08/gpt-5-fake-presidents-states/
Sometimes humans are just too stupid, and in those cases no chatbot in the world can help you... :-D
"A man gave himself bromism, a psychiatric disorder that has not been common for many decades, after asking ChatGPT for advice and accidentally poisoning himself, according to a case study published this week in the Annals of Internal Medicine.
In this case, a man showed up in an ER experiencing auditory and visual hallucinations and claiming that his neighbor was poisoning him. After attempting to escape and being treated for dehydration with fluids and electrolytes, the study reports, he was able to explain that he had put himself on a super-restrictive diet in which he attempted to completely eliminate salt. He had been replacing all the salt in his food with sodium bromide, a controlled substance that is often used as a dog anticonvulsant.
He said that this was based on information gathered from ChatGPT.
“After reading about the negative effects that sodium chloride, or table salt, has on one's health, he was surprised that he could only find literature related to reducing sodium from one's diet. Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet,” the case study reads. “For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning.”"
Is #chainofthought #Reasoning of #LLMs a Mirage?
"... Our results reveal that #CoT reasoning is a brittle mirage that vanishes when it is pushed beyond training distributions. This work offers a deeper understanding of why and when CoT reasoning fails, emphasizing the ongoing challenge of achieving genuine and generalizable reasoning.
... Our findings reveal that CoT reasoning works effectively when applied to in-distribution or near-in-distribution data but becomes fragile and prone to failure even under moderate distribution shifts.
In some cases, LLMs generate fluent yet logically inconsistent reasoning steps. The results suggest that what appears to be structured reasoning can be a mirage, emerging from memorized or interpolated patterns in the training data rather than logical inference.
... Together, these findings suggest that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text."
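The failure mode the authors describe (interpolating memorized traces instead of applying a rule) is easy to sketch. Below is a minimal toy in Python; it is not the paper's setup, just an illustrative stand-in: a "reasoner" that memorizes training chains answers almost perfectly in-distribution and collapses to zero accuracy under a modest length shift.

```python
# Toy illustration (NOT the paper's experiments): a "reasoner" that only
# memorizes (chain -> answer) pairs seen in training, probed in- and
# out-of-distribution. Task: summing multi-step chains of digits.
import random

random.seed(0)

def make_chain(length):
    """A chain like [3, 5, 2]; the correct answer is its sum."""
    return [random.randint(0, 9) for _ in range(length)]

def true_answer(chain):
    return sum(chain)

# "Training": memorize exact patterns, a stand-in for interpolating
# over training data rather than learning the underlying rule.
train = {}
for _ in range(5000):
    chain = tuple(make_chain(random.choice([2, 3])))  # lengths 2-3 only
    train[chain] = true_answer(chain)

def memorizing_reasoner(chain):
    """Answers only if this exact pattern was seen in training."""
    return train.get(tuple(chain))  # None if unseen -> wrong answer

def accuracy(lengths, trials=1000):
    hits = 0
    for _ in range(trials):
        chain = make_chain(random.choice(lengths))
        if memorizing_reasoner(chain) == true_answer(chain):
            hits += 1
    return hits / trials

print("in-distribution (len 2-3):", accuracy([2, 3]))  # ~0.95+
print("shifted (len 5):          ", accuracy([5]))     # 0.0
```

Running it shows near-perfect in-distribution scores and total failure on longer chains, which is the shape of the result the paper reports for CoT under distribution shift.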
@arstechnica Apple ‘Intelligence’ is one of the worst #LLMs I have seen. They still mark it BETA. No sane developer ships beta software in a MAJOR release, and after a while "beta" is just an excuse for releasing rubbish. Of course, the beta status now seems to appear only in fine print, which makes it worse. People who don't know any better rely on this. The #AI craze is a giant #Scam. Adding the letters "AI" doesn't make it AI, and it doesn't make it better.
Readings shared August 8, 2025. https://jaalonso.github.io/vestigium/posts/2025/08/09-readings_shared_08-08-25 #FunctionalProgramming #Haskell #ITP #LLMs #LeanProver #Logic #Math #Reasoning
"If you don't let me continue massively stealing I'll go bankrupt!" https://arstechnica.com/tech-policy/2025/08/ai-industry-horrified-to-face-largest-copyright-class-action-ever-certified/
#LLM #LLMs #GenAI
"A man gave himself bromism, a psychiatric disorder that has not been common for many decades, after asking ChatGPT for advice and accidentally poisoning himself, according to a case study published this week in the Annals of Internal Medicine. "
Scrutinizing LLM reasoning models (Researchers are seeking to learn the steps behind an LLM's inference process). ~ Emma Stamm. https://cacm.acm.org/news/scrutinizing-llm-reasoning-models/ #LLMs #Reasoning
Readings shared August 7, 2025. https://jaalonso.github.io/vestigium/posts/2025/08/08-readings_shared_08-07-25 #AI #ITP #LLMs #LeanProver #Math
#OpenAI's 'Jailbreak-Proof' New Models? Hacked on Day One
Hours after OpenAI released its first open-weight models in years with claims of robust safety measures, its GPT-OSS models were cracked by notorious AI jailbreaker Pliny the Liberator.
https://decrypt.co/333858/openai-jailbreak-proof-new-models-hacked