No Server, No Database: Smarter Related Posts in Astro With transformers.js, by @alexvue.bsky.social:
https://alexop.dev/posts/semantic-related-posts-astro-transformersjs/
You know, I think recent versions of iOS do lots of (fairly good, apparently) OCR on the fly when handling images, so I think people are unlearning the difference between images and text.
Oh, woe. XD XD XD
FOSS Android only has very poor/limited (Tesseract-based, probably) OCR options. Maybe we'll get some "AI"-based OCR options in the near future, like how woheller69 has added a bunch of #huggingface-based AI apps to F-Droid, like a speech-to-text app and a birdsong recognizer.
DeepSeek Debuts Upgrade to AI Model That Improves Reasoning https://www.byteseu.com/863743/ #AI #ArtificialIntelligence #DeepSeek #GenAI #HuggingFace #Innovation #News #PYMNTSNews #Technology #What'sHot
#Alibaba releases #OpenSource reasoning model QwQ-32B on #HuggingFace and #ModelScope, claiming comparable performance to #DeepSeek R1 but with lower #compute needs
RT @lewoniewski: #HuggingFace dataset with quality assessment of 47 million #Wikipedia articles in 55 languages
https://t.co/5xu6CQH7wT ht…
via https://twitter.com/WikiResearch/status/1897411647297478782
New Release! A Hands-On Guide to Fine-Tuning Large Language Models with PyTorch and Hugging Face by Daniel Voigt Godoy
A practical guide to fine-tuning Large Language Models (LLMs), offering both a high-level overview and detailed instructions on how to train these models for specific tasks. Find it on Leanpub!
"The #opensource clone is already racking up comparable benchmark results. After only a day's work, #HuggingFace's Open Deep Research has reached 55.15% accuracy on the General AI Assistants (GAIA) benchmark, which tests an #AI model's ability to gather & synthesize information from multiple sources. #OpenAI's #DeepResearch scored 67.36% accuracy on the same benchmark." http://arstechnica.com/ai/2025/02/after-24-hour-hackathon-hugging-faces-ai-research-agent-nearly-matches-openais-solution
Nothing planned today? Try out an open-source LLM on Hugging Face. If the model is small enough, you can experiment without expensive infrastructure.
What do you need for it?
- A Hugging Face account
- An open-source LLM (e.g. the IBM Granite models)
- A Hugging Face Space with a Secret for the API token
- Two files: requirements.txt and app.py
Note: Smaller models (~1B parameters) are currently less accurate on complex tasks, but the trend is shifting toward more efficient, high-performing small models.
Here is the complete step-by-step guide: https://medium.com/data-science-collective/how-to-try-out-an-open-source-ai-model-without-costly-hardware-and-what-to-know-about-ibm-84c00f778ac6#f8f2
Friend link, if you don't have a paid Medium subscription: https://medium.com/data-science-collective/how-to-try-out-an-open-source-ai-model-without-costly-hardware-and-what-to-know-about-ibm-84c00f778ac6?sk=85c4845a4cf787592948d256653670e5
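The ingredients above can be sketched as a tiny app.py that talks to a hosted model through the Hugging Face Inference API. This is a minimal sketch, not the guide's exact code: the model id and the secret name HF_API_TOKEN are my assumptions, and requirements.txt would simply list huggingface_hub (plus gradio or streamlit if you want a UI on top).

```python
# app.py — minimal sketch, assuming the Space secret is named HF_API_TOKEN
# and using an IBM Granite model id purely as an illustrative example.
import os

from huggingface_hub import InferenceClient

# Space Secrets are exposed to app.py as environment variables.
client = InferenceClient(
    model="ibm-granite/granite-3.0-2b-instruct",  # illustrative model id
    token=os.environ.get("HF_API_TOKEN"),
)


def ask(prompt: str, max_tokens: int = 256) -> str:
    """Send a single chat prompt to the hosted model and return its reply."""
    response = client.chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content


# Usage (requires a valid token; performs a network call):
# print(ask("Summarize what an open-source LLM is in one sentence."))
```

Because the model runs on Hugging Face's side, the Space itself needs no GPU; the token stays out of the code by living in the Space's Secret settings.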