101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 492 active users

#openweight

Chi Kim<p>😲 DeepSeek-V3-4bit runs at &gt;20 tokens per second and &lt;200W using MLX on an M3 Ultra with 512GB. This might be the best and most user-friendly way to run DeepSeek-V3 on consumer hardware, possibly the most affordable too. You can finally run a GPT-4o level model locally, with possibly even better quality. <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/ML" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ML</span></a> <a href="https://mastodon.social/tags/DeepSeek" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepSeek</span></a> <a href="https://mastodon.social/tags/OpenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenAI</span></a> <a href="https://mastodon.social/tags/GPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPT</span></a> <a href="https://mastodon.social/tags/OpenWeight" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenWeight</span></a> <a href="https://mastodon.social/tags/OpenSource" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenSource</span></a> <a href="https://venturebeat.com/ai/deepseek-v3-now-runs-at-20-tokens-per-second-on-mac-studio-and-thats-a-nightmare-for-openai/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">venturebeat.com/ai/deepseek-v3</span><span class="invisible">-now-runs-at-20-tokens-per-second-on-mac-studio-and-thats-a-nightmare-for-openai/</span></a></p>
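A quick back-of-the-envelope check of the claim above, assuming DeepSeek-V3's published total of roughly 671B parameters (a mixture-of-experts model with ~37B active per token) and a ~10% guessed overhead for quantization scales and runtime buffers:

```python
# Rough memory estimate for running a 4-bit quantized DeepSeek-V3.
params = 671e9          # assumed total parameter count (MoE, ~37B active)
bits_per_param = 4      # 4-bit quantization
overhead = 1.1          # rough guess: ~10% for scales, KV cache, buffers

weights_gb = params * bits_per_param / 8 / 1e9
total_gb = weights_gb * overhead
print(f"weights ≈ {weights_gb:.0f} GB, with overhead ≈ {total_gb:.0f} GB")
```

The weights alone come to about 335 GB, so even with generous overhead the model fits within the M3 Ultra's 512 GB of unified memory, which is consistent with the post's claim.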
Stefano Zacchiroli<p>This week I'm in <a href="https://mastodon.xyz/tags/Montreal" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Montreal</span></a> to present at <a href="https://mastodon.xyz/tags/SANER2025" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SANER2025</span></a> recent joint work with A. Gurioli and M. Gabrielli, on how to recognize AI-generated code. Key differences in our results wrt the state-of-the-art are: (1) a multilingual approach that supports 10 different programming languages, and (2) a fully <a href="https://mastodon.xyz/tags/reproducible" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>reproducible</span></a> pipeline based on both <a href="https://mastodon.xyz/tags/OpenWeight" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenWeight</span></a> and <a href="https://mastodon.xyz/tags/OpenData" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenData</span></a> self-hosted LLMs. Talk info and <a href="https://mastodon.xyz/tags/OpenAccess" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenAccess</span></a> preprint at: <a href="https://conf.researchr.org/details/saner-2025/saner-2025-papers/36/Is-This-You-LLM-Recognizing-AI-written-Programs-with-Multilingual-Code-Stylometry" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">conf.researchr.org/details/san</span><span class="invisible">er-2025/saner-2025-papers/36/Is-This-You-LLM-Recognizing-AI-written-Programs-with-Multilingual-Code-Stylometry</span></a></p>
Prem Kumar Aparanji 👶🤖🐘<p><span class="h-card" translate="no"><a href="https://mastodon.de/@katzenberger" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>katzenberger</span></a></span> <span class="h-card" translate="no"><a href="https://mastodon.social/@silentexception" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>silentexception</span></a></span> </p><p>So, from that perspective, it is important that:<br>1. We find the appropriate job/task for which the <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> is indispensable, and not use it for everything<br>2. We try to reuse the LLM as much as possible rather than training again and again</p><p><a href="https://mastodon.social/tags/opensource" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>opensource</span></a> <a href="https://mastodon.social/tags/openweight" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>openweight</span></a> <a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/jtbd" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>jtbd</span></a> <a href="https://mastodon.social/tags/economics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>economics</span></a> <a href="https://mastodon.social/tags/foss" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>foss</span></a> <a href="https://mastodon.social/tags/genai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>genai</span></a></p>
Prem Kumar Aparanji 👶🤖🐘<p><span class="h-card" translate="no"><a href="https://mastodon.social/@silentexception" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>silentexception</span></a></span> ha ha! Yeah ... Well, it's a perennial cycle, isn't it?</p><p>Wait till the large lying models cause enough trouble for enterprises and the various Sovereign AI initiatives.</p><p><a href="https://mastodon.social/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://mastodon.social/tags/llms" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llms</span></a> <a href="https://mastodon.social/tags/GenAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GenAI</span></a> <a href="https://mastodon.social/tags/opensource" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>opensource</span></a> <a href="https://mastodon.social/tags/openweight" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>openweight</span></a></p>
Prem Kumar Aparanji 👶🤖🐘<p><span class="h-card" translate="no"><a href="https://witter.cz/@fredbrooker" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>fredbrooker</span></a></span> not even the 3 trillion tokens' worth of data in Dolma?</p><p>Or the more recent 2 trillion tokens in Common Corpus?</p><p><a href="https://huggingface.co/blog/Pclanglais/two-trillion-tokens-open" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">huggingface.co/blog/Pclanglais</span><span class="invisible">/two-trillion-tokens-open</span></a></p><p><a href="https://mastodon.social/tags/opensource" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>opensource</span></a> <a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/genai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>genai</span></a> <a href="https://mastodon.social/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://mastodon.social/tags/llms" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llms</span></a> <a href="https://mastodon.social/tags/openweight" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>openweight</span></a> <a href="https://mastodon.social/tags/openweights" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>openweights</span></a></p>