101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 484 active users

#mlx

0 posts · 0 participants · 0 posts today
Pepijn Bruienne
Cobbled together an #ExoLabs cluster to fuck around with #devstral a bit, since it's kinda too big for my M3 Max daily driver. While in the process of bringing up nodes, the model hit a bug in the #MLX #Python module that deals with inference model sharding, related to passing around MLX vs NumPy data structures.

For shits and giggles, and also not being a top-tier #Numpy data structure debugging guy, I asked Devstral to look at the bug and figure out a fix. After one wrong turn it came up with a fix, which I applied to the other nodes, and now it's happily sharding the bigger Devstral models. Not sure about vibe coding as a social contagion, but from a “How close are we to #Skynet” perspective I think we're cooked, chat.

Anyway, enjoy your Memorial Day weekend 🎉

Figure 1. A very heterogeneous Exo cluster.
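The class of bug described above usually lives at the boundary where sharded tensors leave one process as NumPy buffers and re-enter another as MLX arrays. A minimal sketch of that interop, assuming nothing about Exo's actual sharding code (both helper names below are hypothetical):

```python
import mlx.core as mx
import numpy as np

def shard_for_transport(a: mx.array) -> np.ndarray:
    # Hypothetical helper: MLX arrays support the buffer protocol,
    # so np.array() copies the (evaluated) data out for serialization.
    return np.array(a)

def restore_from_transport(buf: np.ndarray) -> mx.array:
    # Hypothetical helper: mx.array() accepts NumPy arrays directly;
    # the data lands in unified memory on the receiving node.
    return mx.array(buf)

x = mx.arange(8).reshape(2, 4)
roundtrip = restore_from_transport(shard_for_transport(x))
assert mx.array_equal(x, roundtrip).item()  # shape and dtype must survive
```

Mixing the two types without converting at the boundary (for example, feeding an mx.array into NumPy code that hands back an np.ndarray where an mx.array is expected) is exactly the kind of mismatch a sharding layer can trip over.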
B166IR
https://youtu.be/J4qwuCXyAcU

This video compares Ollama and LM Studio (GGUF), showing that their performance is quite similar, with LM Studio’s tok/sec output used for consistent benchmarking.

What’s even more impressive? The Mac Studio M3 Ultra pulls under 200 W during inference with the Q4 671B R1 model. That’s quite amazing for such performance!

#LLMs #AI #MachineLearning #Ollama #LMStudio #GGUF #MLX #TechReview #Benchmarking #MacStudio #M3Ultra #LocalLLM #AIbenchmarks #EnergyEfficient #linux
Sergio ‘shadown’ Alvarez
Many asked the same question about #MLX being GPU-optimized for Apple in my previous post, so… here you have #MLX (Apple) vs #MLC (Android), which is optimized for the Android GPU. Again comparing the iPhone 14 Pro (Sep 2022) vs the Pixel 8 (Oct 2023).
Model: Llama 3.2 3B Instruct Q4.
Sergio ‘shadown’ Alvarez
iPhone 14 Pro (Sep 2022) with #MLX vs Pixel 8 (Oct 2023) with llama.cpp…
Sergio ‘shadown’ Alvarez
Unleashing the power of #mlx even more in #NetGesuchtAI: an API server compatible with ollama is coming soon. @radareorg hooked it up to #r2ai and has been playing with it. Agent support is coming next… /cc @awnihannun
Btw, system prompts, scripts, and agents can be imported and exported!
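For context, "compatible with ollama" means any client that already speaks Ollama's HTTP API should work unchanged. A minimal sketch of such a client call, assuming the server exposes Ollama's standard /api/generate route on the default port; the model name is a placeholder:

```python
import json
import urllib.request

# An Ollama-style generate request; any Ollama-compatible server
# should accept the same payload on the same route.
payload = {
    "model": "devstral",  # placeholder model name
    "prompt": "Summarize what unified memory buys MLX in one sentence.",
    "stream": False,      # request a single JSON response, not a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default port
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```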
Markus Eisele
Deploying LLMs locally with Apple’s MLX framework https://towardsdatascience.com/deploying-llms-locally-with-apples-mlx-framework-2b3862049a93
#aiml #llm #apple #mlx
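The linked article covers this in depth; the usual entry point is the mlx-lm package. A minimal sketch, assuming mlx-lm is installed (pip install mlx-lm); the model id is just an example from the mlx-community Hugging Face organization, not one the article necessarily uses:

```python
from mlx_lm import load, generate

# Downloads and caches the converted weights on first use.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Why does Apple silicon suit local LLM inference?"
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```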
Krzysztof Kołacz
Apple has released MLX, a "machine learning framework for Apple Silicon processors."

> MLX was designed by machine learning researchers, for machine learning researchers. It is meant to be user-friendly, yet still efficient for training and deploying new models. The project is also conceptually simple. We intend to make it easy for researchers to extend and improve MLX, with the goal of quickly exploring new ideas.

– reads Apple's official documentation: https://ml-explore.github.io/mlx/build/html/index.html

Key features:

- Familiar API: MLX has a Python API that closely follows NumPy. MLX also has a fully featured C++ API that closely mirrors the Python API. MLX has higher-level packages such as mlx.nn and mlx.optimizers, with APIs that closely follow PyTorch, to simplify building more complex models.
- Composable function transformations: MLX has function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.
- Lazy computation: Computations in MLX are performed on demand. Arrays are only materialized when needed.
- Dynamic graph construction: Computation graphs in MLX are built dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging stays simple and intuitive.
- Multi-device support: Operations can run on any of the supported devices (currently the CPU and the GPU).
- Unified memory: A notable difference between MLX and other frameworks is the unified memory model. Arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without moving data.

(A short code sketch of these features follows this post.)

> Just in time for the holidays, we are releasing some new software today from Apple machine learning research.
>
> MLX is an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)
>
> Code: https://t.co/Kbis7IrP80
> Docs: https://t.co/CUQb80HGut
>
> — Awni Hannun (@awnihannun) December 5, 2023

https://imagazine.pl/2023/12/07/apple-wydalo-framework-dla-uczenia-maszynowego-mlx/

#framework #MLX #programowanie #uczenieMaszynowe
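A minimal sketch tying the feature list above to MLX's documented Python API: the NumPy-like surface, a composable grad transformation, lazy evaluation, and a unified-memory op placed on the CPU stream. The linear-model loss and random data are made up for the example:

```python
import mlx.core as mx

def loss(w, x, y):
    # Mean squared error of a linear model, written NumPy-style.
    return mx.mean((x @ w - y) ** 2)

# Composable transformation: mx.grad returns a new function
# that computes d(loss)/dw.
grad_fn = mx.grad(loss)

x = mx.random.normal((32, 4))
y = mx.random.normal((32,))
w = mx.zeros((4,))

g = grad_fn(w, x, y)  # lazy: this only records the computation
mx.eval(g)            # the gradient is materialized here (GPU by default)

# Unified memory: the same array feeds a CPU-stream op
# with no explicit device transfer.
print(mx.sum(g, stream=mx.cpu).item())
```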