101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 508 active users

#algos

0 posts · 0 participants · 0 posts today
SirHendrick: #YouTube now lets you look for a video, use all the correct keywords, and still come up blank. Then weeks later it puts what you were looking for in the list of suggested videos. They're fucking with us… #Algos #gaslighting
🎓 Doc Freemo :jpf: 🇳🇱: In case anyone wants to check it out, this is my dev computer. I work on massively parallel algorithms a lot of the time, so I need some really heavy-duty GPUs for GPGPU (those are 4 watercooled Radeon Vega Frontier Edition GPUs), a 32-core 3.7 GHz Ryzen ThreadRipper CPU, and 64 GB of the fastest RAM around. This baby is a **beast**.

inb4 "why the Radeon GPUs and not NVIDIA or an Intel chip?"... simple, they are better for many/most applications, but not all. The CPU and motherboard support the full number of channels to maximize throughput into and out of the GPUs. So while they have fewer cores compared to a Tesla, the cores don't tend to be the bottleneck, the I/O is, so I sacrificed cores to max out the I/O channels where the bottleneck tends to be. AMD and the Ryzen ThreadRipper CPU were the only ones doing that.

#GPGPU #Parallel #Parallelism #GPU #programming #AI #MachineLearning #Algo #algos #Algorithms @Science
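Since the post hinges on the compute-vs-I/O trade-off, here is a minimal roofline-style sketch of that reasoning. The peak-FLOPS and link-bandwidth figures are assumed round numbers (not measured specs for this machine), and the `bound` helper is purely illustrative.

```python
# Rough roofline-style check: a GPGPU kernel is I/O-bound when its arithmetic
# intensity (FLOPs per byte moved over the host-device link) falls below
# peak_flops / link_bandwidth. Numbers below are assumptions for illustration.

PEAK_FLOPS = 13e12   # assumed FP32 peak for one GPU, ~13 TFLOP/s
LINK_BW    = 16e9    # assumed PCIe 3.0 x16 host->device bandwidth, ~16 GB/s

def bound(flops_per_byte: float) -> str:
    """Classify a kernel as I/O-bound or compute-bound over the PCIe link."""
    break_even = PEAK_FLOPS / LINK_BW  # FLOPs per byte needed to hide transfers (~800 here)
    return "compute-bound" if flops_per_byte >= break_even else "I/O-bound"

if __name__ == "__main__":
    # Element-wise ops stream ~1 FLOP per 8 bytes: hopelessly I/O-bound,
    # so extra GPU cores don't help but extra PCIe lanes/channels do.
    print(bound(1 / 8))    # -> I/O-bound
    # Large dense matrix multiplies reuse data heavily (thousands of FLOPs/byte).
    print(bound(2000.0))   # -> compute-bound
```

The point the sketch makes is the same one the post makes: for low-intensity workloads the host-device channels, not the core count, set the ceiling.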