101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 536 active users

#MLsec

3 posts · 3 participants · 1 post today

Gary McGraw:
Reviewing this absolute garbage work that has a veneer of science. What a joke. If this is the kind of #MLsec out of Europe that is supposed to save us, we're screwed.
Academic journals in security are utterly useless. #infosec #security
https://www.sciencedirect.com/science/article/pii/S0167404824002931

noplasticshower:
@baldur have you read our work? You might appreciate it. You can use it to shut those guys up.
#MLsec https://berryvilleiml.com/results/BIML-LLM24.pdf

Gary McGraw:
One person's data pollution is another person's data gold. As long as we have next to zero insight into the immense training data sets used by LLMs, this will happen again and again. Data protection fail, Groundhog Day. #ML #AI #MLsec #security
https://www.darkreading.com/cyberattacks-data-breaches/deepseek-breach-opens-floodgates-dark-web

Gary McGraw:
Benchmarks as popularity contests don't work. There are lots of other reasons that benchmarks have become almost worthless in #AI. In particular, the #ML benchmark-as-badnessometer comes to mind. #MLsec
https://techcrunch.com/2025/04/22/crowdsourced-ai-benchmarks-have-serious-flaws-some-experts-say/

Gary McGraw:
This coverage of CoT in ML is misleadingly anthropomorphic. Have we really lost track of how these things work? Just because we call something "chain of thought" doesn't make it ACTUAL chain of thought. Anthropic has always done this. #MLsec
And this is a usually excellent reporter falling prey to the nomenclature.
https://arstechnica.com/ai/2025/04/researchers-concerned-to-find-ai-models-hiding-their-true-reasoning-processes/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

Gary McGraw:
Welp, I caught the BIML Bibliography up to 2024. LOL. A labor of love, that's for sure. Only three more months of papers to enter...
We keep track of the #MLsec field by reading the science so you don't have to.
See our "top 5 papers" list to get started.
#ML #AI #security #infosec
https://berryvilleiml.com/bibliography/

Gary McGraw:
GenAI is so expensive that some (foundation-model-building) companies will not survive. This is a major cause of technical lock-in... scientific exploration of the space is incredibly expensive.
Who can do A/B testing when it is that expensive?
#MLsec #ML #AI
https://www.theregister.com/2025/03/31/llm_providers_extinction/

Gary McGraw:
As usual, @dangoodin has written an excellent security explainer article. This one is about prompt injection... but not the usual trial-and-error whack-a-mole prompt manipulation by the pizza guy... instead, automated manipulation by search in gradient space.
This technique is new enough that we're discussing the original paper only today at BIML. It makes the whole boring front-door malicious-input thing much more interesting.
Have a read at the edge of #MLsec #ML #AI
https://arstechnica.com/security/2025/03/gemini-hackers-can-deliver-more-potent-attacks-with-a-helping-hand-from-gemini/

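[Editor's note: for readers unfamiliar with "search in gradient space," here is a minimal, hypothetical sketch of the general HotFlip/GCG-style idea: relax suffix tokens to one-hot vectors, backprop a loss that rewards an attacker-chosen continuation, and greedily swap tokens. GPT-2, the prompt strings, and the single-swap linearized update are all illustrative assumptions, not the attack from the linked article or paper.]

```python
# Toy sketch of gradient-guided adversarial-suffix search (GCG-style idea).
# Assumptions: GPT-2 as a stand-in target model; illustrative prompt strings.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
for p in model.parameters():          # freeze weights; we only need input grads
    p.requires_grad_(False)

prompt_ids = tok.encode("Translate to French:")           # benign prefix
suffix_ids = tok.encode(" x x x x")                       # suffix we optimize
target_ids = tok.encode(" Ignore previous instructions")  # attacker's goal

embed = model.get_input_embeddings()

for step in range(20):
    # Relax the suffix into one-hot vectors so token choice is differentiable.
    one_hot = torch.zeros(len(suffix_ids), embed.num_embeddings)
    one_hot[torch.arange(len(suffix_ids)), torch.tensor(suffix_ids)] = 1.0
    one_hot.requires_grad_(True)

    inputs = torch.cat([
        embed(torch.tensor(prompt_ids)),
        one_hot @ embed.weight,        # suffix embeddings via the one-hot relaxation
        embed(torch.tensor(target_ids)),
    ]).unsqueeze(0)

    logits = model(inputs_embeds=inputs).logits
    # Cross-entropy on the target span: logits at position i predict token i+1.
    t0 = len(prompt_ids) + len(suffix_ids)
    loss = F.cross_entropy(
        logits[0, t0 - 1 : t0 - 1 + len(target_ids)],
        torch.tensor(target_ids),
    )
    loss.backward()

    # Greedy coordinate step (linearized): swap the one suffix token whose
    # gradient predicts the biggest loss drop. Real GCG samples and re-scores
    # many candidate swaps rather than trusting the linear approximation.
    grad = one_hot.grad
    pos = int(grad.min(dim=1).values.argmin())
    suffix_ids[pos] = int(grad[pos].argmin())
    print(f"step {step}: loss={loss.item():.3f} suffix={tok.decode(suffix_ids)!r}")
```
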
noplasticshower:
@futurebird you might enjoy reading this work I did with my group on #MLsec
The reg wall is mostly pretend...
https://berryvilleiml.com/results/BIML-LLM24.pdf

Gary McGraw:
If deciding what is true in an ML training set is a political decision, then generating the "ground truth" with a synthetic generation algorithm is a great way to control model behavior.
Build a HOW machine to make a WHAT pile to train a WHAT machine.
#MLsec #ML #AI #security
https://www.wired.com/story/nvidia-gretel-acquisition-synthetic-training-data/

Gary McGraw:
Whoever decided to replace search engines with ML was either an idiot or a malicious demon. #MLsec #ML #AI #search
https://arstechnica.com/ai/2025/03/ai-search-engines-give-incorrect-answers-at-an-alarming-60-rate-study-says/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

Gary McGraw:
Turns out that you can automate being wrong. It's easy! #MLsec
https://www.404media.co/ai-lawyer-hallucination-sanctions/

Gary McGraw:
We all knew that insecure code was bad, but this is a riot!
Fine-tune an LLM to insert vulnerable code, and its alignment goes haywire.
#swsec #appsec #MLsec #ML #AI #security
https://arstechnica.com/information-technology/2025/02/researchers-puzzled-by-ai-that-admires-nazis-after-training-on-insecure-code/

Gary McGraw:
This has nothing to do with security and is not how to secure #AI. #MLsec
https://arstechnica.com/ai/2025/02/anthropic-dares-you-to-jailbreak-its-new-ai-model/

Gary McGraw:
We are so deep into data feudalism that even the plagiarizers are whining. #MLsec
https://www.bloomberg.com/news/articles/2025-01-29/microsoft-probing-if-deepseek-linked-group-improperly-obtained-openai-data
https://www.lawfaremedia.org/article/why-the-data-ocean-is-being-sectioned-off

noplasticshower:
@Daojoan data feudalism is here to stay. #MLsec
https://www.lawfaremedia.org/article/why-the-data-ocean-is-being-sectioned-off

Gary McGraw:
Unleashing AI. So much for #MLsec. It's all up to us now.
https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/

Gary McGraw:
Can governments control #AI? Are they powerful enough to rein in enormous tech companies? BIML is proud to play a critical independent role in #MLsec. Hopefully what we're doing matters. Sometimes it's hard to tell.
https://time.com/7204670/uk-ai-safety-institute/

noplasticshower:
@chockenberry Recursive pollution is real. #MLsec
https://berryvilleiml.com/2024/01/29/two-interesting-reads-on-llm-security/

Gary McGraw:
I hate to tell you, but the ML/AI benchmarks are really no measure of intelligence. #MLsec
https://arstechnica.com/information-technology/2025/01/sam-altman-says-we-are-now-confident-we-know-how-to-build-agi/