101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats:

575 active users

#riskmanagement

1 post · 1 participant · 0 posts today
Replied in thread

@elementary tl;dr I support your objectives, and kudos on the goal, but I think you should monitor this new policy for unexpected negative outcomes. I take about 9k characters to explain why, but I’m not criticizing your intent.

While I am much more pragmatic in my stance on #aicoding, this was previously a long-running issue of contention on the #StackExchange network that was never really effectively resolved outside of a few clearly egregious cases.

The triple-net is that when it comes to certain parts of software—think of the SCO copyright trials over header files from a few decades back—in many cases, obvious code will be, well…obvious. That “the simplest thing that could possibly work” was produced by an AI instead of a person is difficult to prove using existing tools, and false accusations of plagiarism have been a huge problem that has caused a number of people real #reputationalharm over the last couple of years.

That said, I don’t disagree with the stance that #vibecoding is not worth the pixels that it takes up on a screen. From a more pragmatic standpoint, though, it may be more useful to address the underlying principle that #plagiarism is unacceptable from a community standards or copyright perspective rather than making it a tool-specific policy issue.

I’m a firm believer that people have the right to run their community projects in whatever way best serves their community members. I’m only pointing out the pragmatic issues of setting forth a policy where the likelihood of false positives is quite high, and the level of pragmatic enforceability may be quite low. That is something that could lead to reputational harm to people and the project, or to community in-fighting down the road, when the real policy you’re promoting (as I understand it) is just a fundamental expectation of “original human contributions” to the project.

Because I work in #riskmanagement and #cybersecurity, I see this a lot; it comes up more often than you might think. Again, I fully support your objectives, but just wanted to offer an alternative viewpoint that your project might want to revisit down the road if the current policy doesn’t achieve the results that you’re hoping for.

In the meantime, I certainly wish you every possible success! You’re taking a #thoughtleadership stance on an #AIgovernance policy issue that matters to society and to #FOSS right now. I think that’s terrific!

🔓 Oracle finally admits to a major data breach—after being sued for hiding it.

Just days after being hit with a class-action lawsuit for allegedly covering up a major data breach, Oracle has begun privately notifying some customers of a security incident that compromised login credentials—including data from as recently as 2024.

Key highlights:
🔓 Hacker accessed usernames, passkeys, and encrypted passwords
💰 Extortion attempt reported
⏱️ Lawsuit claims Oracle failed to notify victims within 60 days
⚖️ Plaintiffs demand better security & transparency

Despite Oracle calling it an outdated system, the lawsuit points to risks that are very current. This is a critical moment for cloud providers to re-evaluate incident response protocols.

Full story: csoonline.com/article/3953644/

CSO Online · Oracle quietly admits data breach, days after lawsuit accused it of cover-up | By Gyana Swain

A colleague warned me today:
"Please stop thinking too long-term and too sustainable - you are damaging the company!"

I haven't laughed this hard since the last system failure. 🤖🤣
The picture captures corporate risk management and feature delivery perfectly:
short-term focus with a TikTok attention span.

#CorporateLogic #RiskManagement #ShortTermThinking
#SustainableThinking #SystemFailure #DarkHumor
#TechSatire #StartupCulture #EfficiencyKills #LeadershipGoals
#AutomationAddict #LongTermDamage

The Cyber Resilience Act shifts cybersecurity responsibility to manufacturers, requiring them to secure products before they’re sold. What does that mean for businesses and consumers? Listen in as we break it down! 🎙️👇

youtu.be/c30eG5kzqnY


🚀 NEW on We ❤️ Open Source 🚀

AI evaluation tools are a must for organizations deploying AI solutions. They provide real-time monitoring, risk assessment, and compliance tracking—ensuring AI remains ethical, secure, and effective.

John Willis explores why these tools are essential in modern AI governance. Read more: ⬇️

🔗 allthingsopen.org/articles/eva

openRiskScore is a #Python framework for risk scoring in both classic and federated/decentralized contexts.

The library aims to wrap popular machine learning frameworks as algorithmic backends and focuses on supporting high quality risk model development and maintenance.

Scoring tasks can be pursued either by a standalone entity (operating on its own data) or in #federation (independent entities sharing some data sets).

github.com/open-risk/openRiskS
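To make the idea concrete, here is a minimal, illustrative sketch of wrapping a scikit-learn backend behind a scorecard-style interface. This is my own example, not the openRiskScore API; the class and parameter names are hypothetical.

# Illustrative only: a minimal risk-scoring wrapper in the spirit described
# above, NOT the actual openRiskScore API. Assumes scikit-learn is installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

class RiskScorer:
    """Wraps an ML backend and maps default probabilities to a score scale."""

    def __init__(self, backend=None, base_score=600.0, pdo=20.0):
        self.backend = backend or LogisticRegression(max_iter=1000)
        self.base_score = base_score      # score at good:bad odds of 1:1
        self.pdo = pdo                    # points needed to double the odds

    def fit(self, X, y):
        self.backend.fit(X, y)
        return self

    def score(self, X):
        p = self.backend.predict_proba(X)[:, 1]          # probability of default
        odds = (1.0 - p) / np.clip(p, 1e-9, None)        # good:bad odds
        return self.base_score + self.pdo * np.log2(odds)

# Standalone use on a single entity's own (synthetic) data:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)
print(RiskScorer().fit(X, y).score(X[:5]))

The base_score/pdo scaling is the classic credit-scorecard convention; a federated setup would replace the local fit with an aggregation step across participating entities.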

Abandoned S3 Buckets are a goldmine for hackers!

Last week, we shared new research revealing the alarming risks of abandoned S3 buckets. Now, cybersecurity experts @sherridavidoff and @MDurrin share more details on this new threat and provide advice on how to reduce your risk from this attack tactic, which can expose you to supply chain compromises and remote code execution attacks.

Read our latest blog to learn how to protect your organization: lmgsecurity.com/abandoned-s3-b

LMG Security · Abandoned S3 Buckets: A Goldmine for Hackers | New research revealed a chilling reality: abandoned S3 buckets are a new attack vector. Learn more about these attacks & how to reduce your organization's risk.
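One practical first step (my own sketch, not code from the LMG post): inventory the bucket names your configs and pipelines still reference and check whether each one still exists. A name that comes back 404 is unclaimed and could be re-registered by an attacker. Assumes boto3 with working AWS credentials; the bucket names below are hypothetical.

# Flag S3 bucket names referenced in your configs that no longer exist.
import boto3
from botocore.exceptions import ClientError

referenced_buckets = ["example-build-artifacts", "example-old-installer-cdn"]  # hypothetical

s3 = boto3.client("s3")
for name in referenced_buckets:
    try:
        s3.head_bucket(Bucket=name)
        print(f"{name}: exists (verify it is still owned by your organization)")
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "404":
            print(f"{name}: NOT FOUND - a dangling reference an attacker could claim")
        elif code == "403":
            print(f"{name}: exists but is owned by another account - review the reference")
        else:
            raise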

Open Source AI Models are a growing cybersecurity risk.

Organizations are increasingly using AI models from repositories like Hugging Face and TensorFlow Hub—but are they considering the hidden cybersecurity risks? Attackers are slipping malicious code into AI models, bypassing security checks, and exploiting vulnerabilities.

New research shows that bad actors are leveraging open-source AI models to introduce backdoors, execute arbitrary code, and even manipulate model outputs. If your team is developing AI solutions, now is the time to secure your AI supply chain by:

🔹 Vetting model sources rigorously
🔹 Avoiding vulnerable data formats like Pickle
🔹 Using safer alternatives like Safetensors
🔹 Managing AI models like any other open-source dependency

As AI adoption skyrockets, you must proactively safeguard your models against supply chain threats. Check out the full article to learn more: darkreading.com/cyber-risk/ope

www.darkreading.com · Open Source AI Models: Big Risks for Malicious Code, Vulns | Companies pursuing internal AI development using models from Hugging Face and other repositories need to focus on supply chain security and checking for vulnerabilities.
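A minimal sketch of the Pickle-vs-Safetensors point above, using the torch and safetensors packages (my own illustration, not code from the article; the tensor names are made up):

# Prefer Safetensors over Pickle-based checkpoints for untrusted model weights.
import torch
from safetensors.torch import save_file, load_file

weights = {"linear.weight": torch.randn(4, 8), "linear.bias": torch.zeros(4)}

# Safetensors stores raw tensors plus metadata; loading cannot execute code.
save_file(weights, "model.safetensors")
restored = load_file("model.safetensors")

# By contrast, torch.load on an untrusted pickle-based checkpoint can run
# attacker-controlled code at deserialization time, so restrict it:
# torch.load("model.bin", weights_only=True)  # available in recent PyTorch versions

Safetensors files contain only tensor data, so loading them cannot trigger code execution the way unpickling an untrusted checkpoint can.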

#python #algotrading #algorithm
#riskmanagement #backtesting
#crypto #btc #bitcoin
#stock #finance #fintech
#technology

Follow-Up:
Make Bitcoin Great Again

Turtle BTC Algorithmic Trading with Technical Analysis & Backtesting in Python

Unlock the Potential of Turtle Trading in the BTC-USD Market

By using several quant algorithms to backtest the performance of the strategy in the BTC market, this study evaluates the PoS of BTC-USD.

#exploremore 👇

medium.com/@alexzap922/turtle-
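For readers who want to see the mechanics, here is a small, self-contained sketch of a Turtle-style Donchian-channel breakout backtest. It is illustrative only, not the article's code, and it runs on a synthetic BTC-like price series; substitute real BTC-USD closes to try the idea on market data.

# Turtle-style breakout backtest on a synthetic price series (illustration only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
close = pd.Series(30_000 * np.exp(np.cumsum(rng.normal(0.0005, 0.03, 1_000))),
                  index=pd.date_range("2022-01-01", periods=1_000, freq="D"),
                  name="close")

entry_high = close.rolling(20).max().shift(1)   # enter long on a 20-day breakout
exit_low = close.rolling(10).min().shift(1)     # exit on a 10-day low

position = pd.Series(np.nan, index=close.index)
position[close > entry_high] = 1.0              # long signal
position[close < exit_low] = 0.0                # flat signal
position = position.ffill().fillna(0.0)

daily_ret = close.pct_change().fillna(0.0)
strategy_ret = position.shift(1).fillna(0.0) * daily_ret

equity = (1 + strategy_ret).cumprod()
sharpe = np.sqrt(365) * strategy_ret.mean() / strategy_ret.std()
print(f"Total return: {equity.iloc[-1] - 1:.1%}, annualised Sharpe: {sharpe:.2f}")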

Published my 3rd blog post today! It's a book list about the politics of climate risk in urban environments and ways we should think about navigating them. I'll use my Bookshop.org affiliate links to donate to the California Fire Foundation’s Wildfire & Disaster Relief Fund.

My main goal, though, is to spark discussion. Join my growing community! Would love more book recommendations or topics to cover.

misaligned.markets/rising-risk

Misaligned Markets · Book list - Rising Risks: Navigating the Modern Risk Society | Complex risks, fueled by abstract and sometimes interrelated crises, are becoming a common feature of the modern world and our politics.

#python #algorithm #algotrading
#bitcoin #crypto #risk #riskmanagement #volatility #fintech #finance #testing
👉 Backtesting, Optimizing & Combining Multiple Algorithmic Trading Strategies Effectively: Bitcoin Use Case

👉 Integrated Techniques to Prevent Overfitting & False Confidence in Technical Analysis Performance Evaluation

#exploremore 👇

medium.com/@alexzap922/backtes
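One simple guard against the overfitting problem mentioned above is walk-forward evaluation: tune a parameter on one window of data and judge it only on the next, unseen window. The sketch below is my own illustration on synthetic prices, not the article's code.

# Walk-forward check: in-sample parameter picks are scored out-of-sample.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
close = pd.Series(30_000 * np.exp(np.cumsum(rng.normal(0.0005, 0.03, 1_200))))

def sma_strategy_return(prices, lookback):
    # Long whenever price is above its simple moving average; sum of daily returns.
    signal = (prices > prices.rolling(lookback).mean()).astype(float)
    return (signal.shift(1).fillna(0) * prices.pct_change().fillna(0)).sum()

window, oos_returns = 300, []
for start in range(0, len(close) - 2 * window, window):
    train = close.iloc[start:start + window]
    test = close.iloc[start + window:start + 2 * window]
    # Pick the lookback that did best in-sample...
    best = max(range(10, 110, 10), key=lambda lb: sma_strategy_return(train, lb))
    # ...then measure it out-of-sample to expose overfit parameter choices.
    oos_returns.append(sma_strategy_return(test, best))

print(f"Mean out-of-sample return per window: {np.mean(oos_returns):.2%}")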