101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 508 active users

#AIGovernance


@elementary tl;dr I support your objectives, and kudos on the goal, but I think you should monitor this new policy for unexpected negative outcomes. I take about 9k characters to explain why, but I’m not criticizing your intent.

While I am much more pragmatic about my stance on #aicoding, this was previously a long-running issue of contention on the #StackExchange network, one that was never effectively resolved outside of a few clearly egregious cases.

The bottom line is that when it comes to certain parts of software (think of the SCO copyright trials over header files from a couple of decades back), in many cases, obvious code will be, well…obvious. That “the simplest thing that could possibly work” was produced by an AI instead of a person is difficult to prove using existing tools, and false accusations of plagiarism have caused a number of people real #reputationalharm over the last couple of years.
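
To make the false-positive problem concrete, here is a minimal, hypothetical sketch (the clamp() helper, the variable names, and the idea of a character-level similarity check are all invented for illustration; this is not any tool elementary or Stack Exchange actually uses). Two independently written versions of an “obvious” helper look like near-copies to a naive similarity score:

```
# Hypothetical illustration: two clean-room implementations of the
# "simplest thing that could possibly work" converge on near-identical
# code, so a naive similarity check flags independent work as copied.
import difflib

# Imagine two contributors (or a contributor and an AI) each writing
# the obvious clamp() helper without ever seeing the other's version.
version_a = """def clamp(value, low, high):
    if value < low:
        return low
    if value > high:
        return high
    return value
"""

version_b = """def clamp(val, lo, hi):
    if val < lo:
        return lo
    if val > hi:
        return hi
    return val
"""

# ratio() returns 0.0-1.0; these two independent versions score very
# high despite neither author having copied anything.
similarity = difflib.SequenceMatcher(None, version_a, version_b).ratio()
print(f"similarity: {similarity:.2f}")
```

Any policy that leans on that kind of signal will accuse innocent contributors sooner or later, which is exactly the reputational-harm scenario described above.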

That said, I don’t disagree with the stance that #vibecoding is not worth the pixels that it takes up on a screen. From a more pragmatic standpoint, though, it may be more useful to address the underlying principle that #plagiarism is unacceptable from a community standards or copyright perspective rather than making it a tool-specific policy issue.

I’m a firm believer that people have the right to run their community projects in whatever way best serves their community members. I’m only pointing out the pragmatic issues of setting forth a policy where the likelihood of false positives is quite high, and the level of pragmatic enforceability may be quite low. That is something that could lead to reputational harm to people and the project, or to community in-fighting down the road, when the real policy you’re promoting (as I understand it) is just a fundamental expectation of “original human contributions” to the project.

Because I work in #riskmanagement and #cybersecurity, I see this a lot; it comes up more often than you might think. Again, I fully support your objectives, but I wanted to offer an alternative viewpoint that your project might want to revisit down the road if the current policy doesn’t achieve the results you’re hoping for.

In the meantime, I certainly wish you every possible success! You’re taking a #thoughtleadership stance on an #AIgovernance policy issue that matters to society and to #FOSS right now. I think that’s terrific!

🤝 Joined the AI Verify Foundation to strengthen our commitment to responsible AI testing 💚

The AI Verify Foundation, backed by Singapore's IMDA, brings together the global open-source community to develop AI testing frameworks and promote industry standards. Their work is crucial as organizations worldwide seek concrete ways to implement responsible AI practices.
[1/2] 👇

Predictive AI technology has a well-known issue: how do we know a component is reliable, and can we trust the result enough to use it in our work? Learn more about how DW Innovation has found ways to make an AI-powered service more transparent and secure, assisting not only end users but also the process of AI governance.

innovation.dw.com/articles/ai-

innovation.dw.com · AI in Media Tools: How to Increase User Trust and Support AI Governance – Principles like Trustworthy AI are well documented, but what about their implementation?

Interesting data from a new edition of the Foundation Model Transparency Index, collected six months after the initial index was released.

Overall, there's big improvement, with the average score jumping from 37 to 58 points (out of 100). That's a lot!

One interesting fact is that the researchers contacted developers and solicited the data directly – interactions count.

More importantly, there is little improvement, and little overall transparency, in the category the researchers describe as "upstream": the data, labour, and compute that go into training. And "data access" gets the lowest score of all the parameters.
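
As a toy illustration of how a headline average can mask that upstream gap, here is a short sketch with invented category scores (only the 37 and 58 averages come from the post above; the per-category numbers and the three-category split are made up for the example, not the index's real data):

```
# Hypothetical numbers for illustration only -- not the actual FMTI data.
# Shows how an overall average can jump (37 -> 58) while an "upstream"
# category such as data access barely moves.
scores_initial = {"upstream": 20, "model": 45, "downstream": 46}
scores_followup = {"upstream": 24, "model": 72, "downstream": 78}

def overall(scores):
    """Unweighted mean across categories, as a simple aggregate."""
    return sum(scores.values()) / len(scores)

print(f"initial overall:  {overall(scores_initial):.0f}")   # 37
print(f"follow-up overall: {overall(scores_followup):.0f}")  # 58
print(f"upstream change:   {scores_followup['upstream'] - scores_initial['upstream']} points")
```

A big jump in the aggregate is fully compatible with near-stagnation in the one category that matters most for accountability.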

More at Tech Policy Press: techpolicy.press/the-foundatio

Tech Policy Press · The Foundation Model Transparency Index: What Changed in 6 Months? – Fourteen model developers provided transparency reports on each of 100 indicators devised by Stanford, Princeton, and Harvard researchers.