101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 497 active users

#ais

Schneier on Security · AIs as Trusted Third Parties

This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties:

Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computations or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in terms of the size and complexity of applications they can be used for. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thus enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to achieve a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases that are enabled by TCME, and show that even some simple classic cryptographic problems can already be solved with TCME. Finally, we outline current limitations and discuss the path forward in implementing them...
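To make the TCME idea concrete, here is a minimal, hypothetical sketch of one of the "simple classic cryptographic problems" the paper mentions, Yao's millionaires' problem. A stateless stand-in function plays the role of the trusted capable model, and the environment constrains its output to a single bit of information; all names here are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of a Trusted Capable Model Environment (TCME)
# solving Yao's millionaires' problem: which party is richer, without
# revealing either fortune. The "model" below is a stand-in for a
# capable ML model running statelessly under information-flow control.

def trusted_model(prompt: str) -> str:
    """Stand-in for a stateless, capable model. In a real TCME the
    environment would enforce that the model keeps no state and emits
    nothing beyond the constrained output format."""
    a, b = (int(x) for x in prompt.split(","))
    return "alice" if a > b else "bob"

def millionaires(alice_wealth: int, bob_wealth: int) -> str:
    # Input constraint: each party contributes only its own number.
    prompt = f"{alice_wealth},{bob_wealth}"
    # Output constraint: one token from a fixed vocabulary, so at most
    # one bit of information about the private inputs escapes.
    answer = trusted_model(prompt)
    assert answer in {"alice", "bob"}
    return answer

print(millionaires(3_000_000, 5_000_000))  # -> "bob"
```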

Training #AIs, whether #LLMs or #LMMs, is all about data.

They perform tasks within the distribution of their training data, and at the level of competence that data exhibits.

This also applies to reinforcement learning; behind the scenes, it is supervised learning with extra steps. The error is still backpropagated through the network, but the objectives are slightly different: instead of direct task performances to imitate, the feedback comes from exploration and rewards. Essentially, the most successful explorations become the data.
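A minimal sketch of that claim in PyTorch (illustrative, not any particular lab's training code): the REINFORCE policy-gradient loss is just a reward-weighted log-likelihood of the sampled action, trained with the same backpropagation as a supervised classifier.

```python
# Illustrative sketch: policy-gradient RL as reward-weighted
# supervised learning. Toy policy, single step; not production code.
import torch

policy = torch.nn.Linear(4, 3)            # 4 state features -> 3 actions
opt = torch.optim.SGD(policy.parameters(), lr=0.1)

state = torch.randn(1, 4)
dist = torch.distributions.Categorical(logits=policy(state))
action = dist.sample()                     # exploration picks the "label"
reward = 1.0                               # the environment scores it

# Supervised learning would use: loss = F.cross_entropy(logits, label).
# REINFORCE uses the same log-likelihood, weighted by the reward:
loss = -(reward * dist.log_prob(action)).mean()

opt.zero_grad()
loss.backward()                            # same backpropagation machinery
opt.step()
```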

So the data is where the buck stops when an #AI makes a misjudgement. When designing data pipelines, refinement processes, and exploratory games for your AIs, you can forget everything you know about the mechanistic aspects of Transformer architectures and simply focus on how to improve the data.

Data is better if it can be used to train better models. Better models have higher competence in skills, a larger volume of relevant knowledge, and broader coverage in task generality and flexibility. Hence the best data maximizes exactly those aspects.

Human level is not the gold standard, and neither is raw real-world data. We can design processes which refine the data without apparent limits, and make that data alive through AIs trained on it.

It is all about reasoning now. #LLM chatbots are already superhuman in the breadth of their knowledge, and no human can compete with the capacity of their billions of parameters.

Indeed, we aren't benchmarking their knowledge against a single human, but against the totality of humankind.

But to get the most value out of this knowledge, these #AIs need to be able to execute cognitive skills on that knowledge as instructed. Reasoning is one such skill, "System 2" in Kahneman's categorization. But it's not just about building an architecture which is in principle capable of System 2 thinking.

Reasoning is not a single skill; it is an open-ended bag of skills which require mental exploration and which allow producing more knowledge out of already crystallized knowledge.

For example, there are skills needed for solving mathematical equations: multiplication, division, and so on. These are different mental skills from those needed for constructing and evaluating mathematical proofs. Furthermore, the reasoning skills involved in engineering tasks, scientific work, medical evaluations, and so on all differ slightly from one another.

Although there are some ridiculously transferable reasoning skills which apply across many domains, many domains also have their own reasoning skills which differ from their counterparts in other domains.

We can teach these skills to AIs the same way we taught them knowledge: first by presenting the skills in volume, and then by letting the AIs practice and improve while doing so, as sketched below.
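A toy sketch of that two-phase recipe, with a lookup table standing in for the model (all names here are illustrative, not a real training API): phase one presents worked examples in volume, phase two lets the model practice and keeps the successful explorations as new data.

```python
# Hedged, toy sketch of the two-phase recipe. The "model" is a lookup
# table, not a real network; only the shape of the pipeline matters:
# imitate in volume first, then practice and keep what worked.
import random

class ToyModel:
    def __init__(self):
        self.memory = {}                    # stands in for the weights
    def update(self, prompt, target):
        self.memory[prompt] = target        # stands in for a gradient step
    def generate(self, prompt):
        # "Explore": recall if known, otherwise guess.
        return self.memory.get(prompt, random.randint(0, 20))

def env_score(prompt, attempt):
    a, b = prompt                           # toy task: add two small numbers
    return 1.0 if attempt == a + b else 0.0

model = ToyModel()

# Phase 1: present the skill in volume (supervised imitation).
for a in range(5):
    for b in range(5):
        model.update((a, b), a + b)

# Phase 2: practice on new tasks; successful explorations become data.
for _ in range(10_000):
    prompt = (random.randint(0, 9), random.randint(0, 9))
    attempt = model.generate(prompt)
    if env_score(prompt, attempt) > 0:
        model.update(prompt, attempt)

print(model.generate((7, 8)))               # likely 15 after practice
```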

It is the same as with humans, but in the end we get AIs which rival the whole of humanity not only in the volume of available knowledge but also in cognitive skills, and which can apply their universalist expertise across all domains.

This alone, even without surpassing humanity's intelligence, will mean a golden age never seen before. Imagine the repercussions.

Replied in thread

to the #Kavkaz port on the #Russian coast of the #Kerch_Strait

However, some voyages are currently delayed for unknown reasons, while other vessels have turned off the #AIS vessel identification system, so it is impossible to determine the location of the tankers.

One such vessel is the #Volgoneft_109. On December 17, just two days after the disaster involving #Volgoneft_239 and #Volgoneft_212 in the #Kerch_Strait, its captain issued a distress signal