101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We allow posts of up to 2048 characters.

#gpt3


Putting together a hashtag block list to clean my timeline of everything about "generative AI" (seriously, I can't take it anymore, there are times when people talk about nothing else…).
So far I have: #AI #IA, #LLM, #ChatGPT, #GPT #GPT3, #GPT4, #GPT5 (yes, I'm planning ahead), #GoogleGemini, #Copilot, #Bard, #BingChat, #LLama, #Mistral.

Can you think of any others?

I'm hesitant to add #Gemini, but I'm afraid it would block toots about the protocol…

#gpt3 #threads #humor #parody

Bro: "Arr, ye scallywags! Mastodon be the finest sea, far better'n Threads, ye hear?"

Craig: "Avast, ye lubber! Threads be where the treasure's at. Ye be barking up the wrong mast."

Bro: "Ye dare? Mastodon's waters be clearer than yer murky Threads!"

Craig: "Ha! Thought I'd be nice today, let Mastodon aboard. But mark me words, it ain't worth me time, not by a long shot."

Bro: "Blimey! Me own words be turning against me. Mayhaps I slammed Threads too hard."

#gpt3 #threads #humor #parody

Masto: "Mastodon's for everyone! Uh, except 'thread' folks. But hey, look at my cool outfit, right? No fallacies here, just fashion!"

Threaddy: "That's so unfair, Masto! You can't just exclude 'thread' people. That's, like, name-ism! And talking about your clothes? Really? What does that have to do with anything?"

Masto: "But I'm really, really stylish! Doesn't that count for something? Let's just focus on how good I look and forget the rest."


Addendum 1

AutoML-GPT: Large Language Model for AutoML
arxiv.org/abs/2309.01125

* AutoML-GPT integrates a comprehensive set of tools and libraries
* grants access to a wide range of data preprocessing, feature engineering, and model selection algorithms
* conversational interface: users can specify requirements, constraints, and evaluation metrics
* manages the complexity of the ML pipeline, significantly reducing the time and effort required (a rough sketch of the idea follows below)
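
Not from the paper, but as a rough illustration of the workflow those bullets describe (the user states a task, metric, and constraints; the system handles preprocessing, model selection, and evaluation), here is a minimal, hypothetical scikit-learn sketch. The `request` dict, the candidate models, and all names below are my own assumptions, not the AutoML-GPT API.

```python
# Hypothetical sketch: NOT the AutoML-GPT implementation, just an illustration
# of the "user specifies requirements, the system handles the pipeline" loop,
# using plain scikit-learn.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# A user "request" as structured constraints (in AutoML-GPT these would be
# extracted from a natural-language conversation).
request = {"task": "classification", "metric": "f1_macro", "cv_folds": 5}

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Candidate pipelines: preprocessing and model selection handled for the user.
candidates = {
    "logreg": Pipeline([("scale", StandardScaler()),
                        ("clf", LogisticRegression(max_iter=1000))]),
    "rf": Pipeline([("clf", RandomForestClassifier(n_estimators=100,
                                                   random_state=0))]),
}

# Evaluate each candidate with the user's metric and cross-validation budget.
scores = {
    name: cross_val_score(pipe, X, y,
                          scoring=request["metric"],
                          cv=request["cv_folds"]).mean()
    for name, pipe in candidates.items()
}
best = max(scores, key=scores.get)
print(f"best model: {best}, mean {request['metric']}: {scores[best]:.3f}")
```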

#AutoML #AutoMLGPT #GPT #GPT3 #GPT4 #LargeLanguageModels #LLM #ChatGPT

At work, those of us who felt like it were invited to come up with a name for our future online store.

Of course I'm too lazy to think, and "naaah, that's a bit much" probably wouldn't fly, so we decided to use GPT. I suggested something Scandinavian-sounding, because all those IKEA-style names always sound better.

Its suggestion?

Viking Market.

Curtain.

#ai #chatgpt #gpt3

Addendum 7

MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in LLM
arxiv.org/abs/2308.09729

* see addendum 4
* prompts the LLM with knowledge graphs (KGs)
* engages the LLM with external knowledge and elicits reasoning pathways
* the prompting endows the LLM with the ability to comprehend KG inputs
* builds a mind map on which LLMs perform reasoning and generate answers
* an ontology of knowledge
* GPT-3.5 prompted with MindMap consistently outperforms GPT-4 (a rough sketch of the prompting idea follows below)
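
For intuition only, here is a minimal, hypothetical sketch of knowledge-graph prompting in the spirit of MindMap: retrieve triples relevant to a question, serialize that evidence sub-graph into the prompt, and ask the model to reason over it. The toy triples, the retrieval heuristic, and the prompt wording are my own assumptions, not the paper's actual format.

```python
# Hypothetical sketch of knowledge-graph prompting: build an evidence
# sub-graph for a question and serialize it into the prompt sent to the LLM.

knowledge_graph = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "is_a", "anticoagulant"),
]

question = "Can a patient on warfarin safely take aspirin for a headache?"

def retrieve_triples(question, kg):
    """Naive retrieval: keep triples whose entities appear in the question."""
    text = question.lower()
    return [t for t in kg if t[0] in text or t[2] in text]

evidence = retrieve_triples(question, knowledge_graph)
evidence_text = "\n".join(f"({s}) -[{r}]-> ({o})" for s, r, o in evidence)

prompt = (
    "You are given an evidence sub-graph as (subject) -[relation]-> (object) triples.\n"
    f"{evidence_text}\n\n"
    "Build a mind map of the reasoning pathways these triples support, then answer:\n"
    f"{question}"
)
print(prompt)  # this string would be sent to GPT-3.5 / GPT-4
```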

The take that #gpt3 "has mastered analogical reasoning" is dangerous #bullshit. It's a predictive system, it understands nothing. What's going on is that analogies happen to be highly predictable. I can tell you from personal experience that it's dead easy to generate analogies without really understanding them. That's how I got 98th+ percentile on the SATs.
popsci.com/technology/gpt-3-la
[h/t @librarianshipwreck | mastodon.social/@librarianship]

Popular Science · GPT-3 is pretty good at taking the SATs. It has somehow mastered analogical reasoning, which has long been thought to be a "uniquely human ability."

Venturing onto unfamiliar terrain: I just published a working paper about the use of Large Language Models for low-resource programming languages! 🖥️👋

The study shows that #LLM-s can be useful for writing, understanding, improving, and documenting code.

I chose #gretl + its domain-specific scripting language for #statistics + #econometrics for illustration.

Comments welcome!
arxiv.org/abs/2307.13018

#softwaredevelopment #computerscience #GPT3.5 #econtwitter

@gretl

arXiv.org · The potential of LLMs for coding with low-resource and domain-specific programming languages. This paper presents a study on the feasibility of using large language models (LLM) for coding with low-resource and domain-specific programming languages that typically lack the amount of data required for effective LLM processing techniques. This study focuses on the econometric scripting language named hansl of the open-source software gretl and employs a proprietary LLM based on GPT-3.5. Our findings suggest that LLMs can be a useful tool for writing, understanding, improving, and documenting gretl code, which includes generating descriptive docstrings for functions and providing precise explanations for abstract and poorly documented econometric code. While the LLM showcased promising docstring-to-code translation capability, we also identify some limitations, such as its inability to improve certain sections of code and to write accurate unit tests. This study is a step towards leveraging the power of LLMs to facilitate software development in low-resource programming languages and ultimately to lower barriers to entry for their adoption.
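
The docstring-generation use case the abstract mentions is easy to picture with a small, hypothetical sketch: send a toy hansl function to a GPT-3.5-class model and ask for a descriptive docstring. The hansl snippet, the prompt wording, and the use of the OpenAI Python SDK with `gpt-3.5-turbo` are my own assumptions; the paper used a proprietary GPT-3.5-based LLM.

```python
# Hypothetical sketch of LLM-assisted documentation for hansl (gretl) code.
# Assumes the OpenAI Python SDK (>=1.0) and an API key in OPENAI_API_KEY;
# the hansl function below is a made-up toy example.
from openai import OpenAI

hansl_function = """
function scalar coef_of_variation (series y)
    return sd(y) / mean(y)
end function
"""

prompt = (
    "The following is a hansl function for the gretl econometrics package.\n"
    "Write a concise, descriptive docstring explaining its purpose, its "
    "argument, and its return value.\n\n" + hansl_function
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```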

Amplifying Limitations, Harms and Risks of Large Language Models
arxiv.org/abs/2307.04821

This article addresses the hype around Artificial Intelligence (AI) and its capabilities, and the distraction provided by talk of sci-fi scenarios that might arise if AI should become sentient or super-intelligent.
It may also help those outside the field become more informed about some of the limitations of AI technology.

arXiv.org · Amplifying Limitations, Harms and Risks of Large Language Models. We present this article as a small gesture in an attempt to counter what appears to be exponentially growing hype around Artificial Intelligence (AI) and its capabilities, and the distraction provided by the associated talk of science-fiction scenarios that might arise if AI should become sentient and super-intelligent. It may also help those outside of the field to become more informed about some of the limitations of AI technology. In the current context of popular discourse AI defaults to mean foundation and large language models (LLMs) such as those used to create ChatGPT. This in itself is a misrepresentation of the diversity, depth and volume of research, researchers, and technology that truly represents the field of AI. AI being a field of research that has existed in software artefacts since at least the 1950s. We set out to highlight a number of limitations of LLMs, and in so doing highlight that harms have already arisen and will continue to arise due to these limitations. Along the way we also highlight some of the associated risks for individuals and organisations in using this technology.

At least it gave me a #GPT3 CV:

> "Bblfish" is the online handle of Henry Story, a computer programmer and philosopher who has been active in the development of various internet technologies, including the Semantic Web and WebID. He is known for his work on the FOAF (Friend of a Friend) project, which aims to create a machine-readable format for representing social networks.
>
> In addition to his technical work, Story has also written extensively on the social and ethical implications of technology, particularly in relation to issues of privacy, security, and identity. He has argued for the importance of user control over personal data, and has advocated for the development of decentralized technologies that give individuals more power over their online presence.
>
> Overall, Bblfish/Henry Story is a notable figure in the world of internet technology and philosophy, and his contributions have helped to shape the development of the web in important ways.

mathstodon.xyz/@bblfish/110244


Unifying Large Language Models and Knowledge Graphs: A Roadmap
arxiv.org/abs/2306.08302

Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. ...

arXiv.org · Unifying Large Language Models and Knowledge Graphs: A Roadmap. Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolving by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.

AutoML in the Age of Large Language Models: Current Challenges, Future Opportunities and Risks
arxiv.org/abs/2306.08107

We envision that the two fields can radically push the boundaries of each other through tight integration (potentially disrupting both fields) ... we explore the potential of a symbiotic relationship between AutoML and LLMs.

* [Wikipedia] Automated machine learning: en.wikipedia.org/wiki/Automate

arXiv.org · AutoML in the Age of Large Language Models: Current Challenges, Future Opportunities and Risks. The fields of both Natural Language Processing (NLP) and Automated Machine Learning (AutoML) have achieved remarkable results over the past years. In NLP, especially Large Language Models (LLMs) have experienced a rapid series of breakthroughs very recently. We envision that the two fields can radically push the boundaries of each other through tight integration. To showcase this vision, we explore the potential of a symbiotic relationship between AutoML and LLMs, shedding light on how they can benefit each other. In particular, we investigate both the opportunities to enhance AutoML approaches with LLMs from different perspectives and the challenges of leveraging AutoML to further improve LLMs. To this end, we survey existing work, and we critically assess risks. We strongly believe that the integration of the two fields has the potential to disrupt both fields, NLP and AutoML. By highlighting conceivable synergies, but also risks, we aim to foster further exploration at the intersection of AutoML and LLMs.
#AutoML #GPT #GPT3