101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2048 characters.

Server stats: 516 active users

#finetuning


Good news, though it is a pity they compared against GPT-3.5; the finding will probably also hold for the next generation of models.
"Our analysis shows that fine-tuning improves the performance of open-source LLMs, allowing them to match or even surpass zero-shot GPT-3.5 and GPT-4, though still lagging behind fine-tuned GPT-3.5."
link.springer.com/article/10.1
#opensource #LLM #AI #finetuning

SpringerLink
Open-source LLMs for text annotation: a practical guide for model setting and fine-tuning - Journal of Computational Social Science
This paper studies the performance of open-source Large Language Models (LLMs) in text classification tasks typical for political science research. By examining tasks like stance, topic, and relevance classification, we aim to guide scholars in making informed decisions about their use of LLMs for text analysis and to establish a baseline performance benchmark that demonstrates the models’ effectiveness. Specifically, we conduct an assessment of both zero-shot and fine-tuned LLMs across a range of text annotation tasks using news articles and tweets datasets. Our analysis shows that fine-tuning improves the performance of open-source LLMs, allowing them to match or even surpass zero-shot GPT-3.5 and GPT-4, though still lagging behind fine-tuned GPT-3.5. We further establish that fine-tuning is preferable to few-shot training with a relatively modest quantity of annotated text. Our findings show that fine-tuned open-source LLMs can be effectively deployed in a broad spectrum of text annotation applications. We provide a Python notebook facilitating the application of LLMs in text annotation for other researchers.
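
A minimal sketch of the kind of fine-tuning setup the paper describes, assuming a Hugging Face sequence-classification workflow; the model name, label set, file names, and hyperparameters below are illustrative placeholders, not the authors' actual configuration:

```python
# Sketch: fine-tune an open-source model for a text-annotation task
# (e.g. relevance classification). All names and settings are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "roberta-base"          # placeholder open-source model
labels = ["irrelevant", "relevant"]  # placeholder annotation scheme

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=len(labels))

# Assumes CSV files with "text" and integer "label" columns.
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

data = data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="annotation-model",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"], eval_dataset=data["test"])
trainer.train()
print(trainer.evaluate())  # reports eval loss; add compute_metrics for accuracy/F1
```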

LoRA vs. Full Fine-Tuning: An Illusion of Equivalence: arxiv.org/abs/2410.21228 #llm #lora #finetuning #model

arXiv.org
LoRA vs Full Fine-tuning: An Illusion of Equivalence
Fine-tuning is a crucial paradigm for adapting pre-trained large language models to downstream tasks. Recently, methods like Low-Rank Adaptation (LoRA) have been shown to match the performance of fully fine-tuned models on various tasks with an extreme reduction in the number of trainable parameters. Even in settings where both methods learn similarly accurate models, are their learned solutions really equivalent? We study how different fine-tuning methods change pre-trained models by analyzing the model's weight matrices through the lens of their spectral properties. We find that full fine-tuning and LoRA yield weight matrices whose singular value decompositions exhibit very different structure; moreover, the fine-tuned models themselves show distinct generalization behaviors when tested outside the adaptation task's distribution. More specifically, we first show that the weight matrices trained with LoRA have new, high-ranking singular vectors, which we call "intruder dimensions". Intruder dimensions do not appear during full fine-tuning. Second, we show that LoRA models with intruder dimensions, despite achieving similar performance to full fine-tuning on the target task, become worse models of the pre-training distribution and adapt less robustly to multiple tasks sequentially. Higher-rank, rank-stabilized LoRA models closely mirror full fine-tuning, even when performing on par with lower-rank LoRA models on the same tasks. These results suggest that models updated with LoRA and full fine-tuning access different parts of parameter space, even when they perform equally on the fine-tuned distribution. We conclude by examining why intruder dimensions appear in LoRA fine-tuned models, why they are undesirable, and how their effects can be minimized.
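
The spectral comparison the abstract describes can be sketched directly: take the SVD of a weight matrix before and after adaptation and flag fine-tuned singular vectors that are far from every pre-trained one. The threshold, the top-k restriction, and the toy LoRA-style update below are my own simplification for illustration, not the paper's exact procedure:

```python
# Sketch: flag "intruder dimensions" by comparing singular vectors of a
# fine-tuned weight matrix against those of the pre-trained matrix.
import numpy as np

def intruder_dimensions(W_pre: np.ndarray, W_ft: np.ndarray,
                        k: int = 10, sim_threshold: float = 0.5):
    """Indices among the top-k left singular vectors of W_ft whose cosine
    similarity to every singular vector of W_pre stays below the threshold."""
    U_pre, _, _ = np.linalg.svd(W_pre, full_matrices=False)
    U_ft, _, _ = np.linalg.svd(W_ft, full_matrices=False)

    intruders = []
    for i in range(k):
        # largest |cosine similarity| between this fine-tuned direction
        # and any pre-trained singular direction
        sims = np.abs(U_pre.T @ U_ft[:, i])
        if sims.max() < sim_threshold:
            intruders.append(i)
    return intruders

# Toy usage: a strong additive low-rank (LoRA-style) update tends to
# introduce new high-ranking directions; synthetic data, not from the paper.
rng = np.random.default_rng(0)
W_pre = rng.normal(size=(256, 256))
B, A = rng.normal(size=(256, 4)), rng.normal(size=(4, 256))
W_ft = W_pre + 2.0 * (B @ A)
print(intruder_dimensions(W_pre, W_ft))
```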

A Single Degree Of #Freedom Is All You Need Dept:
Today's disbelievers in #FreeWill are the equivalent of tomorrow's #FlatEarthers and #Antivaxxers, unable to appreciate the vastness of a time dimension that stretches more than thirty seconds into the future. They've even flattened #Spacetime in order to justify their artifice. Never mind the possibility of a nominal seven extra dimensions they have no idea about. But... #FineTuning! Yeah, about that. Show me ur #QuantumGravity. I'll show you mine.


[Generative AI] Finetuning vs. Prompting: Two Usage Patterns Arising from Different Expectations of Large Language Models (1/3) - YouTube
youtube.com/watch?v=F58vJcGgjt

[Generative AI] Finetuning vs. Prompting: Two Usage Patterns Arising from Different Expectations of Large Language Models (2/3) - YouTube
youtube.com/watch?v=aZ_jXZvxyV

[Generative AI] Finetuning vs. Prompting: Two Usage Patterns Arising from Different Expectations of Large Language Models (3/3) - YouTube
youtube.com/watch?v=HnzDaEiN_e

#ChatGPT #科普 #AI #Finetuning #Prompting
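
To make the contrast in these lectures concrete, here is a minimal sketch of the two usage modes using Hugging Face pipelines; both model IDs are placeholders chosen for illustration, not the models discussed in the videos:

```python
# Sketch of two ways to use a language model:
# (1) prompting one general instruction-tuned model, versus
# (2) calling a model already fine-tuned for a single fixed task.
from transformers import pipeline

text = "The new tariff will raise energy prices for most households."

# 1) Prompting: a generalist model, with the task described in the prompt.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
prompt = (f"Classify the topic of this sentence as politics, sports, or tech.\n"
          f"Sentence: {text}\nTopic:")
print(generator(prompt, max_new_tokens=5)[0]["generated_text"])

# 2) Fine-tuning: a specialist model (here a sentiment classifier) that does
#    exactly one task and needs no instructions at inference time.
classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier(text))
```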

Could we say that AI hypnotists are the modern-day alchemists?