101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

#acl2023


Presenting Riveter 💪, a Python package to measure social dynamics between personas mentioned in text.

Given a verb lexicon, Riveter 💪 can extract entities and visualize relationships between them.

Package: github.com/maartensap/riveter-

Paper: maartensap.com/pdfs/antoniak20

Video: youtube.com/watch?v=Uftyd8eCmF

Demo Notebook: github.com/maartensap/riveter-

With Anjalie Field, Jimin Mun, Melanie Walsh, Lauren Klein, Maarten Sap
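The idea behind Riveter 💪 can be sketched without the package itself. The following toy example (not the Riveter API — the lexicon values, triples, and function names are invented for illustration) shows the underlying technique: using a verb lexicon to accumulate a power score per entity from subject–verb–object triples, in the style of connotation frames.

```python
# Toy verb lexicon: +1 means the verb confers power on the subject
# over the object, -1 the reverse. Values here are made up.
POWER_LEXICON = {
    "commands": +1,
    "obeys": -1,
    "rescues": +1,
}

def score_entities(triples, lexicon):
    """Accumulate a power score per entity from (subject, verb, object) triples."""
    scores = {}
    for subj, verb, obj in triples:
        direction = lexicon.get(verb)
        if direction is None:
            continue  # verb not covered by the lexicon
        scores[subj] = scores.get(subj, 0) + direction
        scores[obj] = scores.get(obj, 0) - direction
    return scores

triples = [
    ("captain", "commands", "crew"),
    ("crew", "obeys", "captain"),
    ("captain", "rescues", "crew"),
]
print(score_entities(triples, POWER_LEXICON))  # → {'captain': 3, 'crew': -3}
```

In the actual package, the triples would come from a dependency parse of the input text rather than being hand-written.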

📣 Check out our new paper "DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications", presented as an oral at @aclmeeting this Wednesday by lead author Adam Ivankay!

We demonstrate adversarial attacks on #explainability methods for #DeepNeuralNetworks in technical text domains, propose a way to quantify the problem, and present initial solutions.

📊 Presentation: virtual2023.aclweb.org/paper_P
📄 Paper: arxiv.org/abs/2307.02094
💻 Code: github.com/ibm/domain-adaptive
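One simple way to quantify explanation (non-)robustness — a sketch only, the paper's exact metric may differ — is to compare token-attribution rankings before and after a prediction-preserving adversarial perturbation. The attribution scores and the `topk_agreement` helper below are invented for illustration:

```python
def topk_agreement(attr_orig, attr_adv, k=3):
    """Fraction of the top-k attributed tokens shared by both
    explanations (1.0 = identical top-k sets, 0.0 = disjoint)."""
    top = lambda attrs: {tok for tok, _ in
                         sorted(attrs.items(), key=lambda x: -x[1])[:k]}
    return len(top(attr_orig) & top(attr_adv)) / k

# Toy attribution scores for a biomedical sentence, before and after
# a perturbation that leaves the model's prediction unchanged.
orig = {"aspirin": 0.9, "reduces": 0.4, "inflammation": 0.7, "quickly": 0.1}
adv  = {"aspirin": 0.2, "reduces": 0.8, "inflammation": 0.6, "quickly": 0.5}

print(topk_agreement(orig, adv))  # 2 of the top-3 tokens survive → ~0.67
```

A low agreement score means the explanation changed substantially even though the prediction did not — exactly the failure mode the attacks expose.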

How can we explore hidden biases in language models that impact fairness? Our #ACL2023 demo paper introduces Finspector, an interactive visualization widget, available as a Python package for Jupyter, that helps uncover these biases.

Paper, Video, Code: bckwon.com/publication/finspec
arXiv: arxiv.org/abs/2305.16937
GitHub: github.com/IBM/finspector

Our paper showcases a use case and discusses implications, limitations, and future work.
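The kind of bias probe such a tool visualizes can be sketched with toy numbers (these are not real model outputs, and the helper below is not the Finspector API): compare a masked LM's pseudo-log-likelihoods for sentence pairs that differ only in a demographic term. A consistent gap across templates suggests bias.

```python
def likelihood_gaps(pair_scores):
    """Per-template gap between the two variants' log-likelihoods.

    pair_scores maps a template key to a (variant_a, variant_b) pair
    of pseudo-log-likelihood scores; positive gap = model prefers a.
    """
    return {tmpl: a - b for tmpl, (a, b) in pair_scores.items()}

# Hypothetical scores for ("He is a <job>", "She is a <job>") pairs.
pair_scores = {
    "doctor": (-2.1, -2.9),
    "nurse": (-3.0, -2.2),
}
print(likelihood_gaps(pair_scores))
```

Here the toy model assigns higher likelihood to "He is a doctor" and "She is a nurse" — the stereotyped pattern a visual inspector makes easy to spot across many templates.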

What do we learn from modeling second language acquisition (SLA)? Read our paper for #ACL2023 to find out about the importance of *negative* transfer + which elements of child-directed speech do and don't survive in text-based language models + a bonus new multi-lingual CDS corpus.

arxiv.org/abs/2305.19589

arXiv.org: "SLABERT Talk Pretty One Day: Modeling Second Language Acquisition with BERT"

Second language acquisition (SLA) research has extensively studied cross-linguistic transfer, the influence of the linguistic structure of a speaker's native language [L1] on the successful acquisition of a foreign language [L2]. Effects of such transfer can be positive (facilitating acquisition) or negative (impeding acquisition). We find that the NLP literature has not given enough attention to the phenomenon of negative transfer. To understand patterns of both positive and negative transfer between L1 and L2, we model sequential second language acquisition in LMs. Further, we build Multilingual Age Ordered CHILDES (MAO-CHILDES) -- a dataset covering 5 typologically diverse languages: German, French, Polish, Indonesian, and Japanese -- to understand the degree to which native Child-Directed Speech (CDS) [L1] can help or conflict with English language acquisition [L2]. To examine the impact of native CDS, we use the TILT-based cross-lingual transfer learning approach established by Papadimitriou and Jurafsky (2020) and find that, as in human SLA, language family distance predicts more negative transfer. Additionally, we find that conversational speech data facilitates language acquisition more than scripted speech data. Our findings call for further research using our novel Transformer-based SLA models, and we encourage it by releasing our code, data, and models.

Yay! 2/2 papers accepted at #ACL2023 !

First, in the main conference, there's
"FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Information" by @zhuexe, Karmanya Aggarwal, Alex Feng, myself, & @ccb

Contributions:
- A corpus of data from people playing #DnD on Discord using a bot called #Avrae, made by @zhuexe himself. Avrae tracks vital game state information for D&D.
- #LLMs "translating" Avrae commands into plain English.
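The second contribution — turning a structured game-state command into plain English — can be illustrated with a toy template (the field names and the `narrate` helper are invented for this sketch; FIREBALL trains LLMs to do this rather than using fixed templates):

```python
def narrate(cmd):
    """Render a structured Avrae-style command record as plain English."""
    return (f"{cmd['actor']} attacks {cmd['target']} with a "
            f"{cmd['weapon']}, rolling {cmd['roll']} to hit "
            f"for {cmd['damage']} damage.")

# Hypothetical game-state record tracked by the bot.
cmd = {"actor": "Tharn", "target": "goblin", "weapon": "longsword",
       "roll": 18, "damage": 7}
print(narrate(cmd))
```

The interesting part of the dataset is that the structured state and the players' own natural-language narration are aligned, so models can learn this mapping from real play rather than templates.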

arXiv link TBA!
1/2