101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 477 active users

#reproducibility


🚀 We’re all set up at useR! 2025!
📍 Penn Pavilion, Duke University, Durham, NC
🗓️ August 8–10

The Digital Research Academy booth is ready to welcome you — and yes, the pins and swag are looking fantastic! 🎁✨

If you’re at #useR2025, we’d love to meet you — whether you’re a trainer, researcher, or just curious about better, fairer science.

📍 Find us, say hi, and let’s chat!

Dear HPC users,

We invite you to take part in a research survey that explores how researchers approach computational reproducibility, especially when working with HPC and cloud infrastructure. The goal is to better understand current practices, challenges, and needs around reproducible research workflows.

Survey link:
👉 ec.europa.eu/eusurvey/runner/c

The survey takes approximately 10 minutes. It is anonymous and entirely voluntary. The results will be published in a research paper and also contribute to shaping best practices and training resources that support reproducible science, such as those developed in the de.KCD project.

The survey is open until August 31st, 2025.

Your time and input are greatly appreciated in advancing more reproducible and reliable computational research.

We're now in the middle of the special roundtable session on "Living With Machines: Comparative Literature, AI, and the Ethics of Digital Imagination".

I'll speak briefly about different kinds of AI (generative LLMs, non-generative LLMs, deep learning, machine learning); when to use which and, most importantly, when NOT to use LLMs; as well as some best practices for #transparency, #reproducibility and #sustainability in this context.

Details: conftool.pro/icla2025/index.ph

www.conftool.pro · 2025 ICLA Congress - ConfTool Pro - Browse Sessions

Retractions and failures to replicate are signs of weak research. But they're also signs of laudable and necessary efforts to identify weak research and improve future research. The #Trump admin is systematically weaponizing these efforts to cast doubt on science as such.

"Research-integrity sleuths say their work is being ‘twisted’ to undermine science."
nature.com/articles/d41586-025

www.nature.com · Research-integrity sleuths say their work is being ‘twisted’ to undermine science: Some sleuths fear that the business of cleaning up flawed studies is being weaponized against science itself.

And yet another one in the ever-increasing list of analyses showing that top journals are bad for science:

"Thus, our analysis show major claims published in low-impact journals are significantly more likely to be reproducible than major claims published in trophy journals. "

biorxiv.org/content/10.1101/20

bioRxiv · A retrospective analysis of 400 publications reveals patterns of irreproducibility across an entire life sciences research field

The ReproSci project retrospectively analyzed the reproducibility of 1006 claims from 400 papers published between 1959 and 2011 in the field of Drosophila immunity. The project attempts to provide a comprehensive assessment, 14 years later, of the replicability of nearly all publications across an entire scientific community in experimental life sciences. The authors found that 61% of claims were verified, while only 7% were directly challenged (not reproducible), a replicability rate higher than in previous assessments. Notably, 24% of claims had never been independently tested and remain unchallenged. Experimental validation of a selection of 45 unchallenged claims revealed that a significant fraction of them (38/45) is in fact non-reproducible. The authors also found that high-impact journals and top-ranked institutions are more likely to publish challenged claims. In line with the reproducibility-crisis narrative, the rates of both challenged and unchallenged claims increased over time, especially as the field gained popularity. Irreproducibility was unevenly distributed among first and last authors. Surprisingly, irreproducibility rates were similar between PhD students and postdocs, and did not decrease with experience or publication count. However, group leaders who had prior experience as first authors in another Drosophila immunity team had lower irreproducibility rates, underscoring the importance of early-career training. Finally, authors with a more exploratory, short-term engagement with the field exhibited slightly higher rates of challenged claims and a markedly higher proportion of unchallenged ones. This systematic, field-wide retrospective study offers meaningful insights into the ongoing discussion on reproducibility in experimental life sciences.

To my knowledge, this is the first time that not only prestigious journals but also prestigious institutions have been implicated as major drivers of irreproducibility:

"Higher representation of challenged claims in trophy journals and from top universities"

biorxiv.org/content/10.1101/20


Jack Taylor is now presenting a new #Rstats package: "LexOPS: A Reproducible Solution to Stimuli Selection". Jack bravely did a live demonstration based on a German corpus ("because we're in Germany") that generated matched stimuli that certainly made the audience giggle... let's just say that one match involved the word "Erektion"... 😂

There is a paper about the LexOPS package: link.springer.com/article/10.3 and a detailed tutorial: jackedtaylor.github.io/LexOPSd. There is also a #Shiny app for those who would rather not use R, which allows code download for #reproducibility: jackedtaylor.github.io/LexOPSd. A really cool and useful project! #WoReLa1 #linguistics #psycholinguistics


7/ Wei Mun Chan, Research Integrity Manager

With 10+ years in publishing and data curation, Wei Mun ensures every paper meets our high standards for ethics and #reproducibility. From image checks to data policies, he’s the quiet force keeping the scientific record trustworthy.

TODAY (Monday) 16-17:30 CEST #ReproducibiliTea in the HumaniTeas goes qualitative! ✨ Nathan Dykes (Department of #DigitalHumanities and Social Studies @FAU) will give a 20-min input talk entitled "Beyond the gold standard: Transparency in qualitative corpus analysis", followed by a 60-min open discussion on applying the principles of #OpenScience to qualitative research. 🤓

Everyone is welcome, whether on-site @unibibkoeln (where you can also enjoy a range of teas and snacks) or online via Zoom. Please join our mailing list to receive the Zoom link: lists.uni-koeln.de/mailman/lis (or DM me if you read this after 14:00 CEST). 🫖🍪