101010.pl is one of the many independent Mastodon servers you can use to participate in the fediverse.
101010.pl is the oldest Polish Mastodon server. We support posts of up to 2,048 characters.

Server stats: 485 active users

#neuroai


Come along to my (free, online) UCL NeuroAI talk next week on neural architectures. What are they good for? All will finally be revealed and you'll never have to think about that question again afterwards. Yep. Definitely that.

🗓️ Wed 12 Feb 2025
⏰ 2-3pm GMT
ℹ️ Details and registration: eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series: A series of NeuroAI-themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

Introducing our brand-new course on NeuroAI, where we ask "What are the common principles of natural and artificial intelligence?"

Apply now: neuromatch.io/neuroai-course 💻🤖

Please note that this course is aimed at a more advanced audience than our other courses.

Student Applications Close Sunday, March 24, at midnight in the last time zone on Earth (UTC-12).

We charge low, regionally adjusted tuition fees for students, and offer fee waivers where needed without impact on admission.

TA Applications Close Sunday, March 17, at midnight in the last time zone on Earth (UTC-12).

Teaching Assistant positions are paid, full-time, temporary, contracted roles.

@neuromatch@a.gup.pe @academicchatter

I’ve noticed a strong alignment between those who think that the computer metaphor for the brain makes little sense and those who’ve thought about how the brain might give rise to emotion.

As much as I love all the progress happening in NeuroAI to push our understanding of perception, memory & intelligence forward, I very much think they are right - there’s a crucial swath that doesn’t seem to fit with that agenda.

On explanations in brain research:

The same thread comes up again and again in brain research: the notion that identifying the biological details (such as the brain areas, circuits, or neurotransmitters) associated with some brain function (like seeing, fear, or memory) is not a complete explanation of how the brain gives rise to that function, even if you can demonstrate that the links are causal. To paraphrase:

Mountcastle: Where is not how hup.harvard.edu/catalog.php?is
Marr: How is not what or why mechanism.ucsd.edu/teaching/f1
@MatteoCarandini: Links from circuits to behavior are a "bridge too far" nature.com/articles/nn.3043
Krakauer et al: Describing that is not understanding how cell.com/neuron/pdf/S0896-6273
Poeppel: Understanding brain maps does not formulate "what about" the brain gives rise to "what about" behavior ncbi.nlm.nih.gov/pmc/articles/

Any other explicit references to add to this list? @Iris, @knutson_brain, Anyone?

Also, I imagine that some form of the opposite idea must also be percolating: the notion that 'algorithmic' descriptions of the type used to build AI will be insufficient to do things like treat brain dysfunction (where we arguably need to know more about the biology to, e.g., create drugs). Any explicit references of that idea? @albertcardona @schoppik, @cyrilpedia, Anyone?

www.hup.harvard.edu · Perceptual Neuroscience — Vernon B. Mountcastle: This monumental work by one of the world's greatest living neuroscientists does nothing short of creating a new subdiscipline in the field: perceptual neuroscience. Vernon Mountcastle has gathered information from a vast number of sources reaching back through two centuries, from phylogenetic, comparative, and neuroanatomical studies of the neocortex to rhythmicity and synchronization in neocortical networks and inquiries into the binding problem.

Choice blindness. What!?

Learned about this in a new book by @summerfieldlab:

One fascinating instance of this is the phenomenon known as choice blindness. In a study conducted in Sweden, people were asked to fill out questionnaires about their political views. After submitting their answers, they received them back and were asked to verbally explain their views. Unbeknownst to participants, researchers switched the answer sheets, so that left-leaning people received right-leaning answers back, and vice versa. Of the 75% who failed to notice, many were happy to provide elaborate justifications for political positions opposing their own, apparently blind to the choices they had just made. Similar effects have been described with preferences about facial attractiveness and the taste of tea or jam. Choice blindness is an instance of post hoc rationalization, the tendency to invent motives in the light of actions, rather than choosing actions to satisfy motives.

See (Hall et al. 2010), (Johansson et al. 2005), and (Strandberg et al. 2020).

global.oup.com/academic/produc

global.oup.com · Natural General Intelligence: Since the time of Turing, computer scientists have dreamed of building artificial general intelligence (AGI) - a system that can think, learn and act as humans do. Over recent years, the remarkable pace of progress in machine learning research has reawakened discussions about AGI.

Picking up on some of the BIG IDEAS in brain research, a discussion that was wonderfully chaotic when we last had it in December under the hashtag #BrainIdeasCountdown, e.g. neuromatch.social/@NicoleCRust

Here's an attempt to fill in some blanks, and let's flip the hashtag: #BigBrainIdeas. The premise: there are facts, there are ideas, and then there are "Big Ideas"; I'll focus on the last. Please join in!

I'd argue that one of the most influential Big Ideas about the brain in the latter half of the 20th century is the notion that:

The neocortex is made up of a generic functional element that is repeated again and again, and from this repetition all of cortical function emerges.

I'm talking about the cortical column, first described by Vernon Mountcastle in 1957. The unit contains ~10K neurons and humans have ~25 million of them. The rapid evolution of humans is proposed to have followed from a rapid expansion of cortex that happened because of this repetitive crystalline structure. The gist behind the "functional" bit is that each unit always does the same generic computation, and the different functions of different brain areas result from the different inputs that these units receive. @TrackingActions very nicely summarizes the ideas here: nature.com/articles/s41583-022

So what does this generic functional unit do? Proposals vary. One idea, also reflected in deep convolutional neural networks, is that it does two(ish) things: selectivity and invariance, stacked repetitively to support things like recognizing objects. Other proposals suggest that the brain is a prediction machine and each unit contributes a little bit to those predictions in a manner that relies not just on feedforward connectivity, but also feedback. Some proposals suggest that the function of the unit varies along a gradient as a consequence of biophysical properties like receptor expression: nature.com/articles/s41583-020.
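Not from any of the papers above, just a toy illustration: here is a minimal numpy sketch of the "selectivity plus invariance" reading of the generic unit, with random placeholder kernels standing in for tuned receptive fields. The point is structural: the same two-step computation stacked repeatedly, with each stage differing only in its input.

```python
import numpy as np

def unit(x, kernel):
    """One hypothetical 'generic unit': selectivity, then invariance.

    Selectivity: a tuned filter plus rectification, so the unit
    responds strongly only where the input matches its kernel.
    Invariance: max-pooling over neighbouring positions, so the
    response tolerates small shifts of the preferred feature.
    """
    selective = np.maximum(np.convolve(x, kernel, mode="valid"), 0.0)
    n = len(selective) // 2 * 2                      # trim to an even length
    return selective[:n].reshape(-1, 2).max(axis=1)  # pool adjacent pairs

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)                      # toy 1-D input
kernels = [rng.standard_normal(5) for _ in range(3)]  # placeholder features

# Stack the *same* computation repeatedly; stages differ only in
# their inputs, echoing the cortical-column proposal.
h = signal
for k in kernels:
    h = unit(h, k)
print(h.shape)  # the representation shrinks as stages accumulate
```

Deep convolutional networks are essentially this motif scaled up (learned filters plus pooling), which is part of why they so often serve as the reference model in this debate.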

Among brain researchers, this Big Idea is polarizing - obvious to some and misguided to others. Where are you in terms of your 'buy in' with this big idea?

#neuroscience #psychology #neuroAI #cognition @cogneurophys #BigBrainIdeas

Neuromatch Social · Nicole Rust (@NicoleCRust@neuromatch.social): Here's a slightly more provocative way to pose the question: In The Idea of the Brain, Matthew Cobb argues, "In reality, no major conceptual innovation has been made in our overall understanding of how the brain works for over half a century ... we still think about brains in the way our scientific grandparents did." Setting aside semantic debates about what constitutes a "major conceptual innovation", brain researchers are clearly working on a large number of ideas that their grandparents had not thought of. But what are those, exactly?

A big thanks to @kendmiller for helping the #neuroscience community get its hashtag act together!

neuromatch.social/@kendmiller@

For neuro paper threads: sigmoid.social/about/more has already claimed #PaperThread and #NewPaper (the latter announcing a paper without a thread) for the AI community. I enjoy seeing their papers too, but we need a distinct tag for neuro papers. For a thread, ... maybe we want something simple like #NeuroPaperThread and #NeuroNewPaper?

Great idea, Ken - let's do this.

➡️​​EVERYONE: BOOKMARK THIS! ⬅️​ and follow those hashtags. And then don't hold back. After all, it's what we all show up here for: to hear what you've figured out and learn from you.

Qoto Mastodon · Ken Miller (@kendmiller@qoto.org): For neuro paper threads: sigmoid.social/about/more has already claimed #PaperThread and #NewPaper (the latter announcing a paper without a thread) for the AI community. I enjoy seeing their papers too, but we need a distinct tag for neuro papers. For a thread, somebody suggested a #TootSuite, and I had suggested a #MastoPiece, but maybe we want something simple like #NeuroPaperThread and #NeuroNewPaper? Or mix and match, like #NeuroSuite for a thread and #NeuroPaper for a paper w/o a thread?? Decisions, decisions. I nominate @NicoleCRust@neuromatch.social to be the hashtag czar. Decide for us Nicole! (And anyone else, weigh in to help inform our czar!) And from your lips, or at least typing fingers, to the Mastodon's ears ... @toddhorowitz@fediscience.org @LeonDLotter@neuromatch.social @phdstudents@a.gup.pe @academicchatter@a.gup.pe @neuroscience@a.gup.pe @cognition@a.gup.pe @NicoleCRust@neuromatch.social #neuroscience

#introduction: I am a #neuroscientist studying #3dvision in freely moving humans (lots of 3D researchers study static humans or freely moving rats). I am excited by the #cerebellum (which is computationally huge but hard to study electrophysiologically), less so by the #hippocampus (which has been studied in immense detail but may, in the end, have a more minor computational role). I follow #NeuroAI as #machinelearning and #reinforcementlearning may solve the brain, en passant, before we do.

The bittersweet lesson: data-rich models narrow the behavioural gap to human vision (Geirhos et al): jov.arvojournals.org/article.a

The questions raised at the end of this VSS abstract are interesting and quite important for #NeuroAI research.

"In the light of these findings, it is hard to avoid drawing parallels to the "bitter lesson" formulated by Rich Sutton, who argued that "building in how we think we think does not work in the long run" "

"Should we, perhaps, worry less about biologically faithful implementations and more about the algorithmic similarities between human and machine vision induced by training on large-scale datasets?"

Also see Geirhos et al NeurIPS 2021 for a more detailed version of the findings mentioned in the VSS abstract: proceedings.neurips.cc/paper/2
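For anyone who wants a concrete handle on "narrowing the behavioural gap": Geirhos and colleagues quantify human-model alignment with error consistency, i.e. Cohen's kappa computed over whether each observer got each trial right. Below is a minimal sketch of that metric, assuming its standard definition from their behavioural benchmarking work; the ten trials of response data are invented for illustration.

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Cohen's kappa over trial-by-trial correctness.

    correct_a / correct_b: boolean arrays, True where each observer
    answered that trial correctly. Kappa compares the observed
    agreement against the agreement expected from the two accuracies
    alone; kappa > 0 means the two observers err on the *same*
    trials more often than their accuracies would predict.
    """
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    observed = np.mean(a == b)                     # fraction of trials in agreement
    p_a, p_b = a.mean(), b.mean()                  # individual accuracies
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)   # chance-level agreement
    return (observed - expected) / (1 - expected)

# Made-up scores for 10 trials: 1 = correct, 0 = error.
human = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
model = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(error_consistency(human, model), 3))  # ~0.524
```

On this view, a data-rich model "narrows the gap" not just by matching human accuracy but by making its errors on the same images humans do, which is exactly what raw accuracy comparisons miss.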
