#computationalneuroscience


At Neuromatch Academy & Climatematch Academy, we’re not just running courses. Neuromatch is investing in the next generation of computational scientists, changemakers, & interdisciplinary thinkers.

As part of this mission, we offer Professional Development sessions that give our students & TAs real-world tools and insight before the coursework begins.

🤓Want to get involved with Neuromatch? Join our mailing list: neuromatch.io/mailing-list/

A few weeks ago, I shared a differential equations tutorial for beginners, written from the perspective of a neuroscientist who's had to grapple with the computational part. Following up on that, I've now tackled the first real beast encountered by most computational neuroscience students: the Hodgkin-Huxley model.

While the model remains remarkably elegant to this day, it is also a mathematically dense system of equations that can overwhelm and discourage beginners, especially those with non-mathematical backgrounds. As in the first tutorial, I've tried to build intuition step-by-step, starting with a simple RC circuit, layering in Na⁺ and K⁺ channels, and ending with the full spike-generation story.
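For anyone curious where that build-up lands, here's a rough sketch of the full model (my own illustration, not the tutorial's code): the standard Hodgkin-Huxley equations with the usual squid-axon parameters, integrated with a simple forward-Euler loop.

```python
# Minimal Hodgkin-Huxley sketch (illustrative only, not the tutorial's code).
# Units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2; standard squid-axon parameters.
import numpy as np

C = 1.0                                   # membrane capacitance
g_Na, g_K, g_L = 120.0, 36.0, 0.3         # maximal conductances
E_Na, E_K, E_L = 50.0, -77.0, -54.4       # reversal potentials

# Voltage-dependent opening/closing rates of the gating variables m, h, n
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T, I_ext = 0.01, 50.0, 10.0           # time step, duration, injected current
V = -65.0                                 # start at rest
m = a_m(V) / (a_m(V) + b_m(V))            # gates start at their steady states
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))

trace = []
for _ in range(int(T / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
    I_K = g_K * n**4 * (V - E_K)          # potassium current
    I_L = g_L * (V - E_L)                 # leak current (the bare RC circuit)
    V += dt * (I_ext - I_Na - I_K - I_L) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    trace.append(V)

print(f"peak membrane potential: {max(trace):.1f} mV")  # spikes overshoot ~+40 mV
```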

Feedback is welcome, especially from fellow non-math converts.
neurofrontiers.blog/building-a

#ComputationalNeuroscience #Python #hodgkinHuxleyModel #math #biophysics

From: @neurofrontiers
neuromatch.social/@neurofronti

Neurofrontiers · Building a virtual neuron - part 2 - Neurofrontiers

How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning.

New preprint from @yang_chu.

arxiv.org/abs/2001.10605

Thread below 👇

arXiv.org · Learning spatial hearing via innate mechanisms: The acoustic cues used by humans and other animals to localise sounds are subtle, and change during and after development. This means that we need to constantly relearn or recalibrate the auditory spatial map throughout our lifetimes. This is often thought of as a "supervised" learning process where a "teacher" (for example, a parent, or your visual system) tells you whether or not you guessed the location correctly, and you use this information to update your map. However, there is not always an obvious teacher (for example in babies or blind people). Using computational models, we showed that approximate feedback from a simple innate circuit, such as one that can distinguish left from right (e.g. the auditory orienting response), is sufficient to learn an accurate full-range spatial auditory map. Moreover, using this mechanism in addition to supervised learning can more robustly maintain the adaptive neural representation. We find several possible neural mechanisms that could underlie this type of learning, and hypothesise that multiple mechanisms may be present and interact with each other. We conclude that when studying spatial hearing, we should not assume that the only source of learning is from the visual system or other supervisory signal. Further study of the proposed mechanisms could allow us to design better rehabilitation programmes to accelerate relearning/recalibration of spatial maps.
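To make the key idea concrete, here's a toy numerical sketch (mine, not the paper's model; every name and number is invented for illustration): a map from an interaural cue to source azimuth is fitted using only a coarse, sign-only "too far left / too far right" teaching signal.

```python
# Toy sketch: learning a cue-to-azimuth map from sign-only feedback.
# Everything here is illustrative; it is not the model from the preprint.
import numpy as np

rng = np.random.default_rng(0)

def cue(azimuth_deg):
    # Idealised interaural cue: monotonic in azimuth, bounded in [-1, 1]
    return np.sin(np.deg2rad(azimuth_deg))

w, b = 0.0, 0.0          # linear readout: estimated azimuth = w * cue + b
lr = 0.5

for _ in range(20000):
    az = rng.uniform(-90.0, 90.0)          # true source direction (degrees)
    x = cue(az)
    estimate = w * x + b
    # Innate "orienting" circuit: reports only whether the guess was too far
    # left or too far right, never by how much.
    teach = np.sign(az - estimate)
    w += lr * teach * x                    # coarse, sign-based updates
    b += lr * teach

for az in (-60, 0, 60):
    print(az, round(w * cue(az) + b, 1))   # learned estimate vs true azimuth
```

Even this crude sign-only feedback pulls the readout towards a usable full-range map, which is the flavour of the result in the preprint (the paper's models and mechanisms are of course far richer).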

When I transitioned from cognitive to computational neuroscience, I found myself in a bit of a bind. I had learned calculus, but I had progressed little beyond pattern recognition: I knew which rules to apply to find solutions to which equations, but the equations themselves lacked any sort of real meaning for me.

So I struggled with understanding how formulas could be implemented in code and why the code I was reading could be described by those formulas. Resources explaining math “for neuroscientists” were unfortunately quite useless for me, because they usually presented the equations needed to describe various neural systems while assuming exactly the basic understanding and intuition I lacked.

Of course, I figured things out eventually (otherwise I wouldn’t be writing about it), but I’m 85% sure I’m not the only one who’s ever struggled with this, and so I wrote the tutorial I wish I could’ve had. If you’re in a similar position, I hope you’ll find it useful. And if not, maybe it helps you get a glimpse into the struggles of the non-math people in your life. Either way, it has cats.
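To give a flavour of the formula-to-code step the tutorial is about, here is the smallest example I can think of (a sketch of mine, not lifted from the post): the leaky membrane equation dV/dt = -(V - E_L)/tau turned into a forward-Euler loop.

```python
# Forward-Euler solution of dV/dt = -(V - E_L) / tau (illustrative sketch).
E_L = -65.0   # resting (leak) potential, mV
tau = 10.0    # membrane time constant, ms
dt = 0.1      # integration step, ms

V = 0.0       # start the membrane far from rest
for _ in range(1000):
    dV_dt = -(V - E_L) / tau   # the right-hand side of the equation
    V = V + dt * dV_dt         # one Euler step: V(t+dt) ≈ V(t) + dt * dV/dt

print(round(V, 2))             # decays towards E_L = -65.0
```

Reading the loop next to the equation is the kind of side-by-side view the tutorial tries to build intuition for.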

neurofrontiers.blog/building-a

Neurofrontiers · Building a virtual neuron - part 1 - Neurofrontiers

#NeuroML is participating in #GSoC2025 again this year under @INCF. We're looking for people with some experience of #ComputationalNeuroscience to work on developing #standardised biophysically detailed computational models using #NeuroML, #PyNN and #OpenSourceBrain.

Please spread the word, especially to students interested in modelling. We will help them learn the NeuroML ecosystem so they can use its standardised pipeline in their work.

docs.neuroml.org/NeuroMLOrg/Ou

CC #AcademicChatter

docs.neuroml.org · Outreach and training — NeuroML Documentation

I'm giving an online talk starting in 15m (as part of UCL's NeuroAI series).

It's on neural architectures and our current line of research trying to figure out what they might be good for (including some philosophy: what might an answer to this question even look like?).

Sign up (free) at this link to get the zoom link:

eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series: A series of NeuroAI themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

With the current situation in the #US, several of my former colleagues there are looking for a #PostDocJob in #Europe, to do #BehaviouralNeuroscience or #ComputationalNeuroscience in #SpatialCognition (or adjacent).
Lots of hashtags I know..

Do you know a #EU or #UK #Neuroscience lab looking to hire a postdoc in these fields? Let me know and I'll pass it on to them!

Edit: adding #RodentResearch and #humanresearch for the species concerned (in this case)

Come along to my (free, online) UCL NeuroAI talk next week on neural architectures. What are they good for? All will finally be revealed and you'll never have to think about that question again afterwards. Yep. Definitely that.

🗓️ Wed 12 Feb 2025
⏰ 2-3pm GMT
ℹ️ Details and registration: eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series: A series of NeuroAI themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

🚀 Neuromatch Academy 2025 is coming! 🚀

📢 Key Dates:
📍 Feb 24 – Applications Open!
📍 Mar 23 – Deadline (Midnight in your timezone)
📍 Mid-April – Decisions Announced
📍 Early May – Enrollment Deadline

Join a global community & dive into Computational Neuroscience, Deep Learning, Comp Tools for Climate Science, or NeuroAI! Don’t miss out—apply & tag a friend! 🌍✨

We are very happy to provide a consolidated update on the #NeuroML ecosystem in our @eLife paper, “The NeuroML ecosystem for standardized multi-scale modeling in neuroscience”: doi.org/10.7554/eLife.95135.3

#NeuroML is a standard and software ecosystem for data-driven biophysically detailed #ComputationalModelling endorsed by the @INCF and CoMBINE, and includes a large community of users and software developers.

#Neuroscience #ComputationalNeuroscience #ComputationalModelling 1/x

What's the right way to think about modularity in the brain? This devilish 😈 question is a big part of my research now, and it started with this paper with @GabrielBena, finally published after the first preprint in 2021!

nature.com/articles/s41467-024

We know the brain is physically structured into distinct areas ("modules"?). We also know that some of these have specialised function. But is there a necessary connection between these two statements? What is the relationship - if any - between 'structural' and 'functional' modularity?

TLDR if you don't want to read the rest: there is no necessary relationship between the two, although when resources are tight, functional modularity is more likely to arise when there's structural modularity. We also found that functional modularity can change over time! Longer version follows.
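For concreteness, the 'structural' side of that distinction can be as simple as a block-structured weight matrix whose modules are only sparsely wired to each other; a throwaway sketch (not the paper's code, numbers invented) of such a structurally modular network:

```python
# Structurally modular recurrent weights: two dense modules joined by a
# sparse inter-module bottleneck (illustrative sketch, not the paper's code).
import numpy as np

rng = np.random.default_rng(1)
n = 20                       # neurons per module
p_between = 0.05             # probability of an inter-module connection

w11 = rng.normal(size=(n, n))                              # module 1, dense
w22 = rng.normal(size=(n, n))                              # module 2, dense
w12 = rng.normal(size=(n, n)) * (rng.random((n, n)) < p_between)
w21 = rng.normal(size=(n, n)) * (rng.random((n, n)) < p_between)

W = np.block([[w11, w12],
              [w21, w22]])   # (40, 40) weight matrix with block structure
print(W.shape)
```

Whether the two modules of such a network end up doing functionally distinct things is exactly the question the paper tackles.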

Nature · Dynamics of specialization in neural modules under resource constraints - Nature Communications: The extent to which structural modularity in neural networks ensures functional specialization remains unclear. Here the authors show that specialization can emerge in neural modules placed under resource constraints but varies dynamically and is influenced by network architecture and information flow.

New preprint! With Swathi Anil and @marcusghosh.

If you want to get the most out of a multisensory signal, you should take its temporal structure into account. But which neural architectures do this best? 🧵👇

biorxiv.org/content/10.1101/20

bioRxiv · Fusing multisensory signals across channels and time: Animals continuously combine information across sensory modalities and time, and use these combined signals to guide their behaviour. Picture a predator watching their prey sprint and screech through a field. To date, a range of multisensory algorithms have been proposed to model this process, including linear and nonlinear fusion, which combine the inputs from multiple sensory channels via either a sum or a nonlinear function. However, many multisensory algorithms treat successive observations independently, and so cannot leverage the temporal structure inherent to naturalistic stimuli. To investigate this, we introduce a novel multisensory task in which we provide the same number of task-relevant signals per trial but vary how this information is presented: from many short bursts to a few long sequences. We demonstrate that multisensory algorithms that treat different time steps as independent perform sub-optimally on this task. However, simply augmenting these algorithms to integrate across sensory channels and short temporal windows allows them to perform surprisingly well, and comparably to fully recurrent neural networks. Overall, our work highlights the benefits of fusing multisensory information across channels and time, shows that small increases in circuit/model complexity can lead to significant gains in performance, and provides a novel multisensory task for testing the relevance of this in biological systems.
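As a toy sketch of the contrast the abstract draws (mine, not the paper's task or model; all numbers invented): fusing two noisy channels at each time step independently versus also pooling over a short temporal window.

```python
# Toy contrast: instantaneous channel fusion vs. fusion across channels and
# a short temporal window (illustrative only, not the paper's task or model).
import numpy as np

rng = np.random.default_rng(0)
T = 200
signal = np.zeros(T)
signal[60:80] = 1.0                        # one long task-relevant "burst"
ch_a = signal + 0.8 * rng.normal(size=T)   # two noisy sensory channels
ch_b = signal + 0.8 * rng.normal(size=T)

fused = ch_a + ch_b                        # linear fusion across channels only
window = np.ones(10) / 10
fused_time = np.convolve(fused, window, mode="same")   # ...and across time

detect = lambda x, thr: (x > thr).astype(float)        # crude threshold detector
print("channels only:", np.mean(detect(fused, 1.0) == signal))
print("channels + time:", np.mean(detect(fused_time, 1.0) == signal))
```

On stimuli with this kind of temporal structure, the temporally pooled detector comes out ahead, which is the intuition behind the paper's comparison of multisensory architectures.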

Please share these 2 PhD positions in #ComputationalNeuroscience co-supervised by Dr. Elisa Massi at the #ETIS Lab, CY Cergy Paris Université (France):

Both positions start in September 2025, but the application deadline is 30 January 2025.

  1. SMaRT-RL: Stress & Hippocampal Replay in Reinforcement Learning: dim-cbrains.fr/en/phd-program/

  2. SYSNEMEHISPIN: Memory Formation with Spiking Neural Models: dim-cbrains.fr/en/phd-program/

Interested candidates can apply after their registration on the DIM C-BRAINS platform: dim-cbrains.fr/en/platform/log

These are funded for 3 years, the normal PhD duration in France.

These scholarships aim to enhance international mobility and are reserved for candidates who have recently studied or worked outside France.

dim-cbrains.fr · DIM C-BRAINS · Cognition and Brain Revolutions: Artificial Intelligence, Neurogenomics, Society

In the spirit of end-of-year celebrations, we've just released Brian 2.8 🌠
Alongside the usual helping of small improvements and fixes, this release comes with an important performance improvement for random number generation in C++ standalone mode.

Head to the release notes for more details! brian2.readthedocs.io/en/2.8.0

brian2.readthedocs.io · Release notes — Brian 2 2.8.0 documentation
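For anyone who hasn't tried it, randomness in Brian models typically enters through the `xi` noise term, and C++ standalone mode is switched on with `set_device`; a minimal example that exercises the random number generator there might look like this (illustrative parameters, and it needs a working C++ compiler):

```python
# Noisy LIF neurons in Brian 2 C++ standalone mode (illustrative sketch).
from brian2 import *

set_device('cpp_standalone')   # generate, compile and run the model as C++

tau = 10*ms
# The xi term draws Gaussian white noise every time step, which is exactly
# where faster random number generation pays off.
eqs = 'dv/dt = (1.1 - v)/tau + 0.2*xi*tau**-0.5 : 1'
G = NeuronGroup(100, eqs, threshold='v > 1', reset='v = 0', method='euler')
mon = SpikeMonitor(G)
run(1*second)
print(mon.num_spikes)
```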