lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

#neuromorphic


The remarkable energy efficiency of the human brain: one #spike every 6 seconds!

In the groundbreaking paper "The Cost of Cortical Computation" published in 2003 in Current Biology, neuroscientist Peter Lennie reached a stunning conclusion about neural activity in the human brain: the average firing rate of cortical neurons is approximately 0.16 Hz—equivalent to just one spike every 6 seconds.

This finding challenges conventional assumptions about neural activity and reveals the extraordinary energy efficiency of the brain's computational strategy. Unconventional? Ask an LLM about it, and it will typically point to a baseline frequency between 0.1 Hz and 10 Hz. Pretty high, and pretty vague, right? But how did Lennie arrive at this remarkable figure?

The Calculation Behind the 0.16 Hz Baseline Rate

Lennie's analysis combines several critical factors:

1. Energy Constraints Analysis

Starting with the brain's known energy consumption (approximately 20% of the body's entire energy budget despite being only 2% of body weight), Lennie worked backward to determine how many action potentials this energy could reasonably support.

2. Precise Metabolic Costs

His calculations incorporated detailed metabolic requirements:

  • Each action potential consumes approximately 3.84 × 10⁹ ATP molecules
  • The human brain uses about 5.7 × 10²¹ ATP molecules daily

3. Neural Architecture

The analysis factored in essential neuroanatomical data:

  • The human cerebral cortex contains roughly 10¹⁰ neurons
  • Each neuron forms approximately 10⁴ synaptic connections

4. Metabolic Distribution

Using cerebral glucose utilization measurements from PET studies, Lennie accounted for energy allocation across different neural processes:

  • Maintaining resting membrane potentials
  • Generating action potentials
  • Powering synaptic transmission

By synthesizing these factors and dividing the available energy budget by the number of neurons and the energy cost per spike, Lennie calculated that cortical neurons can only sustain an average firing rate of approximately 0.16 Hz while remaining within the brain's metabolic constraints.
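
To make the scale concrete, here is a back-of-envelope sketch in Python that turns the quoted figures into per-day and whole-cortex quantities (my arithmetic for illustration, not a reproduction of Lennie's full accounting):

```python
# Back-of-envelope arithmetic from the figures quoted above (illustrative only).
SECONDS_PER_DAY = 86_400
N_NEURONS = 1e10      # cortical neurons (quoted above)
RATE_HZ = 0.16        # Lennie's average cortical firing rate

isi_seconds = 1 / RATE_HZ                              # 6.25 s between spikes
spikes_per_neuron_per_day = RATE_HZ * SECONDS_PER_DAY  # ~13,800 spikes/day
cortex_spikes_per_second = RATE_HZ * N_NEURONS         # ~1.6e9 spikes/s overall

print(f"average inter-spike interval: {isi_seconds:.2f} s")
print(f"spikes per neuron per day: {spikes_per_neuron_per_day:.0f}")
print(f"whole-cortex spikes per second: {cortex_spikes_per_second:.1e}")
```

The 6.25 s inter-spike interval is where the headline figure of "one spike every 6 seconds" comes from.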

Implications for Neural Coding

This extremely low firing rate has profound implications for our understanding of neural computation. It suggests that:

  1. Neural coding must be remarkably sparse — information in the brain is likely represented by the activity of relatively few neurons at any given moment (quantified in the sketch after this list)
  2. Energy efficiency has shaped brain evolution — metabolic constraints have driven the development of computational strategies that maximize information processing while minimizing energy use
  3. Low baseline rates enable selective amplification — this sparse background activity creates a context where meaningful signals can be effectively amplified
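
To put a number on the first point: assuming independent spiking at 0.16 Hz and a 10 ms integration window (the window length is my illustrative assumption, not a figure from the paper), the expected fraction of simultaneously active neurons is tiny:

```python
# Expected fraction of cortical neurons firing within one integration window,
# assuming independent spiking at the average rate. The 10 ms window is an
# illustrative assumption, not a figure from the paper.
RATE_HZ = 0.16
WINDOW_S = 0.010   # 10 ms coincidence/integration window (assumed)
N_NEURONS = 1e10

fraction_active = RATE_HZ * WINDOW_S              # 0.0016, i.e. 0.16 %
neurons_active = fraction_active * N_NEURONS      # still ~1.6e7 neurons

print(f"{fraction_active:.2%} of neurons spike in any 10 ms window")
print(f"that is still about {neurons_active:.1e} neurons at once")
```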

The brain's solution to energy constraints reveals an elegant approach to computation: doing more with less through strategic sparsity rather than constant activity.

This perspective on neural efficiency continues to influence our understanding of brain function and inspires energy-efficient approaches to #ArtificialNeuralNetworks and #neuromorphic computing.

I've been hosting live hacking hours in the open #neuromorphic community every Monday for the past few months, fixing bugs and closing issues.
I'm considering livestreaming the hacking hours and inviting some experts to code along with me. Would you watch that?
Is there anything in particular you would like us to fix?

#Neuromorphic #computing just got more accessible! Our work on a Neuromorphic Intermediate Representation (NIR) is out in @SpringerNature Communications. We demonstrate interoperability with 11 platforms. And more to come!

nature.com/articles/s41467-024

NIR is a data format that is understood by 7 software libraries and 4 hardware platforms. Before NIR, models from one framework could not transfer to another. Now, we can develop software and hardware independently.
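
For a concrete feel, here is a minimal sketch with the nir Python package; the node names and signatures follow my reading of the NIR documentation and may differ in detail from the released API:

```python
import numpy as np
import nir  # the reference Python implementation of NIR

# A tiny two-neuron graph: input -> affine -> LIF -> output.
graph = nir.NIRGraph(
    nodes={
        "input": nir.Input(input_type=np.array([2])),
        "affine": nir.Affine(weight=np.eye(2), bias=np.zeros(2)),
        "lif": nir.LIF(
            tau=np.full(2, 0.01),    # membrane time constants (s)
            r=np.ones(2),            # membrane resistance
            v_leak=np.zeros(2),      # leak potential
            v_threshold=np.ones(2),  # firing threshold
        ),
        "output": nir.Output(output_type=np.array([2])),
    },
    edges=[("input", "affine"), ("affine", "lif"), ("lif", "output")],
)

nir.write("model.nir", graph)     # serialize to the common file format
reloaded = nir.read("model.nir")  # any NIR-aware framework can load this
```

The point is the decoupling: the file written here can be produced by one framework and consumed by another.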

A 🧵 1/3

Neuromorphic intermediate representation: a unified instruction set for interoperable brain-inspired computing (Nature Communications). Neuromorphic software and hardware solutions vary widely, challenging interoperability and reproducibility. Here, the authors establish a representation for neuromorphic computations in continuous time and demonstrate support across 11 platforms.

#newpaper in the next issue of "Neural Networks"!

"A robust event-driven approach to always-on object recognition"

by @antoine_grimaldi, Victor Boutin, Sio-Hoi Ieng, Ryad Benosman and myself - available #opensource at laurentperrinet.github.io/publ

Main contributions:

  • Builds an adaptive, event-based #neuromorphic pattern recognition architecture inspired by neuroscience and capable of always-on decision, i.e., the decision can be made whenever it is needed - just like most living systems! (See the toy sketch below.)
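
Here is a toy Python sketch of the "always-on" idea (my illustration, not the paper's actual architecture): every incoming event updates per-class evidence, so a decision can be read out whenever it is needed instead of at fixed frame boundaries:

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES, N_FEATURES = 3, 16

# Illustrative per-class log-likelihood weights for each event feature
# (in the paper these would be learned; here they are random stand-ins).
weights = rng.normal(size=(N_CLASSES, N_FEATURES))
log_evidence = np.zeros(N_CLASSES)

def on_event(feature_id: int) -> None:
    """Update running evidence with each incoming event (asynchronous)."""
    global log_evidence
    log_evidence += weights[:, feature_id]

def decide() -> int:
    """Always-on readout: callable at any time, not just at frame boundaries."""
    return int(np.argmax(log_evidence))

for _ in range(100):                    # a stream of events...
    on_event(rng.integers(N_FEATURES))
print("current best guess:", decide())  # ...and a decision whenever needed
```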

To highlight Yulia Sandamirskaya's talk:

youtube.com/watch?v=R6fbkvA6yH

"brains do something useful and something unique. So we want to learn from these brains. And it's good to constrain ourselves to really focus on this problem and not just any kind of computer ... so we want to get inspired, but also from all those small brains, inside flies, ... "

"So the first thing is neurons. ... what's important about neurons firing is that we have asynchronous computing when it's needed. So when there's information worth passing over, then we pass it over. There's no clock, no continuous sampling. So we have this sparse information going through, and this is the key."

SPIKING NEURAL NETWORKS!

If you love them, join us at SNUFA24. Free, online workshop, Nov 5-6 (2-6pm CET). Usually ~700 participants.

Invited speakers: Chiara Bartolozzi, David Kappel, Anna Levina, Christian Machens

Posters + 8 contributed talks selected by participant vote.

Abstract submission is quick and easy (300 words max) and is open until the deadline, Sept 27.

Registration is free, but mandatory.

Hope to see you there!

snufa.net/2024/

Dear colleagues,

It's a pleasure to share with you this fully-funded #PhD position in #computational neuroscience in interaction with #neuromorphic engineering and #neuroscience:

laurentperrinet.github.io/post

TL;DR: This PhD subject focuses on combining #attention and #SpikingNeuralNetworks to define new, efficient AI models for embedded systems such as drones, robots, and more generally autonomous systems. The thesis will take place between the LEAT research lab in Sophia-Antipolis and the INT institute in Marseille, which both develop complementary approaches to bio-inspired AI, from neuroscience to embedded systems design.

The application should include:

• Curriculum vitæ,

• Motivation letter,

• Letter of recommendation from the master's supervisor,

and be sent to Benoit Miramond (benoit.miramond@unice.fr), Laurent Perrinet (Laurent.Perrinet@univ-amu.fr), and Laurent Rodriguez (laurent.rodriguez@univ-cotedazur.fr).

Cheers,
Laurent

PS: related references:

  • Emmanuel Daucé, Pierre Albigès, Laurent U Perrinet (2020). A dual foveal-peripheral visual processing model implements efficient saccade selection. Journal of Vision. doi.org/10.1167/jov.20.8.22

  • Jean-Nicolas Jérémie, Emmanuel Daucé, Laurent U Perrinet (2024). Retinotopic Mapping Enhances the Robustness of Convolutional Neural Networks. arxiv.org/abs/2402.15480