lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

Server stats: 68 active users

#auditory

0 posts · 0 participants · 0 posts today
The vOICe vision BCI 🧠🇪🇺
Auditory cortex learns to discriminate audiovisual cues through selective multisensory enhancement (in rats) https://elifesciences.org/articles/102926 "multisensory perceptual learning actively engages #auditory cortex (AC) neurons in both #visual and #audiovisual processing"; #multisensory #neuroscience
The Global Voice
🕟Z #NowPlaying On the half-hour, the second repeat of our half-hour monthly feature. This week it's Odds and Sods with Shawn Klein, a show featuring interesting things and curiosities Shawn has found on the Internet, touching on a variety of subjects. This month: #Auditory #Illusions https://theglobalvoice.info:8443/broadband #TGVRadio #science #music #history #sound 🎤👂🔁
The vOICe vision BCI 🧠🇪🇺
Hierarchical encoding of natural sounds mixtures in ferret auditory cortex https://www.biorxiv.org/content/10.1101/2025.02.15.637892v2 spectrotemporal filter-bank model, higher-order mechanisms, #auditory #neuroscience
The vOICe vision BCI 🧠🇪🇺
Neurons in the inferior colliculus use multiplexing to encode features of frequency-modulated sweeps (in mice) https://www.biorxiv.org/content/10.1101/2025.02.10.637492v1 #auditory #neuroscience
The vOICe vision BCI 🧠🇪🇺
Duration adaptation depends on the perceived rather than physical duration and can be observed across sensory modalities https://journals.sagepub.com/doi/10.1177/03010066251314184 "duration adaptation relies on perceived duration and can occur across sensory modalities"; #crossmodal #multisensory #temporal #perception

"adapting to a subjectively matched #visual stimulus produced a significant aftereffect when the test stimulus was #auditory, indicating the existence of the cross-modal adaptation."
The vOICe vision BCI 🧠🇪🇺
Distinct cortical populations drive #multisensory modulation of segregated #auditory sources https://www.biorxiv.org/content/10.1101/2024.12.23.630079v1
LSP-ENS
We are the *Laboratoire des Systèmes Perceptifs*, a research unit located at the Ecole Normale Supérieure in Paris and attached to @cnrs. We are interested in #visual and #auditory perception, from behavioural, computational, and neural perspectives. #intro #introduction
@psychology #psychophysics
Stefanie Kuchinsky
New perspectives paper w/ Erick Gallun and KC Lee!
We discuss known inconsistencies in the dual-task listening effort literature and suggest that, to move forward, we must first look backward: better integrating models of resource capacity/allocation and task-switching from domains outside hearing research.
#auditory #hearing #listeningeffort #dual-task #multitasking #cognitivepsychology #attention
https://journals.sagepub.com/doi/10.1177/23312165241292215
PLOS Biology
Infants have impressive #auditory learning capabilities, even at day 1. Study shows that #newborns & 6-mo-olds already learn & detect #grammar-like rules; the underlying #BrainNetworks reorganize to be more adult-like after the first half-year #PLOSBiology https://plos.io/3UiaGMg
jonny (good kind)
This is pretty cool: a reptile found that can sense low-frequency sound with the saccule: https://doi.org/10.1016/j.cub.2024.09.016
(I'll edit with a direct PDF link in a sec)

Hearing evolved in fishes, where the swim bladder, as a big resonant cavity, reached out to touch the vestibular organ and kinda vibrate it. That is only good for low frequencies, so to some degree the history of the evolution of audition has been a quest for higher frequencies: thinning out a tympanic membrane, the evolution of the inner ear by stealing jawbones, the enlargement of the braincase to close off the middle ears (our eustachian tubes are vestigial remnants of what used to be an "open passageway" from ear to ear).

Sound is a veridical readout of the matter that produces it, so different frequency ranges contain different kinds of information, and small things, including textures and material composition, are only audible at higher frequency ranges. Low frequencies are important too, but especially with the transition to land, needing to handle the impedance mismatch between fluid-filled bodies and open air makes an organ that can hear a wide range of frequencies challenging.

So the cochlea gets all the attention as the auditory organ because it's one of the most remarkably precise and Scientifically Magical organs out there, but the vestibular system is cool too. It's basically a bag of saltwater and rocks, and when you jangle your head around the rocks touch little hair cells and tell you you're moving.

Because of its torrid history the auditory system is sort of a clusterfuck, but these researchers found direct projections from the saccule through to the auditory midbrain. They're sensitive to vibration (through a surface), not sound (through the air), but they still go to the auditory system, so while we have no idea what the perceptual reality is like, I don't think it is unfair to say that the geckos "hear vibration."

#Audition #Auditory #AuditoryNeuroscience #Neuroscience
Joel Snyder
🚨🚨🚨 Please spread the word about this awesome-sounding program for undergrads doing auditory research, with support for attending the ARO conference and summer funds for research 🤑🤑🤑! https://aro.org/aro-scholars-program/

#science #neuroscience #auditory #hearing #cochlearimplants #deafness #STEM
T. T. Perry
Amplitude fluctuations in a masker influence lexical segmentation in cochlear implant users
Perry and Kwon, 2015

"CI listeners showing little or no masking release are not reliably segregating speech from competing sounds, further suggesting that one challenge faced by CI users listening in noisy environments is a reduction of reliable segmentation cues."

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4417024/

#Speech #Linguistics #CochlearImplant #Auditory #Science
T. T. Perry
Neural encoding of linguistic speech cues is unaffected by cognitive decline, but decreases with increasing hearing impairment
Bolt & Giroud, 2024, Sci. Reports

"These results suggest that while speech processing markers remain unaffected by cognitive decline and hearing loss per se, neural encoding of word-level segmented speech features in older adults is affected by hearing loss but not by cognitive decline."

https://www.nature.com/articles/s41598-024-69602-1

#Speech #Linguistics #HearingLoss #Auditory #Cognition
jonny (good kind)
Samuel Mehr and the Music Lab people sent this experiment out to the auditory listserv re: human vs. AI-generated music; it is both fun and goes in a direction I didn't expect. These folks do some of the few crowdsourced research projects I really like, so I'm curious what they're after here:

https://www.themusiclab.org/quizzes/dafi

#neuroscience #auditory
Stefanie Kuchinsky
Our chapter "Listening difficulty: From hearing to language" is out in the book series The Psychology of Learning and Motivation! We highlight often overlooked connections between cognitive hearing science and psycholinguistics and make recommendations for moving the fields forward together.

#hearing #auditory #language #cognitivehearingscience #psycholinguistics #speechinnoise #listeningeffort #bilingualism #individualdifferences

https://authors.elsevier.com/a/1jbdyI8Pe%7EOHl
PLOS Biology
One's emotional response to music can change with one's #cognitive state. @mokazuma @zatorrelab reveal that pleasure from #music is in part determined by the state of resting #auditory-#reward brain networks prior to listening #PLOSBiology https://plos.io/3AvzPfq
T. T. Perry
@erictopol Eh, this is not particularly persuasive on its own:

"Secondly, due to data restrictions, this study did not include audiological data such as pure tone thresholds or speech audiometry of the diagnosed patients."

I don't put a great deal of trust in studies that rely on retrospective analysis of diagnostic codes.

#Auditory #Audiology #COVID #Science #HearingLoss
Dan Goodman
New preprint on our "collaborative modelling of the brain" (COMOB) project. Over the last two years, a group of us (led by @marcusghosh) have been working together, openly, online, with anyone free to join, on a computational neuroscience research project.

https://www.biorxiv.org/content/10.1101/2024.07.19.604252v1

This was an experiment in a more bottom-up, collaborative way of doing science, rather than the hierarchical PI-led model. So how did we do it?

We started from the tutorial I gave at @CosyneMeeting 2022 on spiking neural networks, which included a starter Jupyter notebook that let you train a spiking neural network model on a sound localisation task.

https://neural-reckoning.github.io/cosyne-tutorial-2022/

https://www.youtube.com/watch?v=GTXTQ_sOxak&list=PL09WqqDbQWHGJd7Il3yVxiBts5nRSxvJ4&index=1

Participants were free to use and adapt this to any question they were interested in (we gave some ideas for starting points, but there was no constraint). Participants worked in groups or individually, sharing their work on our repository and joining us for monthly meetings.

The repository was set up to automatically build a website using @mystmarkdown showing the current work in progress of all projects, and (later in the project) the paper as we wrote it. This kept everyone up to date with what was going on.

https://comob-project.github.io/snn-sound-localization/

We started from a simple feedforward network of leaky integrate-and-fire neurons, but others adapted it to include learnable delays, alternative neuron models, and biophysically detailed models, incorporated Dale's law, etc.

We found some interesting results, including that shorter time constants improved performance (consistent with what we see in the auditory system). Surprisingly, the network seemed to be using an "equalisation cancellation" strategy rather than the expected coincidence detection.

Ultimately, our scientific results were not incredibly strong, but we think this was a valuable experiment for a number of reasons. Firstly, it shows that there are other ways of doing science. Secondly, many people got to engage in a research experience they otherwise wouldn't have. Several participants have been motivated to continue their work beyond this project. It also proved useful for generating teaching material, and a number of MSc projects were based on it.

With that said, we learned some lessons about how to do this better, and yes, we will be doing this again (call for participation in September/October, hopefully). The main challenge will be to keep the project more focussed without making it top-down/hierarchical.

We believe this is possible, and we are inspired by the recent success of the Busy Beaver challenge, a bottom-up project of mathematics amateurs that found a proof of a 40-year-old conjecture.

https://www.quantamagazine.org/amateur-mathematicians-find-fifth-busy-beaver-turing-machine-20240702/

We will be calling for proposals for the next project, engaging in an open discussion with all participants to refine the ideas before starting, and then inviting the proposer of the most popular project to act as a 'project lead' keeping it focussed without being hierarchical.

If you're interested in being involved in that, please join our (currently fairly quiet) new Discord server, or follow me or @marcusghosh for announcements.

https://discord.gg/kUzh5MHjVE

I'm excited for a future where scientists work more collaboratively, and where everyone can participate. Diversity will lead to exciting new ideas and progress. Computational science has huge potential here, something we're also pursuing at @neuromatch.

Let's make it happen!

#neuroscience #computationalscience #computationalneuroscience #compneuro #science #metascience #SpikingNeuralNetworks #auditory
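[For readers unfamiliar with the starter task the thread describes, here is a minimal sketch, in Python, of the coincidence-detection idea behind ITD-based sound localisation with a leaky integrate-and-fire (LIF) unit. This is not the COMOB or tutorial code; the function name and all parameter values are illustrative assumptions. It also shows one intuition for the thread's time-constant result: with a short membrane time constant the unit fires only when left/right spikes arrive nearly together.]

# Minimal sketch (assumed, not from the preprint): one discrete-time LIF unit
# driven by two input spike trains whose relative delay plays the role of an
# interaural time difference (ITD).
import numpy as np

def lif_coincidence_count(spikes_left, spikes_right, tau=1e-3, dt=1e-4,
                          w=0.6, v_th=1.0):
    """Count output spikes of a LIF unit driven by two input spike trains.

    tau is the membrane time constant (s); a shorter tau makes the unit
    fire only for near-coincident inputs, i.e. sharper ITD tuning.
    """
    v, n_out = 0.0, 0
    decay = np.exp(-dt / tau)            # per-step membrane leak
    for sl, sr in zip(spikes_left, spikes_right):
        v = v * decay + w * (sl + sr)    # leaky integration of both inputs
        if v >= v_th:                    # threshold crossing: spike and reset
            n_out += 1
            v = 0.0
    return n_out

# 200 Hz periodic input spikes; the "right ear" copy is delayed by the ITD.
dt, duration = 1e-4, 0.5
n = int(duration / dt)
left = np.zeros(n, dtype=bool)
left[::int(5e-3 / dt)] = True
for itd_steps in (0, 2, 5):              # ITDs of 0.0, 0.2, 0.5 ms
    right = np.roll(left, itd_steps)
    print(f"{itd_steps * dt * 1e3:.1f} ms ITD ->",
          lif_coincidence_count(left, right), "output spikes")

[With tau around 1 ms, this unit fires for the 0.0 and 0.2 ms delays but stays silent at 0.5 ms; lengthening tau blurs that distinction, since the membrane integrates over a wider window. Note the thread reports the trained network actually converged on an "equalisation cancellation" strategy rather than this expected coincidence detection.]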
Joel Snyder
Research by Gary Lupyan on individual differences in our inner voices

https://www.wbur.org/hereandnow/2024/07/19/inner-voice

#neuroscience #psychology #voice #speech #auditory #imagery
T. T. Perry
Research I contributed to is being presented today at the Meeting of the Acoustical Society of America in Ottawa.

"A Spatial Digit Task for assessing binaural function in individuals with hearing loss"

Brungart, Davidson, Clark, and Perry, 2024.

"the SDT proves valuable for identifying individuals struggling using ITD cues to segregate and localize simultaneously-presented speech signals."

#Auditory #Hearing #Acoustics #Audiology #science