lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

Server stats:

70
active users

#LLMs

18 posts12 participants0 posts today
AI6YR Ben<p>All perfectly normal in the world... 😬 </p><p>WaPo: Tinder lets you flirt with AI characters. Three of them dumped me.</p><p><a href="https://m.ai6yr.org/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://m.ai6yr.org/tags/llms" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llms</span></a> <a href="https://m.ai6yr.org/tags/dontdaterobots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>dontdaterobots</span></a></p>
Will Berard 🫳🎤🫶<p>Posting another <a href="https://mastodon.acm.org/tags/Introduction" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Introduction</span></a> - plz boost far and/or wide! </p><p><a href="https://mastodon.acm.org/tags/French" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>French</span></a>-Born, <a href="https://mastodon.acm.org/tags/London" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>London</span></a>-Based CompSci Teacher/Education PhD</p><p><a href="https://mastodon.acm.org/tags/Education" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Education</span></a> <a href="https://mastodon.acm.org/tags/Research" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Research</span></a> <a href="https://mastodon.acm.org/tags/Phd" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Phd</span></a>, <a href="https://mastodon.acm.org/tags/BCS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>BCS</span></a> <a href="https://mastodon.acm.org/tags/Computing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Computing</span></a> <a href="https://mastodon.acm.org/tags/Teacher" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Teacher</span></a> <a href="https://mastodon.acm.org/tags/CCT" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CCT</span></a><br><a href="https://mastodon.acm.org/tags/CSEd" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CSEd</span></a> <a href="https://mastodon.acm.org/tags/Programming" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Programming</span></a> <a href="https://mastodon.acm.org/tags/BCS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>BCS</span></a><br><a href="https://mastodon.acm.org/tags/ActuallyAutistic" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ActuallyAutistic</span></a><br><a href="https://mastodon.acm.org/tags/ActuallyADHD" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ActuallyADHD</span></a><br>I live with <a href="https://mastodon.acm.org/tags/MultipleSclerosis" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MultipleSclerosis</span></a> <br><a href="https://mastodon.acm.org/tags/Zen" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Zen</span></a> / <a href="https://mastodon.acm.org/tags/Nonduality" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Nonduality</span></a> <a href="https://mastodon.acm.org/tags/Buddhist" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Buddhist</span></a>, weirdly into <a href="https://mastodon.acm.org/tags/Jung" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Jung</span></a><br><a href="https://mastodon.acm.org/tags/Research" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Research</span></a> topics:<br>- <a href="https://mastodon.acm.org/tags/EdAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>EdAI</span></a> / <a href="https://mastodon.acm.org/tags/AIEd" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AIEd</span></a> - <a href="https://mastodon.acm.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> in <a href="https://mastodon.acm.org/tags/Education" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Education</span></a><br>- <a href="https://mastodon.acm.org/tags/CriticalStudies" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CriticalStudies</span></a> of <a href="https://mastodon.acm.org/tags/EdTech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>EdTech</span></a><br>- <a href="https://mastodon.acm.org/tags/Neurodiversity" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Neurodiversity</span></a> in <a href="https://mastodon.acm.org/tags/Education" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Education</span></a>, and the experience of ND educators.</p>
Holle Meding<p>📚 Extracting Citations with LLMs</p><p>At the <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> for HPSS workshop, <span class="h-card" translate="no"><a href="https://sciences.social/@cmboulanger" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>cmboulanger</span></a></span> David Carreto Fidalgo &amp; Andreas Wagner presented LLaMore: a Python tool for extracting citation data from unstructured legal &amp; humanities texts using <a href="https://mastodon.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> </p><p>Unlike GROBID, LLaMore handles complex footnotes and free-form references. Early results with GPT-4o and Llama 3.3 show significantly higher accuracy when benchmarked against a new gold standard TEI-annotated dataset.</p><p><a href="https://mastodon.social/tags/TEI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TEI</span></a> <a href="https://mastodon.social/tags/openscience" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>openscience</span></a> <span class="h-card" translate="no"><a href="https://wisskomm.social/@maxplanckgesellschaft" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>maxplanckgesellschaft</span></a></span></p>
José A. Alonso<p>LeanSolver: Solving theorems through large language models and search. ~ Avi Luciano Halevy <a href="https://repository.tudelft.nl/file/File_a98b6c93-4017-42c5-bb8f-df68da0d7034" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">repository.tudelft.nl/file/Fil</span><span class="invisible">e_a98b6c93-4017-42c5-bb8f-df68da0d7034</span></a> <a href="https://mathstodon.xyz/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://mathstodon.xyz/tags/ITP" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ITP</span></a> <a href="https://mathstodon.xyz/tags/LeanProver" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LeanProver</span></a></p>
José A. Alonso<p>Readings shared April 2, 2025. <a href="https://jaalonso.github.io/vestigium/posts/2025/04/02-readings_shared_04-02-25" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">jaalonso.github.io/vestigium/p</span><span class="invisible">osts/2025/04/02-readings_shared_04-02-25</span></a> <a href="https://mathstodon.xyz/tags/FunctionalProgramming" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>FunctionalProgramming</span></a> <a href="https://mathstodon.xyz/tags/HOL4" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HOL4</span></a> <a href="https://mathstodon.xyz/tags/Haskell" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Haskell</span></a> <a href="https://mathstodon.xyz/tags/ITP" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ITP</span></a> <a href="https://mathstodon.xyz/tags/IsabelleHOL" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>IsabelleHOL</span></a> <a href="https://mathstodon.xyz/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://mathstodon.xyz/tags/LeanProver" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LeanProver</span></a> <a href="https://mathstodon.xyz/tags/Logic" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Logic</span></a> <a href="https://mathstodon.xyz/tags/LogicProgramming" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LogicProgramming</span></a> <a href="https://mathstodon.xyz/tags/Math" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Math</span></a> <a href="https://mathstodon.xyz/tags/Mathlib" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Mathlib</span></a> <a href="https://mathstodon.xyz/tags/Prolog" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Prolog</span></a></p>
Holle Meding<p>📰 Classifying Genre in Historical Medical Periodicals</p><p>Next in line: Vera Danilova presents her work on genre classification in digitized periodicals from European patient organizations (1951–1990) using <a href="https://mastodon.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> as part of the <a href="https://mastodon.social/tags/ActDisease" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ActDisease</span></a> project. </p><p>🔹 XLM-RoBERTa (UDM) led Q&amp;A tasks with 32% more correct answers than mBERT/hmBERT.<br>🔹 hmBERT (UDM) topped Administrative classification (+16%)<br>🔹 CORE-based models excelled in legal genre prediction. </p><p><a href="https://mastodon.social/tags/DigitalHumanities" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DigitalHumanities</span></a> <span class="h-card" translate="no"><a href="https://wisskomm.social/@tuberlin" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>tuberlin</span></a></span> <a href="https://mastodon.social/tags/classification" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>classification</span></a> <a href="https://mastodon.social/tags/NLP" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>NLP</span></a></p>
Holle Meding<p>🔍 Large-Scale Text Analysis &amp; Cultural Change </p><p>In their talk at the workshop “Large Language Models for the HPSS” <span class="h-card" translate="no"><a href="https://wisskomm.social/@tuberlin" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>tuberlin</span></a></span> Pierluigi Cassotti and Nina Tahmasebi presented a multi-method approach to studying cultural and societal change through large-scale text analysis. </p><p>By combining close reading with computational techniques, including but not limited to <a href="https://mastodon.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> , they demonstrate how diverse tools can be integrated to uncover shifts in language. <a href="https://mastodon.social/tags/DigitalHumanities" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DigitalHumanities</span></a></p>
José A. Alonso<p>Proof or bluff? Evaluating LLMs on 2025 USA math olympiad. ~ Ivo Petrov et als. <a href="https://arxiv.org/abs/2503.21934" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2503.21934</span><span class="invisible"></span></a> <a href="https://mathstodon.xyz/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://mathstodon.xyz/tags/Math" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Math</span></a></p>
José A. Alonso<p>Readings shared April 1, 2025. <a href="https://jaalonso.github.io/vestigium/posts/2025/04/01-readings_shared_04-01-25" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">jaalonso.github.io/vestigium/p</span><span class="invisible">osts/2025/04/01-readings_shared_04-01-25</span></a> <a href="https://mathstodon.xyz/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mathstodon.xyz/tags/Haskell" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Haskell</span></a> <a href="https://mathstodon.xyz/tags/ITP" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ITP</span></a> <a href="https://mathstodon.xyz/tags/IsabelleHOL" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>IsabelleHOL</span></a> <a href="https://mathstodon.xyz/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://mathstodon.xyz/tags/LeanProver" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LeanProver</span></a> <a href="https://mathstodon.xyz/tags/Logic" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Logic</span></a> <a href="https://mathstodon.xyz/tags/LogicProgramming" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LogicProgramming</span></a> <a href="https://mathstodon.xyz/tags/Math" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Math</span></a> <a href="https://mathstodon.xyz/tags/Prolog" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Prolog</span></a> <a href="https://mathstodon.xyz/tags/SMT" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SMT</span></a> <a href="https://mathstodon.xyz/tags/Z3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Z3</span></a></p>
Bibliolater 📚 📜 🖋<p>🔴 💻 **Are chatbots reliable text annotators? Sometimes**</p><p>“_Given the unreliable performance of ChatGPT and the significant challenges it poses to Open Science, we advise caution when using ChatGPT for substantive text annotation tasks._”</p><p>Ross Deans Kristensen-McLachlan, Miceal Canavan, Marton Kárdos, Mia Jacobsen, Lene Aarøe, Are chatbots reliable text annotators? Sometimes, PNAS Nexus, Volume 4, Issue 4, April 2025, pgaf069, <a href="https://doi.org/10.1093/pnasnexus/pgaf069" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.1093/pnasnexus/pgaf</span><span class="invisible">069</span></a>. </p><p><a href="https://qoto.org/tags/OpenAccess" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenAccess</span></a> <a href="https://qoto.org/tags/OA" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OA</span></a> <a href="https://qoto.org/tags/Article" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Article</span></a> <a href="https://qoto.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://qoto.org/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://qoto.org/tags/LargeLanguageModels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LargeLanguageModels</span></a> <a href="https://qoto.org/tags/LLMS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMS</span></a> <a href="https://qoto.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://qoto.org/tags/Technology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Technology</span></a> <a href="https://qoto.org/tags/Tech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Tech</span></a> <a href="https://qoto.org/tags/Data" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Data</span></a> <a href="https://qoto.org/tags/Annotation" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Annotation</span></a> <a href="https://qoto.org/tags/Academia" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Academia</span></a> <a href="https://qoto.org/tags/Academics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Academics</span></a> <span class="h-card"><a href="https://a.gup.pe/u/ai" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>ai</span></a></span></p>
Leshem Choshen<p>🚨 BREAKING: <br>@TheAcornAI<br> just dropped the first <br>test-time learning pretrained model! 🚀</p><p>It learns on the fly, interacts<br>adapts to you, and outsmarts anything before it.<br> Oh, and it's OPEN. 👀🔓</p><p>The future just got smarter. <br><a href="https://sigmoid.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://sigmoid.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MachineLearning</span></a> <a href="https://sigmoid.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a><br>🤖📈</p>
Antonio Lieto<p>Happy birthday to Cognitive Design for Artificial Minds (<a href="https://lnkd.in/gZtzwDn3" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">lnkd.in/gZtzwDn3</span><span class="invisible"></span></a>) that was released 4 years ago!</p><p>Since then its ideas have been presented and discussed widely in the research fields of AI/Cognitive Science/Robotics and - nowadays - both the possibilities and the limitations of: <a href="https://fediscience.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a>, <a href="https://fediscience.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> and <a href="https://fediscience.org/tags/ReinforcementLearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ReinforcementLearning</span></a> (already envisioned and discussed in the book) have become a common topic of research interests in the AI community and beyond. <br>Similarly also the topic concerning the evaluation - in human-like and human-level terms - of the current AI systems has become a critical theme related to the problem Anthropomorphic interpretation of AI output (see e.g. <a href="https://lnkd.in/dVi9Qf_k" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">lnkd.in/dVi9Qf_k</span><span class="invisible"></span></a> ). <br>Book reviews have been published on ACM Computing Reviews (2021) <a href="https://lnkd.in/dWQpJdkV" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">lnkd.in/dWQpJdkV</span><span class="invisible"></span></a> and on Argumenta (2023): <a href="https://lnkd.in/derH3VKN" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">lnkd.in/derH3VKN</span><span class="invisible"></span></a></p><p>I have been invited to present the content of the book in over 20 official scientific events in international conferences, Ph.D Schools in US, China, Japan, Finland, Germany, Sweden, France, Brazil, Poland, Austria and, of course, Italy. </p><p>A news I am happy to share is that Routledge/Taylor &amp; Francis contacted me few weeks ago for a second edition! Stay tuned!</p><p>The <a href="https://fediscience.org/tags/book" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>book</span></a> is available in many webstores:<br>- Routledge: <a href="https://lnkd.in/dPrC26p" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">lnkd.in/dPrC26p</span><span class="invisible"></span></a><br>- Taylor &amp; Francis: <a href="https://lnkd.in/dprVF2w" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">lnkd.in/dprVF2w</span><span class="invisible"></span></a><br>- Amazon: <a href="https://lnkd.in/dC8rEzPi" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">lnkd.in/dC8rEzPi</span><span class="invisible"></span></a></p><p><span class="h-card" translate="no"><a href="https://a.gup.pe/u/academicchatter" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>academicchatter</span></a></span> <span class="h-card" translate="no"><a href="https://a.gup.pe/u/cognition" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>cognition</span></a></span> <br><a href="https://fediscience.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://fediscience.org/tags/minimalcognitivegrid" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>minimalcognitivegrid</span></a> <a href="https://fediscience.org/tags/CognitiveAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CognitiveAI</span></a> <a href="https://fediscience.org/tags/cognitivescience" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cognitivescience</span></a> <a href="https://fediscience.org/tags/cognitivesystems" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cognitivesystems</span></a></p>
José A. Alonso<p>STP: Self-play LLM theorem provers with iterative conjecturing and proving. ~ Kefan Dong, Tengyu Ma. <a href="https://arxiv.org/abs/2502.00212" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2502.00212</span><span class="invisible"></span></a> <a href="https://mathstodon.xyz/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mathstodon.xyz/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://mathstodon.xyz/tags/ITP" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ITP</span></a> <a href="https://mathstodon.xyz/tags/LeanProver" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LeanProver</span></a></p>
José A. Alonso<p>The cultural divide between mathematics and AI (A reflection on cultural differences observed at the 2025 Joint Mathematics Meeting). ~ Ralph Furman. <a href="https://sugaku.net/content/understanding-the-cultural-divide-between-mathematics-and-ai/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">sugaku.net/content/understandi</span><span class="invisible">ng-the-cultural-divide-between-mathematics-and-ai/</span></a> <a href="https://mathstodon.xyz/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mathstodon.xyz/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://mathstodon.xyz/tags/Math" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Math</span></a></p>
José A. Alonso<p>The disconnect between AI benchmarks and math research (Evaluating AI systems on their ability to be a mathematical copilot). ~ Ralph Furman. <a href="https://sugaku.net/content/ai-benchmarks-vs-real-math-research/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">sugaku.net/content/ai-benchmar</span><span class="invisible">ks-vs-real-math-research/</span></a> <a href="https://mathstodon.xyz/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mathstodon.xyz/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://mathstodon.xyz/tags/Math" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Math</span></a></p>
Kathy Reid<p>ICYMI: I'll be talking at the Melbourne <a href="https://aus.social/tags/ML" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ML</span></a> and <a href="https://aus.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> Meetup in a couple weeks' time about the <a href="https://aus.social/tags/TokenWars" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TokenWars</span></a> - the conflict for data to train LLMs and the fight by IP rights holders to protect their data from scrapers. </p><p>Come learn about how <a href="https://aus.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> are trained on huge volumes of tokens with transformers, why those tokens are becoming more economically valuable, and what you can do to protect your token treasure. </p><p>You'll never look at ChatGPT or data the same way again. </p><p>Huge thanks to <span class="h-card" translate="no"><a href="https://mastodon.social/@jonoxer" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>jonoxer</span></a></span> for the recommend, and to Lizzie Silver for the behind the scenes wrangling.</p><p><a href="https://www.meetup.com/machine-learning-ai-meetup/events/306548300" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">meetup.com/machine-learning-ai</span><span class="invisible">-meetup/events/306548300</span></a></p>
Lianna (on Mastodon)<p>I entered the <a href="https://mastodon.gamedev.place/tags/ComputationalLinguistics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ComputationalLinguistics</span></a> field in 2018 by enrolling for a Bachelor's degree.</p><p>Since then, a lot has changed. Almost all the things we learned about, programmed in practice and did research on are now nearly irrelevant in our day-to-day.</p><p>Everything is <a href="https://mastodon.gamedev.place/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> now. Every paper, every course, every student project.</p><p>And the newly enrolled students changed, too. They're no longer language nerds, they're <a href="https://mastodon.gamedev.place/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> bros.</p><p>I miss <a href="https://mastodon.gamedev.place/tags/CompLing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CompLing</span></a> before ChatGPT.</p><p><a href="https://mastodon.gamedev.place/tags/academia" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>academia</span></a> <a href="https://mastodon.gamedev.place/tags/science" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>science</span></a> <a href="https://mastodon.gamedev.place/tags/linguistics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>linguistics</span></a> <a href="https://mastodon.gamedev.place/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a></p>
Ulrike Hahn<p>Join us for our next CCCM "The Cognitive Science of Generative AI" seminar:</p><p>"Foundation models of human cognition"<br>Marcel Binz,<br>Helmholtz, Munich <br>Tuesday, April 1st, 16;00 BST, online</p><p>registration: <a href="https://psyc.bbk.ac.uk/cccm/cccm-seminar-series/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">psyc.bbk.ac.uk/cccm/cccm-semin</span><span class="invisible">ar-series/</span></a></p><p>Abstract: Most cognitive models are domain-specific, meaning that their scope is restricted to a single type of problem. The human mind, on the other hand, does not work like this – it is a unified system whose processes are deeply intertwined. In this talk, I will present my ongoing work on foundation models of human cognition: models that cannot only predict behavior in a single domain but that instead offer a truly universal take on our mind. Furthermore, I outline my vision for how to use such behaviorally predictive models to advance our understanding of human cognition, as well as how they can be scaled to naturalistic environments.</p><p><a href="https://fediscience.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://fediscience.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <span class="h-card" translate="no"><a href="https://a.gup.pe/u/cogsci" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>cogsci</span></a></span> <span class="h-card" translate="no"><a href="https://a.gup.pe/u/philosophy" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>philosophy</span></a></span></p>
Bibliolater 📚 📜 🖋<p>🔴 💻 **“Turning right”? An experimental study on the political value shift in large language models**</p><p>“_Our findings reveal that while newer versions of ChatGPT consistently maintain values within the libertarian-left quadrant, there is a statistically significant rightward shift in political values over time, a phenomenon we term a ‘value shift’ in large language models._”</p><p>Liu, Y., Panwang, Y. &amp; Gu, C. “Turning right”? An experimental study on the political value shift in large language models. Humanit Soc Sci Commun 12, 179 (2025). <a href="https://doi.org/10.1057/s41599-025-04465-z" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.1057/s41599-025-044</span><span class="invisible">65-z</span></a>. </p><p><a href="https://qoto.org/tags/OpenAccess" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenAccess</span></a> <a href="https://qoto.org/tags/OA" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OA</span></a> <a href="https://qoto.org/tags/Article" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Article</span></a> <a href="https://qoto.org/tags/DOI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DOI</span></a> <a href="https://qoto.org/tags/Rightwing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Rightwing</span></a> <a href="https://qoto.org/tags/Politics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Politics</span></a> <a href="https://qoto.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://qoto.org/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://qoto.org/tags/Technology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Technology</span></a> <a href="https://qoto.org/tags/Tech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Tech</span></a> <a href="https://qoto.org/tags/LLMS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMS</span></a> <a href="https://qoto.org/tags/Academia" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Academia</span></a> <a href="https://qoto.org/tags/Academics" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Academics</span></a> <span class="h-card"><a href="https://a.gup.pe/u/ai" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>ai</span></a></span></p>
Paco Hope #resist<p>If anybody out there is working on using <a href="https://infosec.exchange/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> or <a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> to analyze <a href="https://infosec.exchange/tags/security" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>security</span></a> events in AWS, I wonder if you're considering bullshit attacks via event injection. Let me explain. I'm openly musing about something I don't know much about.</p><p>You might be tempted to pipe a lot of EventBridge events into some kind of AI that analyzes them looking for suspicious events. Or you might hook up to CloudWatch log streams and read log entries from, say, your lambda functions looking for suspicious errors and output.</p><p>LLMs are going to be terrible at validating message authenticity. If you have a lambda that is doing something totally innocuous, but you make it <code>print()</code> some JSON that looks just like a GuardDuty finding, that JSON will end up in the lambda function's CloudWatch log stream. Then if you're piping CloudWatch Logs into an LLM, I don't think it will be smart enough to say "wait a minute, why is JSON that looks like a GuardDuty finding being emitted by this lambda function on its stdout?"</p><p>You and I would say "that's really weird. That JSON shouldn't be here in this log stream. Let's go look at what that lambda function is doing and why it's doing that." (Oh, it's Paco and he's just fucking with me) I think an LLM is far more likey to react "<em>Holy shit! there's a really terrible GuardDuty finding!</em> Light up the pagers! Red Alert!"</p><p>Having said this, I'm <strong>not</strong> doing this myself. I don't have any of my <a href="https://infosec.exchange/tags/AWS" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AWS</span></a> logging streaming into any kind of <a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a>. So maybe it's better than I think it is. But LLMs are notoriously bad at ignoring anything in their input stream. They tend to take it all at face value and treat it all as legit.</p><p>You might even try this with your <a href="https://infosec.exchange/tags/SIEM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SIEM</span></a> . Is it smart enough to ignore things that show up in the wrong context? Could you emit the JSON of an AWS security event in, say, a Windows Server Event Log that goes to your SIEM? Would it react as if that was a legit event? If you don't even use AWS, wouldn't it be funny if your SIEM responds to this JSON as if it was a big deal?</p><p>I'm just pondering this, and I'll credit the source: I'm evaluating an internal bedrock-based threat modelling tool and it spit out the phrase "EventBridge Event Injection." I thought "<strong>oh shit</strong> that's a whole class of issues I haven't thought about."</p>