lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

Server stats: 54 active users

#LLMs

44 posts · 33 participants · 1 post today
Tom Stafford: Newsletter, my latest installment in the "how to think about the new AI models" series on Reasonable People. https://open.substack.com/pub/tomstafford/p/large-language-models-and-the-amazon #LLMs #AI #CogSci
Martin Hamilton, DECT:8080@WHY: I'll be talking about #LLMs and #GenAI at #WHY2025, stage Andromeda on Tuesday at 11am CEST. Come along or watch online, and don't forget to bring your #Tetrapod! :why2025: :tetrapod:
#ProjectCarryall #ProjectPlowshare #ClumsyMetaphor #SearchClub
Sharon Machlis: Plus ($20/mo.) ChatGPT users can choose to use GPT-4o and not be forced to use GPT-5 without testing it first. Go to Settings > General and switch "Show legacy models" on.
Not available for Free users, at least yet.
I'll try to make this it for GPT-5 posts today 😅
#GenAI #ChatGPT #LLMs #GPT5
Sharon Machlis: Trying to upload a file with an .R extension in the ChatGPT GPT-5 Web interface throws an error. In case I was wondering how interested OpenAI is in supporting #RStats programming.
#GenAI #LLMs
Philo Sophies: #Zoomposium with Dr. #Gabriele #Scheler: "The #language of the #brain - or how #AI can learn from #biological #language #models"
There is a #paradigmshift away from the purely information-technological-mechanistic, purely data-driven #Big #Data concept of #LLMs towards increasingly information-biological-polycontextural, structure-driven #artificial #neural #networks (#KNN) concepts.
More at: https://philosophies.de/index.php/2024/11/18/sprache-des-gehirns/
or: https://youtu.be/forOGk8k0W8
Sharon Machlis: My early impressions of the ChatGPT Web UI with GPT-5 are pretty negative for an #RStats project I'm working on. Code that doesn't work, not understanding context & follow-up questions. Am guessing I was routed to the less capable mini or nano models at times.
I still like Claude Opus 4.1, but I bump up against Web limits quickly. Google Gemini 2.5 Pro is promising with a lot of context and instructions. Its context window is 5X larger than Opus.
#GenAI #LLMs
Mark Carrigan: The gap between student GenAI use and the support students are offered

I argued a couple of days ago that the sector is unprepared (https://markcarrigan.net/2025/08/08/are-uk-universities-ready-to-cope-with-generative-ai-in-the-25-26-academic-year/) for our first academic year in which the use of generative AI is completely normalised amongst students. HEPI found 92% of undergraduates using LLMs this year (https://www.hepi.ac.uk/2025/02/26/student-generative-ai-survey-2025/), up from 66% the previous year, which matches Advance HE's finding of 62% using AI in their studies "in a way that is allowed by their university" (huge caveat). This largely accords with my own experience: last year LLMs became mainstream amongst students, and this year I expect them to become a near-uniform phenomenon.

The problem arises from the gap between near-uniform use of LLMs in some way and the lack of support being offered. Only 36% of students in the HEPI survey said they had been offered support by their university: a 56-point gap. Only 26% of students say their university provides access to AI tools: a 66-point gap. This is particularly problematic because we have evidence that wealthier students tend to use LLMs more, and in more analytical and reflective ways. They are more likely to use LLMs in a way that supports rather than hinders learning.

How do we close that gap between student LLM use and the support students are offered? My concern is that centralised training will tend towards either banality or irrelevance, because the objective of GenAI training for students needs to be how to learn with LLMs rather than outsource learning to them. There are general principles which can be offered here, but the concrete questions which have to be answered for students will vary between disciplinary areas:

- What are students in our discipline using AI for, which tools, at what stages of their work?
- Which foundational skills and ways of thinking in our discipline are enhanced vs threatened by AI use?
- When does AI use shift from "learning with" to "outsourcing learning" in our specific field?
- What forms of assessment still make sense, and what new approaches do we need in an AI-saturated environment?
- What discipline-specific scaffolding helps students use AI as a thinking partner rather than a thinking replacement?

Furthermore, answering these questions is a process taking place in relation to changes in the technology and the culture emerging around it. Even if those changes are now slowing down, they are certainly not stopping. We need infrastructure for continuous adaptation in a context where the sector is already in crisis for entirely unrelated reasons. Furthermore, that infrastructure has to willingly enrol academics in a way consistent with their workload and outlook. My sense is that we have to find ways of embedding this within existing conversations and processes. The only way to do this, I think, is to genuinely give academics voice within the process, finding ways to network existing interactions so that norms and standards emerge from practice rather than the institution expecting practice to adapt to another centrally imposed policy.

#higherEducation #technology #university #academic #students #generativeAI #malpractice #LLMs #HEPI
Simon Brooke: @Platform_Journalism This ain't rocket science. Current generation #LLMs have no semantic layer: they have no model of the world, and no concept of truth. When they are right, it's entirely by accident.
They generate sequences of tokens. The sequences of tokens they generate are probable, given the statistical distribution of similar sequences in texts they have ingested. But there is no concept of meaning there.
People use them in research because they also don't care about truth.
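[Editor's note: for readers who want the mechanics concrete, here is a minimal toy sketch of the sampling loop Simon describes. The bigram table is invented for illustration; a real LLM learns its statistics over billions of parameters, but the generation step is structurally this simple.]

```python
import random

# Toy bigram "language model": for each token, a hand-invented
# distribution over next tokens. A real LLM estimates these
# statistics from ingested text at vast scale, but generation is
# the same loop: sample a probable next token, append, repeat.
NEXT = {
    "<s>":    {"the": 0.6, "a": 0.4},
    "the":    {"cat": 0.5, "senate": 0.5},
    "a":      {"cat": 0.7, "senate": 0.3},
    "cat":    {"sat": 0.6, "slept": 0.4},
    "senate": {"voted": 1.0},
    "sat":    {"</s>": 1.0},
    "slept":  {"</s>": 1.0},
    "voted":  {"</s>": 1.0},
}

def generate(max_len=10):
    token, out = "<s>", []
    while len(out) < max_len:
        dist = NEXT[token]
        # Probable continuation, not true statement: there is no
        # world model or truth check anywhere in this loop.
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        if token == "</s>":
            break
        out.append(token)
    return " ".join(out)

print(generate())  # e.g. "the cat slept"
```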
José A. Alonso: Readings shared August 9, 2025. https://jaalonso.github.io/vestigium/posts/2025/08/10-readings_shared_08-09-25 #AI #FunctionalProgramming #Haskell #IMO #ITP #IsabelleHOL #LLMs #LeanProver #LogicProgramming #Math #Prolog
janhoglund: "The ability of LLMs to produce 'fluent nonsense' - plausible but logically flawed reasoning chains - can be more deceptive and damaging than an outright incorrect answer, as it projects a false aura of dependability. Sufficient auditing from domain experts is indispensable. …the core issue…[is] the model's lack of abstract reasoning capability."
- C. Zhao, Z. Tan, P. Ma, D. Li, B. Jiang, Y. Wang, Y. Yang and H. Liu, "Is Chain-of-Thought Reasoning of LLMs a Mirage?" https://arxiv.org/pdf/2508.01191
#llm #llms
FLOSS.social :mastodon_oops:: 💡 Unlike other #Fediverse servers, we didn't need to "wait and see" before preventing #Meta from using our community's content to train their #LLMs. When corporations show you who they are, believe them.
#Threads #Facebook #FediPact
https://www.dropsitenews.com/p/meta-facebook-tech-copyright-privacy-whistleblower
Jürgen: Pff. It took 'some' time for people to realize that the emperor wears no clothes… 😂
#ai #altman #openai #chatgpt #tech #technology #hype #llm #llms
https://venturebeat.com/ai/openai-returns-old-models-to-chatgpt-as-sam-altman-admits-bumpy-gpt-5-rollout/
José A. Alonso: Geoint-R1: Formalizing multimodal geometric reasoning with dynamic auxiliary constructions. ~ Jingxuan Wei et al. https://arxiv.org/abs/2508.03173v1 #ITP #LeanProver #LLMs
Bibliolater 📚 📜 🖋: 🖥️ **Meet President Willian H. Brusen from the great state of Onegon**
"_LLMs still struggle with accurate text within graphics_"
🔗 https://www.theregister.com/2025/08/08/gpt-5-fake-presidents-states/
#AI #ArtificialIntelligence #Technology #Tech #LLMS #LLM
Miguel Afonso Caetano: Sometimes humans are just too stupid and in those cases no chatbot in the world can help you... :-D
"A man gave himself bromism, a psychiatric disorder that has not been common for many decades, after asking ChatGPT for advice and accidentally poisoning himself, according to a case study published this week in the Annals of Internal Medicine.
In this case, a man showed up in an ER experiencing auditory and visual hallucinations and claiming that his neighbor was poisoning him. After attempting to escape and being treated for dehydration with fluids and electrolytes, the study reports, he was able to explain that he had put himself on a super-restrictive diet in which he attempted to completely eliminate salt. He had been replacing all the salt in his food with sodium bromide, a controlled substance that is often used as a dog anticonvulsant.
He said that this was based on information gathered from ChatGPT.
"After reading about the negative effects that sodium chloride, or table salt, has on one's health, he was surprised that he could only find literature related to reducing sodium from one's diet. Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet," the case study reads. "For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning.""
https://www.404media.co/guy-gives-himself-19th-century-psychiatric-illness-after-consulting-with-chatgpt/
#AI #GenerativeAI #LLMs #Chatbots #MentalHealth #ChatGPT
Cody Boone Ferguson: @arstechnica Apple 'Intelligence' is one of the worst #LLMs I have seen. They still mark it BETA. No sane developer releases beta software in a MAJOR release. And after a while it's just an excuse for releasing rubbish. Of course the beta status now seems to be only in fine print, which makes it worse. People who don't know any better rely on this. The #AI craze is a giant #Scam. Adding the letters AI doesn't make it AI and it doesn't make it better.
José A. Alonso: Readings shared August 8, 2025. https://jaalonso.github.io/vestigium/posts/2025/08/09-readings_shared_08-08-25 #FunctionalProgramming #Haskell #ITP #LLMs #LeanProver #Logic #Math #Reasoning

Does anyone know plausible estimates for the # of tokens of training data used for GPT-5 or Claude Opus 4 / 4.1?

I'm expecting single trillions to tens of trillions, but can't find a plausible estimate or worked example.

My expectation is based on the recent pace of growth of #LLMs and on token exhaustion - i.e., has all the data in the world usable for training LLMs been exhausted yet (Sutskever's "token crisis")?
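[Editor's note: for a back-of-envelope check while better sources are lacking, the Chinchilla compute-optimal heuristic is roughly 20 training tokens per parameter, and recent open models over-train well past that (Meta reported ~15T tokens for the 405B-parameter Llama 3, roughly 37:1). A sketch in Python; the parameter counts are pure guesses, since neither OpenAI nor Anthropic publishes them.]

```python
# Back-of-envelope training-token estimates. The parameter counts
# below are GUESSES, not published figures. The tokens-per-parameter
# ratios bracket the Chinchilla compute-optimal rule (~20:1) and the
# heavier over-training seen in open models (Llama 3: ~15T tokens on
# 405B params, ~37:1).
guessed_params = {
    "GPT-5 (guess)": 1.0e12,          # hypothetical 1T parameters
    "Claude Opus 4 (guess)": 5.0e11,  # hypothetical 500B parameters
}

for name, n_params in guessed_params.items():
    for ratio in (20, 40):  # tokens per parameter
        tokens_T = n_params * ratio / 1e12
        print(f"{name}: {n_params:.0e} params at {ratio}:1 "
              f"-> ~{tokens_T:.0f}T tokens")
```

Under those assumptions the estimates land in the 10-40T token range, consistent with the "tens of trillions" expectation above, and with why the token-exhaustion question keeps coming up.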