lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

#artificial_intelligence


Want to do a PhD in Computer Science in the heart of Europe? We are hiring!

10 FWF-funded positions at TU Wien for doctoral students in our newly founded doctoral college on

Automated Reasoning (forsyte.at/docfunds/)

Come to Vienna (repeatedly ranked the world's most livable city) to work with an amazing team on exciting topics at the intersection of security and artificial intelligence, with Automated Reasoning at the core!

Deadline: May 18, 2025
Start: October 2025 (or soon after)
Details: forsyte.at/docfunds/


[ThioJoe] uses #NotebookLM to generate an entire podcast including voice audio, not just a script, about a text file containing just the words "shart and fart"
youtu.be/iV6TS7Ww4Ds
The human speech synthesis is highly convincing. How much are you willing to bet #Google #DeepMind trained their #AI model using real podcasts without first asking the creators for permission?
#generativeAI #artificial_intelligence

[AI Explained] assesses the technical report, release notes and benchmarks surrounding #OpenAI #ChatGPT #o1pro mode
youtu.be/AeMvOPkUwtQ
Better at math and programming, but it still fails at commonsense reasoning tasks about as often as previous models, nowhere close to #AGI. For some tasks, paying OpenAI $200/month for #o1promode access gets you worse results.
#AIhype #GenerativeAI #artificial_intelligence

[2410.16454] Does your #LLM truly unlearn? An embarrassingly simple approach to recover unlearned knowledge

arxiv.org/abs/2410.16454 #AI #Machinelearning #Science #artificial_intelligence

arXiv.org: Does your LLM truly unlearn? An embarrassingly simple approach to recover unlearned knowledge

Large language models (LLMs) have shown remarkable proficiency in generating text, benefiting from extensive training on vast textual corpora. However, LLMs may also acquire unwanted behaviors from the diverse and sensitive nature of their training data, which can include copyrighted and private content. Machine unlearning has been introduced as a viable solution to remove the influence of such problematic content without the need for costly and time-consuming retraining. This process aims to erase specific knowledge from LLMs while preserving as much model utility as possible. Despite the effectiveness of current unlearning methods, little attention has been given to whether existing unlearning methods for LLMs truly achieve forgetting or merely hide the knowledge, which current unlearning benchmarks fail to detect. This paper reveals that applying quantization to models that have undergone unlearning can restore the "forgotten" information. To thoroughly evaluate this phenomenon, we conduct comprehensive experiments using various quantization techniques across multiple precision levels. We find that for unlearning methods with utility constraints, the unlearned model retains an average of 21% of the intended forgotten knowledge in full precision, which significantly increases to 83% after 4-bit quantization. Based on our empirical findings, we provide a theoretical explanation for the observed phenomenon and propose a quantization-robust unlearning strategy to mitigate this intricate issue...
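The intuition behind the paper's finding can be shown with a toy sketch: if unlearning is modeled as a small nudge to a weight (an assumption for illustration only; real unlearning methods are far more complex), coarse 4-bit quantization can round the nudge away, mapping the "unlearned" weight back onto the same quantized value as the original.

```python
# Toy illustration: a tiny unlearning perturbation disappears
# under coarse uniform quantization. All numbers are made up.

def quantize(w, bits=4, w_max=1.0):
    # Uniform symmetric quantization over [-w_max, w_max].
    levels = 2 ** bits - 1
    step = 2 * w_max / levels
    return round(w / step) * step

original = 0.5               # weight before unlearning
unlearned = original - 0.01  # unlearning nudges the weight slightly

# Both weights land in the same quantization bin, so the
# quantized model behaves like the model before unlearning.
print(quantize(original) == quantize(unlearned))
```

The smaller the unlearning update relative to the quantization step, the more likely it is erased, which is consistent with the paper's report that low-precision quantization recovers much more of the "forgotten" knowledge.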

There is this fact going around that #chatgpt et al. cannot count the R's in strawberry. This is not completely true.

Both humans and GPTs "think" heuristically, that is, quickly but error-prone. We need to remind ourselves to think slowly. If we remind ChatGPT to think slowly, e.g. by using a program to count the R's, it answers correctly. 🧵1/2
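The "think slowly" trick amounts to asking the model for the kind of one-liner below instead of a token-level guess (a minimal sketch of the idea, not what any particular model actually executes internally):

```python
# Count the letter "r" in "strawberry" deterministically,
# rather than estimating from intuition.
word = "strawberry"
r_count = word.lower().count("r")
print(r_count)  # 3
```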

#AI #GPT #genAI

it just occurred to me that we haven't heard #techbros go anywhere near (and #science has stopped) comparing the "smart"-ness of an #artificial_intelligence to an x-year-old child; and even one #llm told me this: "Artificial intelligence (AI) is often compared to the cognitive abilities of a human child, but it’s essential to understand that AI is not as smart as a one-year-old child."

let that sink in: if you get a response from an #ai / llm and you base your decisions on it, you are handing #reasoning over to roughly an infant.

i don't think i'd trust a one-year-old with my finances, let alone to drive a car, pick who should and shouldn't get #welfare, or run a country.