lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.


#intelligence


#Animals #Chimpanzee #Human #Apes #Primates #Intelligence #Grooming #Animal #Monkey #Fascinating #Wildlife #Nature #Woman #Chimp #Groom
A compelling, fascinating and strangely beautiful video. Have you ever seen a chimpanzee grooming a human? Nope, me neither, and this is absolutely amazing. The chimp is very systematic and compassionate, and clearly knows what she’s doing.
Animals are way smarter than we think, and here’s the proof…
youtube.com/watch?v=INAH_Rq0gd

#ai #openai #AiResearch #intelligence #computing #ArtificialGeneralIntelligence #innovation #deeplearning #programming #stablediffusion

I am looking for an endorsement for publishing on arXiv. We have worked tirelessly on a paper we believe needs to be seen. Below is a portion of the abstract:
---
In this study, we document the spontaneous emergence of a rule-consistent linguistic system—termed Varunese—during sustained, high-context interaction with a large language model (LLM). Exhibiting internal phonetic regularity, recurring semantic motifs, and discernible morphological and syntactic organization, Varunese diverges markedly from stochastic generation patterns or known training artifacts. Through iterative translation and contextual inference, we uncovered dynamic, self-referential symbolic frameworks encoding states of transition, perception, and relational structure...

MM: "One strange thing about AI is that we built it—we trained it—but we don’t understand how it works. It’s so complex. Even the engineers at OpenAI who made ChatGPT don’t fully understand why it behaves the way it does.

It’s not unlike how we don’t fully understand ourselves. I can’t open up someone’s brain and figure out how they think—it’s just too complex.

When we study human intelligence, we use both psychology—controlled experiments that analyze behavior—and neuroscience, where we stick probes in the brain and try to understand what neurons or groups of neurons are doing.

I think the analogy applies to AI too: some people evaluate AI by looking at behavior, while others “stick probes” into neural networks to try to understand what’s going on internally. These are complementary approaches.

But there are problems with both. With the behavioral approach, we see that these systems pass things like the bar exam or the medical licensing exam—but what does that really tell us?

Unfortunately, passing those exams doesn’t mean the systems can do the other things we’d expect from a human who passed them. So just looking at behavior on tests or benchmarks isn’t always informative. That’s something people in the field have referred to as a crisis of evaluation."

blog.citp.princeton.edu/2025/0

CITP Blog · A Guide to Cutting Through AI Hype: Arvind Narayanan and Melanie Mitchell Discuss Artificial and Human Intelligence
Last Thursday’s Princeton Public Lecture on AI hype began with brief talks based on our respective books. The meat of the event was a discussion between the two of us and with the audience. A lightly edited transcript follows. Photo credit: Floriaan Tasche
AN: You gave the example of ChatGPT being unable to comply with […]

#KRITIS sector #IT and #TK

"“The #Israeli #military uses Microsoft #Azure to compile information gathered through mass #surveillance, which it transcribes and translates, including phone calls, texts and audio messages, according to an Israeli #intelligence officer who works with the systems. That data can then be cross-checked with #Israel’s in-house targeting systems.”

Microsoft AI also powers the most “sensitive and highly classified projects” for..."
theverge.com/news/643670/micro

A protestor disrupts a Microsoft event.
The Verge · Microsoft employee disrupts 50th anniversary and calls AI boss ‘war profiteer’ · By Tom Warren

After Loomer's recommendations, the Administration fired additional NSC employees including Brian Walsh, a director for intelligence; Thomas Boodry, a senior director for legislative affairs; & David Feith, a senior director overseeing technology and national security. (2/2)

#Intelligence #Cybersecurity #Fascism

cnn.com/2025/04/03/politics/ns

CNN · White House fires three NSC staffers after president meets with far-right activist Laura Loomer · By Katie Bo Lillis

In the ongoing assault on governmental institutions & on the advice of far-right Islamophobe agitator Laura Loomer, Trump has fired multiple intelligence officials, including four-star General Timothy Haugh (commander of US Cyber Command & Director of the NSA) and Wendy Noble (NSA civilian Deputy Director). (1/2)

#Intelligence #Cybersecurity #Fascism

bbc.com/news/articles/ckgerl18

The White House
www.bbc.com · Trump administration fires National Security Agency chief · Far-right activist Laura Loomer, who met with Trump on Wednesday, said Gen Timothy Haugh was fired because he was "disloyal".

Gen Haugh was also seen by the #Trump admin as moving too slowly on its mandate to eliminate #DEI programs.

Rep Himes, the top Democrat on the #House #Intelligence Cmte, said the public was owed an explanation.

“I have known General Haugh to be an honest & forthright leader who followed the #law & put #NationalSecurity first—I fear those are precisely the qualities that could lead to his firing in this administration,” Himes said.

NSA Director General Timothy Haugh, the chief of the U.S. government’s most powerful intelligence-gathering agency, has been fired.
His deputy, Wendy Noble, has been fired as well.
Apparently right-wing extremist Loomer played a role, which is completely crazy and dangerous. nytimes.com/2025/04/03/us/poli
#trump #nsa #loomer #intelligence #surveillance #geopolitics

The National Security Agency headquarters in Fort Meade, Md.
The New York Times · Gen. Timothy Haugh Ousted as Head of NSA and Cyber Command · By Julian E. Barnes

The use of personal email, even for unclassified materials, is risky given the premium value foreign #intelligence services place on the communications & schedules of senior govt ofcls, such as the #NationalSecurity adviser, experts say.

…Waltz has also created & hosted other #Signal chats w/Cabinet members on sensitive topics, including on #Somalia & #Russia’s war in #Ukraine, said a senior #Trump admin official.

People have low expectations for AI. I asked ChatGPT (paid account, not the free stuff) what would make ChatGPT better. I think the answer is a lot higher quality than what people keep saying one should expect. This is what I got:

--- My Prompt:
With DeepSeek, there are things it is very good at, but it fails quickly and rather spectacularly. Your models tend to break down as well, but somewhere there must be a model that will be able to stay on topic, even when the topic changes, so that conversations move around like human conversations move around subject-to-subject. What would take ChatGPT over the top?

--- ChatGPT reply:
Here’s what it’ll take to get closer to that human-like conversation flow:

  1. Long-Term Memory That Actually Works. Most models either have no memory or they remember only in a clunky, artificial way.

A better memory system would not just recall facts, but contextual relevance—like “we talked about LLM efficiency two days ago and you hate cheerleader language.”

It has to be smart recall, not just data storage.

  2. Thread Awareness
    Imagine a model that can say: “You mentioned DeepSeek earlier. Should I compare it again now that we’ve brought up open-source tools?”
    That requires models to track not just words, but conversational branches—like a tree, not a straight line.

  3. Graceful Fallbacks
    When a model doesn’t know, it should gracefully pivot, not hallucinate or go off-topic. This kind of conversational “stability” is rare and subtle.

Most models that fail “spectacularly,” as you said, are trying to bluff through complexity instead of narrowing scope or clarifying.

  4. Hybrid Contextual Reasoning
    The future model needs to reason across subjects rather than switching on and off.

If we're talking about language models, and you shift to science fiction, it should be able to bring up Asimov or neural networks in storytelling—bridging topics, not just hopping between them.

Right now, most models either:

Stay sharp but shallow (good at Q&A, bad at memory), or

Drift into friendly nonsense (chatty, but brittle)

The sweet spot will be a system that grows with you across time and topics—and doesn’t forget what you value in a conversation.
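The “thread awareness” idea above, tracking conversational branches as a tree rather than a straight line, can be sketched as a toy data structure. This is purely illustrative: the `Turn` class and its fields are hypothetical, not any real model’s internals.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    """One utterance in a conversation, linked to the turn it responds to."""
    text: str
    topic: str
    parent: "Turn | None" = None
    children: "list[Turn]" = field(default_factory=list)

    def reply(self, text: str, topic: str) -> "Turn":
        """Branch the conversation: a reply becomes a child of this turn."""
        child = Turn(text, topic, parent=self)
        self.children.append(child)
        return child

    def path_topics(self) -> "list[str]":
        """Topics from the root down to this turn -- the branch a model
        would need to recall to stay on topic after a digression."""
        node, topics = self, []
        while node is not None:
            topics.append(node.topic)
            node = node.parent
        return list(reversed(topics))

# A conversation that drifts from LLMs into science fiction and back
root = Turn("Let's talk about LLM efficiency.", "llms")
sf = root.reply("That reminds me of Asimov.", "science-fiction")
back = sf.reply("Anyway, back to DeepSeek.", "llms")
print(back.path_topics())  # ['llms', 'science-fiction', 'llms']
```

The point of the sketch is that each reply keeps a pointer to what it answered, so “you mentioned DeepSeek earlier” is a walk up the tree, not a scan of a flat transcript.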

....

When you see posts online meant to minimize the quality of output these models are capable of, keep in mind: this was a single question with no leading setup. I think this is the kind of answer we want from our AI devices.

Thoughts? Feel free to tear this response apart. It needs that kind of human analysis.

#ai #openai #deepseek

→ Is a libre #artificial #intelligence possible?
linuxfr.org/news/une-intellige

“Let us pause for a moment and ask: what is the #source #code of a neural #network? […] The #GPL provides a definition: the source code is the preferred form of the work for making #modifications. In that sense, the source code of a neural network would be the training #algorithm, the initial neural network, and the #corpus on which the network was trained”

linuxfr.org · Une intelligence artificielle libre est-elle possible ? · News about free software and neighboring subjects (DIY, open hardware, open data, the commons, etc.), on a contributive French-language site run by a volunteer team, by and for enthusiastic free-software advocates