lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

Server stats:

66
active users

#airisks

1 post1 participant0 posts today

🤖 Gemini’s Gmail summaries were just caught parroting phishing scams. A security researcher embedded hidden prompts in email text (w/ white font, zero size) to make Gemini falsely claim the user's Gmail password was compromised and suggest calling a fake Google number. It's patched now, but the bigger issue remains: AI tools that interpret or summarize content can be manipulated just like humans. Attackers know this and will keep probing for prompt injection weaknesses.

TL;DR
⚠️ Invisible prompts misled Gemini
📩 AI summaries spoofed Gmail alerts
🔍 Prompt injection worked cleanly
🔐 Google patched, but risk remains

pcmag.com/news/google-gemini-b
#cybersecurity #promptinjection #AIrisks #Gmail #security #privacy #cloud #infosec #AI

Replied in thread

"The pedagogical value of a writing assignment doesn’t lie in the tangible product of the work — the paper that gets handed in at the assignment’s end. It lies in the work itself: the critical reading of source materials, the synthesis of evidence and ideas, the formulation of a thesis and an argument, and the expression of thought in a coherent piece of writing. The paper is a proxy that the instructor uses to evaluate the success of the work the student has done — the work of learning. Once graded and returned to the student, the paper can be thrown away."

by Nicholas Carr: newcartographies.com/p/the-myt via @riotnrrd

New Cartographies · The Myth of Automated LearningBy Nicholas Carr
Replied in thread

"The EU is developing a Code of Practice to govern general purpose AI, as part of the implementation of the AI Act. But Big Tech has heavily influenced the process to successfully weaken the Code."

"From the second to the third draft of the Code, obligations changed to encouragements. Prohibitions became suggestions. Best efforts, reasonable efforts."

@corporateeurope: corporateeurope.org/en/2025/04

Corporate Europe ObservatoryCoded for privileged access | Corporate Europe ObservatoryThe EU is developing a Code of Practice to govern general purpose AI, as part of the implementation of the AI Act. But Big Tech has heavily influenced the process to successfully weaken the Code. 
Replied in thread

"Professional conflicting interest - The AI office relied on a consortium of external consultants to help draft the Code of Practice. However, the investigation shows that the consultancies involved have commercial ties to the #BigTech companies. For instance, the main contractor – French consultancy Wavestone – has a contract to roll out Microsoft’s generative AI tool 365 Copilot in French businesses."

corporateeurope.org/en/2025/04 @ai @eu

Corporate Europe ObservatoryEU under heavy Big Tech pressure to weaken rules on advanced AI | Corporate Europe ObservatoryAs the European Commission is set to publish the Code of Practice on General Purpose AI next month, a new study shows the Code is under intense pressure from aggressive Big Tech lobbying. The upshot is weaker protection against potential structural biases and social harms caused by AI.
Continued thread

"When ChatGPT summarises, it actually does nothing of the kind."

"If I have 35 sentences of circumstance leading up to a single sentence of conclusion, the LLM mechanism will — simply because of how the attention mechanism works with the volume of those 35 — find the ’35’ less relevant sentences more important than the single key one. So, in a case like that it will actively suppress the key sentence."

by Gerben Wierda @gctwnl: ea.rna.nl/2024/05/27/when-chat

R&A IT Strategy & Architecture · When ChatGPT summarises, it actually does nothing of the kind.One of the use cases I thought was reasonable to expect from ChatGPT and Friends (LLMs) was summarising. It turns out I was wrong. What ChatGPT isn’t summarising at all, it only looks like it…
Continued thread

"The reason you begin tracking your data is that you have
some uncertainty about yourself that you believe the data
can illuminate. It’s about introspection, reflection, seeing
patterns, and arriving at realizations about who you are
and how you might change."
—Eric Boyd, self-tracker

an article by Natasha D. Schüll, 2019, "The Data-Based Self:
Self-Quantification and the Data-Driven (Good) Life" natashadowschull.org/wp-conten

Replied in thread

The European Commission released its "AI Continent Action Plan" last week. This high-level communication lays down the various initiatives the European Commission is pursuing to support Europe's AI ambitions and AI uptake: iapp.org/news/a/a-view-from-br

IAPP · A view from Brussels: What is and isn't in the EU's AI Continent Action PlanBy Isabelle Roccia
Replied in thread

'The "growth mindset" is Microsoft's cult — a vaguely-defined, scientifically-questionable, abusively-wielded workplace culture monstrosity, peddled by a Chief Executive obsessed with framing himself as a messianic figure with divine knowledge of how businesses should work.'

'The book is centered around the theme of redemption, with the subtitle mentioning a “quest to rediscover Microsoft’s soul.” […] The dark age — Steve “Developers” Balmer’s Microsoft, with Microsoft stagnant and missing winnable opportunities, like mobile — contrasted against this brave, bright new era where a nearly-assertive Redmond pushes frontiers in places like AI.'

'Like any cult, it encourages the person to internalize their failures and externalize their successes.'

Ed Zitron: wheresyoured.at/the-cult-of-mi

Ed Zitron's Where's Your Ed At · The Cult of MicrosoftSoundtrack: EL-P - Flyentology At the core of Microsoft, a three-trillion-dollar hardware and software company, lies a kind of social poison — an ill-defined, cult-like pseudo-scientific concept called 'The Growth Mindset" that drives company decision-making in everything from how products are sold, to how your on-the-job performance is judged. I am
Continued thread

"The emphasis on human oversight as a protective mechanism allows governments and vendors to have it both ways: they can promote an algorithm by proclaiming how its capabilities exceed those of humans, while simultaneously defending the algorithm and those responsible for it from scrutiny by pointing to the security (supposedly) provided by human oversight."

Ben Green papers.ssrn.com/sol3/papers.cf via @pluralistic

papers.ssrn.comThe Flaws of Policies Requiring Human Oversight of Government AlgorithmsAs algorithms become an influential component of government decision-making around the world, policymakers have debated how governments can attain the benefits
Continued thread

#AI #bias:
“The people who will really, really know how tools are being used are refugees or incarcerated people or heavily policed communities,” Timnit Gebru said in the case. “And the issue is that, at the end of the day, those are also the communities with the least amount of power.”

in "Timnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models" by: Tsedal Neeley and Stefani Ruper: hbs.edu/faculty/Pages/item.asp

www.hbs.eduTimnit Gebru: 'SILENCED No More' on AI Bias and The Harms of Large Language Models - Case - Faculty & Research - Harvard Business School

[thread] AI chatbots, risks

New Tool to Warp Reality: Chatbots can subtly mislead users/implant false memories
theatlantic.com/technology/arc

* 1 billion people may encounter by end 2024
* M$, Meta, Apple ... integrating chatbot assistants into platforms: Facebook, Messenger, WhatsApp, Instagram, Siri ...
* <2 y after ChatGPT bots quickly becoming default filters for web

The Atlantic · Chatbots Are Primed to Warp RealityBy Matteo Wong
Replied in thread

B., the senior officer, claimed that in the current war, “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time.”

According to B., a common error occurred “if the [Hamas] target gave [his phone] to his son, his older brother, or just a random man. That person will be bombed in his house with his family. This happened often. These were most of the mistakes caused by Lavender,” B. said.

972mag.com/lavender-ai-israeli @israel @data 🧶