#GenerativeAI

12 posts · 12 participants · 0 posts today

@404mediaco continuing to keep it really fucking real and roasting all the other BigTech-loving, AI-huffing, website-as-billboard "journalism" pretendians. Smoke 'em on that burn-out, baby! lol

"The Media's Pivot to AI Is Not Real and Not Going to Work"

404media.co/the-medias-pivot-t

404 Media · The Media's Pivot to AI Is Not Real and Not Going to Work
AI is not going to save media companies, and forcing journalists to use AI is not a business model.

Google's Veo 3 creates brilliant videos – but the subtitles are completely crazy

Google's newest AI video generator Veo 3 produces impressive videos, but struggles with nonsensical subtitles. Users and Google are searching for a solution.

heise.de/news/Googles-Veo-3-er

heise online · Google's Veo 3 creates brilliant videos – but the subtitles are completely crazy
By Rhiannon Williams

I wrote a couple of related blog posts on the weekend (started as one but made more sense as two) about AI-augmented coding and the state of public discussion:

blog.korny.info/2025/07/19/clo - all about how we have a ridiculous state of debate at the moment, with "AI means we can replace all developers" on one side, and "AI causes nothing but bugs and lowers developer speed" on the other.

And I feel stuck in the middle, seeing tangible benefits and trying to work out how to effectively use AI coding at work, while dodging endless noise from both sides.

And also blog.korny.info/2025/07/18/a-r, linked from the first, which tries to give a real-world sample of where I've used AI-augmented coding, with care and sensible oversight.

(sorry fediverse, I posted this to other evil social media first and forgot y'all!)

#ai #llm #generativeAI #skepticism

I'd love comments!

Korny's Blog · Clowns to the left of me …

I've had the song "Stuck in the Middle with You" in my head for a few weeks. (R.I.P. Michael Madsen!) [1] But not because of Reservoir Dogs - because of the public discussion about AI coding tools. (Yes, I know… feel free to walk away if you are sick of the whole thing.) I feel like there's this strange culture war, or something like it, playing out - with wild statements on both extremes - and I'm stuck in the middle.

Hype to the left of me

There is just so much AI hype. I'm talking here mainly about software development tools. There's plenty more ludicrous hype when it comes to other AI areas, but I'm trying to limit this to software engineering. And the hype, as well as the naïveté, is extreme. You get people vibe coding their entire business applications with no thought of security. You get people claiming 50x speed improvements, or indeed "we don't need developers at all". You get people posting "I'm not a programmer but I used Copilot to build my entire product and it's awesome", with multiple variations of this.

Online discussion forums seem to be full of highly risky advice - "I just turn on --dangerously-skip-permissions" or "use this MCP server which gives write access to your git repo and reads user-supplied comments" [2] or even worse. Amusingly there are also quite a few comments like "oops, I deleted my whole file system, what do I do now?" or "I'm not a programmer and I got Copilot to build my product but now it's broken and won't change anything - I want my money back". There's some tasty schadenfreude here, but I also feel a bit sorry for some of these people, where things started so nicely, but now technical debt, AI slop, and a lack of knowledge of what "good" looks like are making it all fall apart.

A lot of the hype is just marketing - astroturfing from fake users, plain press releases breathlessly reported by the media, or marketing via dubious research articles. "Look at our amazing new model, it has so much more data than the last one, it is reasoning now! We ran these benchmarks to prove it!"

A lot of the hype, though, does seem to come from genuine users - lured by the quick result, the slick prototype, the dopamine hit of seeing all that code produced, without the boring course-corrections that feel like waste. Once you are high on the "look how much code I can make" drug, it's hard not to evangelise it to everyone else. And as the last year or two have shown us, it's very easy for people to be fooled by LLMs, which excel at looking like something they are not. People anthropomorphise the tools all the time - "Why did Claude do this dumb thing? Can't it see the example I'm looking at of how to do it?" - and start to think this is genuine intelligence that can reason and learn, not a specific set of tools.

LLMs are wonderful machines that read your data and questions and produce results in a way that feels like intelligence, but is actually just really clever pattern matching plus a surrounding ecosystem of context sources and tools. Sometimes the results are amazing, occasionally they are terrible, and you always need to check them, because the process is fundamentally nondeterministic: even if something worked 99% of the time, there's always that 1% chance it was confidently wrong.

Skeptics to the right

On the other side, the anti-AI sentiment is also pretty wild. I think most of these folks are well meaning - far more so than the pro-AI hypers; my sympathy is with healthy skepticism in general. But they are also prone to jumping on hype. For one example, the "Your Brain on ChatGPT" paper - which is still in pre-print, not peer reviewed, and has had some serious criticism - still got a huge amount of coverage, including in Time Magazine, with some classic moral-panic language: "Her team did submit it for peer review but did not want to wait for approval, which can take eight or more months, to raise attention to an issue that Kosmyna believes is affecting children now." Oh my goodness, will nobody protect our children?!

Similarly, the recent study on experienced open-source developer productivity is being waved around to say "this proves they don't work" - I think this has been shared multiple times on every single tech forum I frequent. The authors of the paper evidently expected this, and provided a table of caveats, which doesn't seem to get as much mention as their headlines, along with an interesting breakdown of likely contributing factors. The study is actually pretty interesting - it does show where we should be cautious about trusting self-assessments of how good these tools are, and probably real limitations in large complex codebases. But it's no "Ah-ha! The emperor has no clothes!" moment, as far as I can tell. (After I wrote this, I found that Simon Willison has a good discussion of this paper as well - and there's a rather more severe critique at Cat Hicks' blog.)

I also see quite a few people who have tried the most basic, un-assisted, low-context tools, got terrible results, and then ruled out AI tools as fundamentally broken: "I used Copilot and its suggestions are wrong 40% of the time, often ludicrously wrong". This was where I was at 6 months ago - Copilot seemed like a handy yet often irritating Clippy, no big deal. I think this drives a lot of skepticism: people feel they gave it a go, it didn't live up to the hype, so they've made up their minds. And generally there's just a lot of anger and frustration in reaction to the constant flood of hype.

As I said before, I'm more sympathetic to the skeptics than the hypers, especially when it comes to the broader AI industry - I'm always keen to read David Gerard's Pivot to AI or any of Ed Zitron's rants (see also my downsides section later). But I do find that a lot of the talk about AI software development tools just plain conflicts with my personal experience.

Stuck in the middle

So here's the problem - every day I'm flooded with articles that are ludicrously positive and ludicrously negative, but what I'm seeing doesn't match either. I personally find the tools helpful, powerful, and a definite boost. Maybe, as per the METR study, I'm losing more time learning the tools, tweaking the context, and reading, experimenting, and correcting them when they go wrong than I actually save. But some of this is the startup cost of any new technology: some will only be paid once, some will be a slow gradual tax, especially with a technology that is changing so fast, and some is the learning curve of knowing when to say "OK, this task isn't suited to LLMs and I should just do it by hand".

And they are already giving me a bunch of obvious speedups, small and large:

- Claude is fixing the links in this blog as I type.
- Claude wrote the tiny python script I use daily to list our project's outstanding pull requests (a sketch of this kind of script follows the excerpt).
- Claude wrote a little visualisation of git activity I needed for management.
- Claude is drawing simple Mermaid diagrams in our docs.
- Claude helped me use Snyk to find that our project had an insecure dependency, and Sourcebot to find that another project of ours had the same dependency and had a viable workaround.

And for a larger example, I've written up a separate blog post detailing how I used Claude Code to implement a Kafka messaging feature in an ASP.NET Core application. This demonstrates what can actually be done with AI coding tools today - not the wild hype, not the complete dismissal, but practical reality.

What's next?

I'm still learning - I've made masses of progress in the couple of months since I started using the tools in anger, and there's a lot more to learn! I also want to learn how to guide our organisation, so our developers know how to use these tools effectively, carefully, and productively. It's an exciting time - I'm having more fun with these tools than I expected. There are so many benefits already, and so much potential for more. But…

Don't forget the downsides

I need this standard disclaimer at the end of any AI post. We must remember the context behind these tools: there are giant tech companies pushing them hard into every corner of our lives. They are run by horrible tech broligarchs [3] whose interests are personal power and destabilising democracy, not helping the world. They consume vast amounts of power, which, due to our failure to charge for externalities, means they are burning fossil fuels, consuming scarce water, and accelerating the climate crisis. And there are many signs that the funding for this is an unsustainable bubble, and the companies and tools may collapse, or start charging significantly more and/or enshittifying the experience of users.

Further reading

I'm not alone, stuck here in the middle. For some good sensible approaches I'd also recommend Birgitta Böckeler and Pete Hodgson, and of course Simon Willison's blog is essential reading.

[1] Image from Reservoir Dogs (1992), directed by Quentin Tarantino. Miramax Films. Fair use for commentary and criticism.
[2] Essential reading: Simon Willison on the lethal trifecta for AI agents.
[3] Thanks Carole Cadwalladr for introducing me to the very useful term "broligarchy"!
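That "tiny python script" for listing outstanding pull requests gives a sense of the scale of task being delegated. The excerpt doesn't include the script itself; a minimal sketch of what such a tool might look like, assuming the public GitHub REST API, a hypothetical example-org/example-repo, and an optional GITHUB_TOKEN environment variable:

```python
#!/usr/bin/env python3
"""List a project's open pull requests.

A minimal sketch of the kind of "tiny script" the post describes -
the actual script isn't shown, so the repo name and token handling
here are assumptions, using only the public GitHub REST API.
"""
import json
import os
import urllib.request

OWNER = "example-org"   # hypothetical - replace with your organisation
REPO = "example-repo"   # hypothetical - replace with your project

url = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls?state=open&per_page=100"
request = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})

# A token is only required for private repos or higher rate limits.
token = os.environ.get("GITHUB_TOKEN")
if token:
    request.add_header("Authorization", f"Bearer {token}")

with urllib.request.urlopen(request) as response:
    pulls = json.load(response)

# One line per open PR: number, author, title.
for pr in pulls:
    print(f"#{pr['number']:<6} {pr['user']['login']:<20} {pr['title']}")
```

Run daily, it prints one line per open PR - roughly the size of chore where a generated script saves a few minutes without much risk.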

Big Tech doesn't give a flying fuck about how harmful it is to you, the planet, or humanity. Do not let their tactic of blanket-forced ubiquity lull you into complacency. That's an assault, not a service. For everyone's sake, treat that threat with the hostility it deserves. Thank you for reading if you made it this far. 🩵💙💚 /EndRant
"The Harms of Generative AI" is a recommended reading list: cryptpad.fr/doc/#/2/doc/view/T

cryptpad.fr · Encrypted Document · CryptPad: end-to-end encrypted collaboration suite

"War. Climate change. Unemployment. Against these headline-dominating issues, AI still feels like a gimmick to many. Yet experts warn that AI will reshape all of these issues and more - to say nothing of potential changes to our work and relationships. The question is: do people see the connection? What will make them care?

This research is the first large-scale effort to answer those questions. We polled 10,000 people across the U.S., U.K., France, Germany, and Poland to understand how AI fits into their broader hopes and fears for the future
(...)
The truth is that people are concerned that AI will worsen almost everything about their daily lives, from relationships and mental health to employment and democracy. They’re not concerned about “AI” as a concept; they’re concerned about what it will do to the things they already care about most.
(...)
People continue to rank AI low in their list of overall concerns. But we have discovered that there is a strong latent worry about AI risks, because people believe AI will make almost everything they care about worse.

This concern is not even. Rather, it plays into existing societal divisions, with women, lower-income and minority respondents most concerned about AI risks.

When it comes to what we worry about when we worry about AI, we have found that concern to be evolving rapidly. People worry most about relationships, more even than about their jobs.

People don't perceive AI as a catastrophic risk like war or climate change; though 1 in 3 are worried that AI might pursue its own goals outside our control, this is actually a lower proportion than some surveys found for the same question two years ago.

Instead, our respondents see AI as a pervasive influence that modifies risk in a host of other areas, with concern about specific harms on the rise."

report2025.seismic.org/

Seismic Report 2025 | Seismic Foundation
In this report, through original research, we show how public opinion about AI is changing.

It bothers me that so many LLM/genAI applications seem to be all about "now that we have new tool X, what can we do with it" while completely ignoring the question "for problem Y, what is the best tool for the job?"

Perhaps unsurprisingly for developers for whom we have strong evidence of poor ethics (e.g., uncritically using big-brand LLMs), I suspect that many of the people behind these systems care more about the exhilaration of using new tech and the prestige it might bring them than about any of the problems they might claim to solve (if they even bother to identify such things at all). Turns out that's a great way to cause a lot of harm in the world, since you likely won't do a good job of measuring outcomes (if you even bother to do so) and you especially won't carefully look for systemic biases or ways your system might unintentionally hurt/exclude people. You also won't be concerned about whether your system ends up displacing efforts that would have led to better solutions.

I need husband: AI beauty standards, fascism and the proliferation of bot driven content
link.springer.com/article/10.1

This paper by @minxdragon is well written, IMHO. It draws together observations on a range of topics and contemporary social/political trends that may appear, on superficial examination, to be unconnected. This research does an excellent job of digging deeper and revealing their interconnected nature, and the underlying mechanisms and motivations at play.

"I wanted to see if interacting with AI slop on Facebook would have a similar effect. To that end I created a Facebook account and liked the pages responsible for the “I need a Husband” posts. I started seeing signs of alt-right content within 24 h and in less than a week the account was algorithmically served pornographic posts, misogyny, racism, military AI slop and religious propaganda. I particularly noticed the chain letter style of Christianity posts, with a white AI Jesus imploring viewers to like, comment, share and subscribe to the videos to receive blessings and wealth."

#AI #GenerativeAI #MaleGaze #AISlop
#AltRightPipeline #Fascism #Misogyny #Racism #Sexism #IronyPoisoning #Facebook #ShrimpJesus

SpringerLink · I need husband: AI beauty standards, fascism and the proliferation of bot driven content - AI & SOCIETY
Generative AI is proliferating on social media at an alarming rate. Images are generated and disseminated with political agendas, particularly in right-wing spheres. These AI-generated images often depict soldiers, sad children, or interior designs. Of particular note are the catfishing-style "I need husband" posts featuring women with impossible proportions, ostensibly seeking partners. These chimeric creations are bot-driven posts designed to farm engagement, but they also hint at something more sinister. These posts reflect a mechanical view of the male gaze. However, an AI cannot truly comprehend the male gaze, and in its attempt to mimic it, it creates beings beyond understanding. This research aims to analyze the patterns in these images, explore posting methods and engagement, and examine the meaning behind the images. It culminates in an artistic piece in progress critiquing both the images and their creation and dissemination methods. By rendering these AI-generated images as classical Greek statues through Gaussian splatting and 3D printing, I aim to create a visual commentary on the intersection of AI, the male gaze and fascism. This artistic approach not only highlights the absurdity of these digital constructs but also invites viewers to critically examine AI's role in shaping contemporary perceptions of beauty and gender roles.

⇒ Please help me find #GenAI truth-telling sites! ⇐
In the past I've come across several websites that effectively debunk #GenerativeAI hype.
However, now that I actually need them, to help me make the case at work for strong oversight of the company's GenAI use, I can't find any of them.
It seems like no matter what search terms and search engine I use, I get garbage search results (hype, indeed!).
What are your go-to websites for debunking #AI hype?
:boostRequest: #tech #LLM

"It's hard to say exactly when these AI obituaries first began appearing, but they've clearly exploded in the past year.

NewsGuard, a misinformation watchdog that tracks AI content, identified just 49 sites as "unreliable AI-generated news sites" with little human oversight when it started tracking them in May 2023. That number stands at 1,200 today.

"A lot of the sites are specific and focused solely on creating obituaries, whereas others are just basic content farms that publish a range of content," says McKenzie Sadeghi, NewsGuard's AI and Foreign Influence editor.

I found more than 20 websites publishing AI obituaries while researching this story, but I got the sense that the true number was much higher — and impossible to definitively capture. They seemed to come and go in rapid succession. One day I'd see one on a domain like deltademocrattimes.space; the next day it would redirect to a page of cascading popups that crashed my browser.

Joshua Braun, an associate professor at the University of Massachusetts Amherst who studies profit-driven hoaxes, tells me that the goal for spam sites isn't just to get eyes on ads — it's also to camouflage bot traffic that's used to drive up page views.

"When it comes to taking in ad revenue, drawing real visitors is part of the game, but a lot of it is also pumping in fake traffic," he says. "Drawing enough human visitors would throw off the detection mechanisms that might otherwise take note of all the automated traffic."

Sometimes, the people being memorialized aren't even real. Scheirer tells me he first became aware of AI obituaries a couple years ago when he began seeing classmates he didn't recognize on a page for alumni from his high school."

cnet.com/tech/services-and-sof

CNET · Digital Grave-Robbing: How AI Is Plundering Online Obituaries
The rush to monetize grief leads to the creation of AI obituaries, turning personal loss into clickbait and exposing the dark side of online memorials.

"Professor Gina Neff of Queen Mary University London tells the BBC that ChatGPT is "burning through energy", and the data centres used to power it consume more electricity in a year than 117 countries."

Source:
"Everyone's jumping on the AI doll trend - but what are the concerns?", BBC News, 12 April 2025
bbc.co.uk/news/articles/c5yg69

[Image: On the left, a picture of Zoe. She is smiling. She has shoulder-length blonde hair, a blue jacket and a silver necklace. On the right, an image generated using ChatGPT of a doll-like version of her. The doll has the same clothes and necklace, but her dark eyes have been morphed into light green and her hair darkened.]
BBC News · ChatGPT AI action dolls: Concerns around the Barbie-like viral social trend
As online users create Barbie-like dolls of themselves, experts urge caution over AI's energy and data use.

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

Joel Becker, Nate Rush, Beth Barnes, David Rein

Model Evaluation & Threat Research (METR)

"Despite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February–June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early-2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%—AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect—for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design."

metr.org/Early_2025_AI_Experie
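A way to read the headline numbers: a 19% increase means AI-allowed tasks took roughly 1.19 times as long on average, the opposite direction from the 24% speedup developers forecast. A toy sketch of that arithmetic, with fabricated task times (the study's real estimator is more involved):

```python
"""Toy arithmetic behind the METR slowdown headline.

All task times below are fabricated for illustration; the paper's
actual estimator (and its handling of forecasts) is more involved.
"""
import random

random.seed(0)

# Hypothetical per-task completion times in minutes, one entry per task.
# In the study, each task was randomly assigned to allow or disallow AI.
ai_allowed = [max(5.0, random.gauss(95, 20)) for _ in range(120)]
ai_disallowed = [max(5.0, random.gauss(80, 20)) for _ in range(126)]

mean_ai = sum(ai_allowed) / len(ai_allowed)
mean_no_ai = sum(ai_disallowed) / len(ai_disallowed)

# The paper reports +19%: AI-allowed tasks took ~1.19x as long on
# average, even though developers forecast a 24% speedup beforehand
# and still estimated a 20% speedup after finishing.
slowdown = mean_ai / mean_no_ai - 1
print(f"Estimated change in completion time when AI is allowed: {slowdown:+.1%}")
```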