@404mediaco continuing to keep it really fucking real and roasting all the other BigTech-loving, AI-huffing, website-as-billboard "journalism" pretendians. Smoke 'em on that burn-out, baby! lol
"The Media's Pivot to AI Is Not Real and Not Going to Work"
Google's Veo 3 creates brilliant videos – but the subtitles are completely bonkers
Google's latest AI video generator, Veo 3, produces impressive videos but struggles with nonsensical subtitles. Users and Google are searching for a fix.
I wrote a couple of related blog posts over the weekend (started as one but made more sense as two) about AI-augmented coding and the state of public discussion:
https://blog.korny.info/2025/07/19/clowns-to-the-left-of-me - all about the ridiculous state of the debate at the moment, with "AI means we can replace all developers" on one side and "AI causes nothing but bugs and slows developers down" on the other.
And I feel stuck in the middle, seeing tangible benefits and trying to work out how to use AI coding effectively at work, while dodging endless noise from both sides.
Big Tech doesn't give a flying fuck about how harmful it is to you, the planet, or humanity. Do not let the tactical, blanket-forced ubiquity of it lull you into complacency. That's an assault, not a service. For everyone's sake, treat that threat with the hostility it deserves. Thank you for reading if you made it this far. /EndRant
The Harms of Generative AI is a recommended reading list: https://cryptpad.fr/doc/#/2/doc/view/TRCe-UOHGzLJF3OwR8px0HJC6Pm1c6EMf1FUoymaAcI
"Generative AI has a very disproportionate energy and carbon footprint with very little in terms of positive stuff for the environment." —Sasha Luccioni, quoted in Karen Hao, Empire of AI #generativeai #ai
"War. Climate change. Unemployment. Against these headline-dominating issues, AI still feels like a gimmick to many. Yet experts warn that AI will reshape all of these issues and more - to say nothing of potential changes to our work and relationships. The question is: do people see the connection? What will make them care?
This research is the first large-scale effort to answer those questions. We polled 10,000 people across the U.S., U.K., France, Germany, and Poland to understand how AI fits into their broader hopes and fears for the future (...) The truth is that people are concerned that AI will worsen almost everything about their daily lives, from relationships and mental health to employment and democracy. They’re not concerned about “AI” as a concept; they’re concerned about what it will do to the things they already care about most. (...) People continue to rank AI low in their list of overall concerns. But we have discovered that there is a strong latent worry about AI risks, because people believe AI will make almost everything they care about worse.
This concern is not evenly spread. Rather, it plays into existing societal divisions, with women, lower-income and minority respondents most concerned about AI risks.
When it comes to what we worry about when we worry about AI, we have found that concern to be evolving rapidly. People worry most about relationships, more even than about their jobs.
People don't perceive AI as a catastrophic risk like war or climate change; though 1 in 3 are worried that AI might pursue its own goals outside our control, this is actually a lower proportion than some surveys found for the same question two years ago.
Instead, our respondents see AI as a pervasive influence that modifies risk in a host of other areas, with concern about specific harms on the rise."
Despite turning off every "Galaxy AI" option on this #Samsung phone, the AI-assist icon still appears in Android context menus, like when you copy and paste. Would be willing to pay money to make it go away. #android #enshittification #ai #llm #generativeai !fediexclusive
EDIT: I'm aware of alternative Android ROMs; no need to suggest them.
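One non-ROM workaround worth sketching: system components can often be disabled per-user over adb with `pm disable-user`, no root required. This is a hedged sketch only; the adb/pm commands are standard Android tooling, but the package name below is a hypothetical placeholder, not a verified identifier for Samsung's AI-assist component, so the real one would have to be identified first.

```python
# Hedged sketch: disabling a suspected system component per-user via adb.
# The adb/pm commands are standard Android tooling; the PACKAGE value is
# a hypothetical placeholder, not a verified Galaxy AI package name.
import subprocess

# Step 1: list Samsung packages to identify the actual component.
listing = subprocess.run(
    ["adb", "shell", "pm", "list", "packages", "samsung"],
    capture_output=True, text=True, check=True,
)
print(listing.stdout)

# Step 2: disable the identified package for the current user
# (reversible later with `pm enable`).
PACKAGE = "com.samsung.android.aiassist"  # placeholder -- verify first
subprocess.run(
    ["adb", "shell", "pm", "disable-user", "--user", "0", PACKAGE],
    check=True,
)
```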
It bothers me that so many LLM/genAI applications seem to be all about "now that we have new tool X, what can we do with it" while completely ignoring the question "for problem Y, what is the best tool for the job?"
Perhaps unsurprisingly, given the strong evidence of poor ethics among developers (e.g., uncritically using big-brand LLMs), I suspect that many of the people behind these systems care more about the exhilaration of using new tech, and the prestige it might bring them, than about any of the problems they claim to solve (if they even bother to identify such problems at all). Turns out that's a great way to cause a lot of harm in the world, since you likely won't do a good job of measuring outcomes (if you bother to measure them at all), and you especially won't look carefully for systemic biases or for ways your system might unintentionally hurt or exclude people. You also won't be concerned about whether your system displaces efforts that would have led to better solutions.
This paper by @minxdragon is well written, IMHO. It draws together observations on a range of topics and contemporary social/political trends that may appear, on superficial examination, to be unconnected. The research does an excellent job of digging deeper and revealing their interconnected nature, and the underlying mechanisms and motivations at play.
"I wanted to see if interacting with AI slop on Facebook would have a similar effect. To that end I created a Facebook account and liked the pages responsible for the “I need a Husband” posts. I started seeing signs of alt-right content within 24 h and in less than a week the account was algorithmically served pornographic posts, misogyny, racism, military AI slop and religious propaganda. I particularly noticed the chain letter style of Christianity posts, with a white AI Jesus imploring viewers to like, comment, share and subscribe to the videos to receive blessings and wealth."
⇒ Please help me find #GenAI truth-telling sites! ⇐ In the past I've come across several websites that effectively debunk #GenerativeAI hype. However, now that I actually need them, to help me make the case at work for strong oversight of the company's GenAI use, I can't find any of them. It seems like no matter what search terms and search engine I use, I get garbage search results (hype, indeed!). What are your go-to websites for debunking #AI hype? #tech #LLM
"It's hard to say exactly when these AI obituaries first began appearing, but they've clearly exploded in the past year.
NewsGuard, a misinformation watchdog that tracks AI content, identified just 49 sites as "unreliable AI-generated news sites" with little human oversight when it started tracking them in May 2023. That number stands at 1,200 today.
"A lot of the sites are specific and focused solely on creating obituaries, whereas others are just basic content farms that publish a range of content," says McKenzie Sadeghi, NewsGuard's AI and Foreign Influence editor.
I found more than 20 websites publishing AI obituaries while researching this story, but I got the sense that the true number was much higher — and impossible to definitively capture. They seemed to come and go in rapid succession. One day I'd see one on a domain like deltademocrattimes.space; the next day it would redirect to a page of cascading popups that crashed my browser.
Joshua Braun, an associate professor at the University of Massachusetts Amherst who studies profit-driven hoaxes, tells me that the goal for spam sites isn't just to get eyes on ads — it's also to camouflage bot traffic that's used to drive up page views.
"When it comes to taking in ad revenue, drawing real visitors is part of the game, but a lot of it is also pumping in fake traffic," he says. "Drawing enough human visitors would throw off the detection mechanisms that might otherwise take note of all the automated traffic."
Sometimes, the people being memorialized aren't even real. Scheirer tells me he first became aware of AI obituaries a couple years ago when he began seeing classmates he didn't recognize on a page for alumni from his high school."
"Professor Gina Neff of Queen Mary University London tells the BBC that ChatGPT is "burning through energy", and the data centres used to power it consume more electricity in a year than 117 countries."
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
Joel Becker, Nate Rush, Beth Barnes, David Rein
Model Evaluation & Threat Research (METR)
"Despite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February–June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early-2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%—AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect—for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design."