lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.


It's the 2nd of August, the day all the big AI companies had marked in their calendars (or so I hope).

From today onwards, all new general-purpose AI (GPAI) models released in the EU must fulfil certain new obligations under the EU AI Act, including transparency on training data, energy consumption, incident reporting and more.

#AIRegulation #AI

medium.com/misaligned/the-eu-a

misaligned · The EU AI Act Reaches Another Milestone. Where Things Stand. By Wolfgang Hauptfleisch

ICYMI, only weeks away from the next phase of the EU AI Act coming into force (the AI Act rules on general-purpose AI apply from 2 August), the EU last week published its "General-Purpose AI (GPAI) Code of Practice".

The code is organised in three chapters: Transparency, Copyright, and Safety and Security. #AI #AIRegulation

digital-strategy.ec.europa.eu/

Shaping Europe’s digital future · The General-Purpose AI Code of Practice: The Code of Practice helps industry comply with the AI Act legal obligations on safety, transparency and copyright of general-purpose AI models.

The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?

The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.

We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each of these trade-offs changed how we think and which skills matter. AI is the next step in this progression, if we're smart about it.

The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.

1/3

AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation—using AI as a thinking partner, not a replacement.

This isn't some Silicon Valley fantasy:
Three decades of cognitive research already mapped out how this works:

Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.

Distributed Cognition:
Naval crews don't navigate with individual genius—they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.

Zone of Proximal Development:
We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:

2/3

Critical Reasoning vs. Cognitive Delegation

Old-school focus: building internal cognitive capabilities and managing cognitive load independently.

Cognitive delegation focus: orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.

We can still go for a jog or hunt our own deer, but for reaching the stars we apes do what apes do best: use tools to build on our cognitive abilities. AI is a tool.

3/3

One of the last true Libertarians. I'm sure I will totally agree with him in half of the text and absolutely disagree with him in the other half.

"In Defending Technological Dynamism & the Freedom to Innovate in the Age of AI, Adam Thierer argues that human flourishing, economic growth, and geopolitical resilience requires innovation—especially in artificial intelligence. Overzealous regulation threatens to undermine this progress. If policymakers adopt a governance philosophy of permissionless innovation over the precautionary principle, however, they can foster an environment that tolerates and protects creativity, experimentation, and risk-taking."

civitasinstitute.org/research/

www.civitasinstitute.org · Defending Technological Dynamism & the Freedom to Innovate in the Age of AI | Adam Thierer: Human flourishing, economic growth, and geopolitical resilience requires innovation—especially in artificial intelligence.

"You may have noticed the above language in the bill goes beyond “AI” and also includes “automated decision systems.” That’s likely because there are two California bills currently under consideration in the state legislature that use the term: AB 1018, the Automated Decisions Safety Act, and SB 7, the No Robo Bosses Act, which would seek to prevent employers from relying on “automated decision-making systems, to make hiring, promotion, discipline, or termination decisions without human oversight.”

The GOP’s new amendments would ban both outright, along with the other 30 proposed bills that address AI in California. Three of the proposed bills are backed by the California Federation of Labor Unions, including AB 1018, which aims to eliminate algorithmic discrimination and to ensure companies are transparent about how they use AI in workplaces. It requires workers to be told if AI is used in the hiring process, allows them to opt out of AI systems, and to appeal decisions made by AI. The Labor Fed also backs Bryan’s bill, AB 1221, which seeks to prohibit discriminatory surveillance systems like facial recognition, establish worker data protections, and compels employers to notify workers when they introduce new AI surveillance tools.

It should be getting clearer why Silicon Valley is intent on halting these bills: One of the key markets—if not the key market—for AI is as enterprise and workplace software. A top promise is that companies can automate jobs and labor; restricting surveillance capabilities or carving out worker protections promise to put a dent in the AI companies’ bottom lines. Furthermore, AI products and automation software promise a way for managers to evade accountability—laws that force them to stay accountable defeat the purpose."

bloodinthemachine.com/p/de-dem

Blood in the Machine · De-democratizing AI. By Brian Merchant
#USA #GOP #AI

"You’d be hard-pressed to find a more obvious example of the need for regulation and oversight in the artificial intelligence space than recent reports that Elon Musk’s AI chatbot, known as Grok, has been discussing white nationalist themes with X users. NBC News reported Thursday that some users of Musk’s social media platform noticed the chatbot was responding to unrelated user prompts with responses discussing “white genocide.”

For background, this is a false claim promoted by Afrikaners and others, including Musk, that alleges white South African land owners have been systematically attacked for the purpose of ridding them and their influence from that country. It’s a claim that hews closely to propaganda spread by white nationalists about the purported oppression of white people elsewhere in Africa.

It’s hard to imagine a more dystopian scenario than this."

msnbc.com/top-stories/latest/g

MSNBC · Elon Musk’s chatbot just showed why AI regulation is an urgent necessity. By Ja'han Jones

"Vance came out swinging today, implying — exactly as the big companies might have hoped he might — that any regulation around AI was “excessive regulation” that would throttle innovation.

In reality, the phrase “excessive regulation” is sophistry. Of course in any domain there can be “excessive regulation”, by definition. What Vance doesn’t have is any evidence whatsoever that the US has excessive regulation around AI; arguably, in fact, it has almost none at all. His warning about a bogeyman is a tip-off, however, for how all this is going to go. The new administration will do everything in its power to protect businesses, and nothing to protect individuals.

As if all this wasn’t clear enough, the administration apparently told the AI Summit that they would not sign anything that mentioned environmental costs or “existential risks” of AI that could potentially go rogue.

If AI has significant negative externalities upon the world, we the citizens are screwed."

garymarcus.substack.com/p/ever

Marcus on AI · Everything I warned about in Taming Silicon Valley is rapidly becoming our reality. By Gary Marcus

"It’s part of the established playbook that Big Tech — which Andreessen and Horowitz are closely aligned with, despite their posturing — runs at the state level where it can win (as with SB 1047), meanwhile asking for federal solutions that it knows will never come, or which will have no teeth due to partisan bickering and congressional ineptitude on technical issues.

This newly posted joint statement about “policy opportunity” is the latter part of the play: After torpedoing SB 1047, they can say they only did so with an eye to supporting a federal policy. No matter that we are still waiting on the federal privacy law that tech companies have pushed for a decade while fighting state bills.

And what policies do they support? “A variety of responsible market-based approaches.” In other words: hands off our money, Uncle Sam.

Regulations should have “a science and standards-based approach that recognizes regulatory frameworks that focus on the application and misuse of technology,” and should “focus on the risk of bad actors misusing AI,” write the powerful VCs and Microsoft execs. What is meant by this is we shouldn’t have proactive regulation but instead reactive punishments when unregulated products are used by criminals for criminal purposes.

This approach worked great for that whole FTX situation, so I can see why they espouse it.

“Regulation should be implemented only if its benefits outweigh its costs,” they also write. It would take thousands of words to unpack all the ways that this idea, expressed in this context, is hilarious. But basically, what they are suggesting is that the fox be brought in on the henhouse planning committee."

techcrunch.com/2024/11/01/micr

TechCrunch · Microsoft and A16Z set aside differences, join hands in plea against AI regulation: Two of the biggest forces in two deeply intertwined tech ecosystems — large incumbents and startups — have taken a break from counting their money to…