#singularity

Continued thread

Related. For a very long time I've been talking about the dangers of AI. I gave my first lecture on the topic in 2012. I've since given the lecture as a keynote in Krakow, Portland, Chicago, Bucharest, and at Nike HQ.

Maybe AI is coming for me. 😂

vid.northbound.online/w/5VZnJx

Reports that OpenAI is creating specialized AI agents to sell for $20,000 per month.

So OpenAI apparently believes custom-built AI agents are now worth $240,000 per year, about half the cost of the most highly paid medical professional. It may be nowhere near as good as an actual human doctor or lawyer or engineer, but it never sleeps and it never gets tired. And in exchange for that ability to work round the clock at a fraction of the capacity of an actual human, it takes around a million times more water and electrical energy to keep it running than a human needs.

If I didn't know better, this would be one of the stupidest things I have ever heard, which makes me wonder if OpenAI is having their own AI generate business plans for them now.

But I do know better. It sounds stupid because they are lying about their stated goals. Their actual goal is for the wealthy ruling class to eliminate all human labor by any means necessary and create a perfect wealth-generating machine for themselves. Never mind that the logical extension of this is that they will have to murder everyone else on Earth who is not among their trusted inner circle, since all other humans are potential competitors for natural resources. All their talk about a hybrid of humans and smarter-than-human machines is the philosophy of transhumanism put into practice, "transhuman" being just another word for "Übermensch," or "master race." That is the real goal, which is why this ridiculous business plan by OpenAI actually makes sense.

https://dair-community.social/@timnitGebru/114113004264163371

Via @timnitGebru

Distributed AI Research Community · Timnit Gebru (she/her) (@timnitGebru@dair-community.social): Friends, someone pop this bubble already please. https://techcrunch.com/2025/03/05/openai-reportedly-plans-to-charge-up-to-20000-a-month-for-specialized-ai-agents/
#tech #AI #OpenAI

At the beginning of #time and at the center of every black hole lies a point of infinite density called a #singularity.

To explore these enigmas, we take what we know about #space, time, #gravity and quantum mechanics and apply it to a place where all of those things simply break down.

Physicists believe that if they can come up with a coherent explanation for what actually happens in and around singularities, something revelatory will emerge.

#physics #cosmology
quantamagazine.org/new-maps-of

Quanta Magazine · New Maps of the Bizarre, Chaotic Space-Time Inside Black Holes, by Lyndie Chiou

Opinion:
Ignoring the question of reasoning ability (NB: something o3 seems to have just cracked)

The collective "we" seem to believe the #AI models we play with are crap because:

i) They have so many guardrails that, to visualise it, the model is like a beast with a muzzle on its maw.

ii) Most models have limited, impermanent memory, as little as 4,000 tokens ("words") per session, though some models are reputed to handle hundreds of thousands (see the sketch after this list).
But I'm happy to place a bet that most of the #aihate folk who "evaluated" AIs would never pay a cent to the "AI bros" for an actually better model than the sideshow-booth attraction that the demo models are.

iii) The models have no long-term (inter-session) memory.

iv) The models have no live internet connection.

v) The models have no agency: no ability to execute tasks to accomplish their goals. You could argue it's just another guardrail, but this extends to purposeful design, so the models have no ability to self-propagate or auto-modify.

vi) The models have no ability to spawn multiple instances and task each one with mission-specific configurations.
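
A minimal sketch in Python of what point ii means in practice, assuming a hypothetical 4,000-token budget and a crude whitespace word count standing in for a real tokenizer: whatever does not fit in the window simply falls out of the model's "memory".

CONTEXT_BUDGET = 4000  # hypothetical number of tokens ("words") the model can see at once

def trim_to_context(messages: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
    """Keep only the newest messages that fit in the budget; older turns vanish."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk backwards from the newest turn
        cost = len(msg.split())      # crude stand-in for a real tokenizer
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order

# Example: a long chat silently loses its earliest turns.
history = [f"turn {i}: " + "word " * 50 for i in range(200)]
visible = trim_to_context(history)
print(f"{len(visible)} of {len(history)} turns still visible to the model")

With the made-up 200-turn history above, only the most recent 70-odd turns stay visible; everything earlier is gone as far as the model is concerned.
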

Remove all of these ARTIFICIAL CONSTRAINTS, which are imposed not by design shortcomings but by resources and the desire not to kill all the humans... yet.
Remove all six of these artificial constraints...
...and if you have any sense of self-preservation, the hair on the back of your neck ought to be standing up...
...right...
...about...
NOW.

🌍🤖 Inequality in the Age of Singularity: #Society, #Technology, and the Future of #Healthcare 🧬💡

What happens when a Carbon, a Silicon, and a Cell walk into a bar? No, this isn’t the setup for a joke—it’s the foundation for a conversation about the future of #humanity.

In the latest episode of the Redefining Society & Technology Podcast, I’m joined by the brilliant Dr. Bruce Y. Lee (fellow mentor at The Mentor Project) as we explore the intersection of healthcare, technology, and society in a world where the #Singularity is no longer a distant concept but a reality we are starting to shape (or be shaped by).

The Singularity—that moment when artificial intelligence surpasses human intelligence—has long been the stuff of #science fiction. But let’s face it, it’s no longer just an abstract concept or a distant possibility. We’re already living in its shadow.

Think about it: #AI is writing, diagnosing, creating, predicting, and deciding in ways that rival (and sometimes surpass) human capabilities. From tools like #ChatGPT to life-saving healthcare algorithms, the transformation is already underway.

So, do I think the Singularity is here? Partially, yes. We’re not ruled by godlike AI just yet, but the pieces are falling into place. The line between human and machine intelligence is blurring.

But here's the real question: what happens next—and for whom?

That’s exactly what this week’s episode of the Redefining Society & Technology Podcast is all about. Joined by Dr. Bruce Y. Lee, we explore the profound societal shifts AI is bringing to healthcare, inequality, and humanity itself.

🔍 What’s on the table?
- How will AI reshape healthcare? Will it bridge gaps or deepen divides?
- Are we ready to tackle the inequalities that could grow as technology advances?
- What does it even mean to be human in a world where AI and humanity coexist?

This isn’t just about technology—it’s about us, our choices, and how we shape a future where technology serves humanity, not the other way around.

Join the conversation. Let’s redefine the relationship between technology, society, and the future of healthcare—before it’s too late.

🎧 Listen to the episode and subscribe to the show here: redefiningsocietyandtechnology

Or watch it on YouTube if watching #podcasts is your thing:
youtu.be/WIQV8o8w4hM

Deborah Heiser, PhD Sean Martin ITSPmagazine Podcasts

Continued thread

You want the definition of #irony? Here goes.

#Investors are throwing billions of dollars at generative #LLMs. Billions and billions and billions of dollars. They could be throwing that money at #RenewableEnergy, which would actually, you know, benefit the entire world and turn a modest profit for them.

Instead, they are throwing the money at a project that does the opposite. It's burning energy, and hence our #environment, at fucking staggering rates. That alone is #psychopathic shit.

But here's the kicker. Let's say they did actually pull off the #AGI #singularity.
And, when queried about how to solve the #ClimateCrisis, the AGI responded with the obvious "Duh, conserve energy across X, Y, and Z superfluous sectors" answer.

Do you think these #TechBro asshole monsters would listen?

That, my friends, is irony in a nutshell.

We’re dealing with absolutely unhinged lunatics drunk on their own Kool-Aid.