
#anthropic


A for-profit corporation that makes money off its users' content is accusing another company of trying to do exactly the same. How amusing can that be? :-D

"Reddit said the AI company unlawfully used Reddit’s data for commercial purposes without paying for it and without abiding by the company’s user data policy, according to the complaint, which was filed Wednesday in California.

“Anthropic is in fact intentionally trained on the personal data of Reddit users without ever requesting their consent,” the complaint says, alleging that Anthropic’s conduct runs counter to how it “bills itself as the white knight of the AI industry.”

Reddit, the online discussion forum where users can post anonymously and ask each other questions, has reached formal agreements with both OpenAI and Google to license Reddit’s valuable human user data.

Anthropic didn’t immediately comment."

wsj.com/tech/ai/reddit-lawsuit

Anthropic launches a voice mode for Claude

So now you can yell at your AI… "Why is my vibe-coded software not working, you piece of 🤬🤬🤬❗"

techcrunch.com/2025/05/27/anth

TechCrunch · Anthropic launches a voice mode for Claude | TechCrunch: Anthropic has begun to roll out a "voice mode" for its Claude chatbot apps.

"I’ve been tracking llm-tool-use for a while. I first saw the trick described in the ReAcT paper, first published in October 2022 (a month before the initial release of ChatGPT). I built a simple implementation of that in a few dozen lines of Python. It was clearly a very neat pattern!

Over the past few years it has become very apparent that tool use is the single most effective way to extend the abilities of language models. It’s such a simple trick: you tell the model that there are tools it can use, and have it output special syntax (JSON or XML or tool_name(arguments), it doesn’t matter which) requesting a tool action, then stop.

Your code parses that output, runs the requested tools and then starts a new prompt to the model with the results.

This works with almost every model now. Most of them are specifically trained for tool usage, and there are leaderboards like the Berkeley Function-Calling Leaderboard dedicated to tracking which models do the best job of it.

All of the big model vendors—OpenAI, Anthropic, Google, Mistral, Meta—have a version of this baked into their API, either called tool usage or function calling. It’s all the same underlying pattern.

The models you can run locally are getting good at this too. Ollama added tool support last year, and it’s baked into the llama.cpp server as well.

It’s been clear for a while that LLM absolutely needed to grow support for tools. I released LLM schema support back in February as a stepping stone towards this. I’m glad to finally have it over the line.

As always with LLM, the challenge was designing an abstraction layer that could work across as many different models as possible. A year ago I didn’t feel that model tool support was mature enough to figure this out. Today there’s a very definite consensus among vendors about how this should work, which finally gave me the confidence to implement it."

simonwillison.net/2025/May/27/

Simon Willison’s Weblog · Large Language Models can run tools in your terminal with LLM 0.26: LLM 0.26 is out with the biggest new feature since I started the project: support for tools. You can now use the LLM CLI tool—and Python library—to grant LLMs from …
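A vendor-agnostic sketch in Python of the tool-use loop Willison describes above may help make the pattern concrete. The toy tool, the JSON request format, the prompt wording, and the scripted call_model stand-in are all illustrative assumptions, not the actual implementation in LLM 0.26 or any vendor's function-calling API.

import json

def lookup_population(city: str) -> str:
    # A toy "tool" the model is allowed to request.
    return {"paris": "2.1 million", "tokyo": "14 million"}.get(city.lower(), "unknown")

TOOLS = {"lookup_population": lookup_population}

SYSTEM_PROMPT = (
    "You can call tools. To use one, reply with ONLY a JSON object like "
    '{"tool": "lookup_population", "arguments": {"city": "Paris"}} and then stop. '
    "Otherwise, answer the user directly."
)

def call_model(messages: list[dict]) -> str:
    # Stand-in for a real chat API: it issues one tool call, then answers.
    # Swap this for a call to whichever model endpoint you actually use.
    if not any(m["role"] == "user" and m["content"].startswith("Tool result:") for m in messages):
        return '{"tool": "lookup_population", "arguments": {"city": "Paris"}}'
    return "Paris has roughly 2.1 million residents."

def run(user_question: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]
    for _ in range(5):  # guard against endless tool loops
        reply = call_model(messages)
        try:
            request = json.loads(reply)
            tool = TOOLS[request["tool"]]
        except (json.JSONDecodeError, KeyError):
            return reply  # not a tool call, so this is the final answer
        # Run the requested tool and feed the result back in a new prompt.
        result = tool(**request["arguments"])
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    return reply

if __name__ == "__main__":
    print(run("How many people live in Paris?"))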

#AI #GenerativeAI #2001SpaceOdyssey #HAL9000 #Anthropic
While “Claude blackmailed an employee” may sound like dialogue from a mandatory HR workplace training video, it’s actually a real problem Anthropic ran into during test runs of its newest AI model.
Anthropic released its two newest Claude models, Opus 4 and Sonnet 4, on Thursday and considers them the new standards for “coding, advanced reasoning, and AI agents.” But in safety tests, Claude got messy in a manner fit for a Lifetime movie.


More fun with #AI to start my day. A buddy of mine has some web forum data where doctors and nurses answered people's questions: 20 years' worth of Q&A. He asks me how he might train an AI model on that data. I have a vague idea and tell him "you can fine-tune a foundation model with that data." To save myself typing, I ask #AmazonQDeveloper:

I have a bunch of posts on a web forum stored in a SQL database...I want to use Amazon Bedrock to create a chatbot that answers questions using a fine-tuned model. Write out the high-level steps I will need to take.
(I said a bunch of stuff in that ellipsis about what the data was)

Among the many steps it lists, it says:

Format your data according to #Anthropic's fine-tuning specifications for #Claude

That's interesting. So I ask: where can i get more information on Anthropic's fine-tuning specifications for Claude?

It replies:

Amazon Q is having trouble responding right now: 
0: unhandled error (InternalServerException)
1: service error
2: unhandled error (InternalServerException)

I chuckle, press the up arrow, and resubmit my query. It replies:

I apologize, but I need to correct my previous response. I made a mistake in suggesting that Claude could be fine-tuned through Amazon Bedrock. Currently, Claude models (including Claude Sonnet) on Amazon Bedrock are not available for fine-tuning by customers.

Great. Thanks, Q. It's like working with a chipper intern.

Why public benefit corporations won’t fix the ethics of platform capitalism

I wrote a couple of months ago about my scepticism that Bluesky will retain its ethical stances in the face of investor pressure. They haven’t committed to a path to federation at a point when they’d be relatively free to do so, which makes it seem unlikely they’ll gut the commercialisation model at a future point when investors could push back. The obvious retort to this is that Bluesky is a public benefit corporation but, as Catherine Bracy points out in the (excellent) World Eaters, from pg 189:

While PBCs are a positive development in corporate governance, moving away from the misguided concept of shareholder supremacy that has dominated capitalism for the last century, they still have significant shortcomings. The biggest is that they don’t require companies to behave a certain way. They just provide protection for those executives who choose to put mission over profit. The companies that want to enact stricter protocols that mandate certain behavior no matter who is in charge are mostly left to create their own governance structures.

In other words, PBC status provides internal cover for sustaining commitment to a mission, but it’s still dependent on motivated actors operating within a system of incentives which makes it difficult to sustain a mission beyond growth and profitability. It doesn’t ‘lock in’ the mission, only ensures that it remains formally on the agenda in a discursive sense. Consider OpenAI’s hybrid structure, which is arguably closer to a ‘lock in’ than being a public benefit corporation. From pg 189 of the same book:

There are a few notable examples of these bespoke structures in tech, most famously the one employed by OpenAI, which puts the for-profit entity that develops and markets ChatGPT under the control of a nonprofit whose mission is to “ensure that artificial general intelligence benefits all of humanity.” The company also places a cap on the amount of returns that investors in the for-profit entity can make, an interesting indicator that it understands just how much investor returns can influence product and business model decisions.

And Anthropic’s even more onerous hybrid structure, from pg 190:

One of OpenAI’s main competitors, Anthropic AI (which was founded by a breakaway faction of OpenAI employees who were even more concerned about AI safety risks), also has constructed a bespoke governance model with the intention of protecting the company’s mission from the vagaries of investor demands. Anthropic’s model is a hybrid. They are incorporated as a Public Benefit Corporation in Delaware, but they have also created what they call a Long-Term Benefit Trust (LTBT) that, by 2027, will have the authority to select a majority of the company’s board members. The trustees who oversee the LTBT are selected based on their commitment to and expertise around the safe deployment of artificial intelligence and will have no financial stake in the company. The terms of the trust arrangement also require the company to report to the trustees “actions that could significantly alter the corporation or its business.”

We’ve already seen Altman begin to dismantle OpenAI’s governance structure, supported by a workforce who, Bracy suggests, rallied around him after the sacking due to concerns about the value of their stock options. I think Altman’s motives have as much to do with power, particularly vis-à-vis the board, as profit in driving this dismantling of governance structures he played a significant role in designing. But fundraising will generically play a role in driving resistance to these governance structures, as Bracy notes on pg 192:

The ability to raise money while adopting an alternative structure also reflects an enormous amount of privilege on the part of these companies’ founders. The vast majority of entrepreneurs are not able to drive the kind of bargain Altman and the Anthropic team did with their investors, even in times when VCs have more money to invest than they know what to do with. Even Altman found it difficult, telling me, “It was very hard to raise under this structure. Most investors looked at it and said ‘absolutely not, I’m not capping my profits.’ ” Creating a system in which any founder can do what Altman and his cofounders did will require much deeper structural change.

While I hope Anthropic’s governance structure remains intact, not least of all because I think a reactionary Claude would be the most dangerous of the frontier models, the idea that public benefit corporations and complex governance mechanisms (consider Meta’s oversight board as well) will be sufficient to produce ethical outcomes is self-evidently implausible. The problem, as Bracy argues in a really incisive book, arises from the incentive structure of the innovation ecosystem itself. From pg 169:

That process, of continuously raising more venture capital in order to demonstrate value to future-round funders rather than focusing on building a solid business with strong fundamentals, is what creates bubbles. It is, more than any inherent risk associated with investing in startups, why Silicon Valley is such a boom-bust sector. Given what’s at stake for venture capitalists, it is extremely difficult for founders to find off-ramps that might allow them to retain control of their companies and operate in accordance with what’s best for customers, employees, and the long-term sustainability of the business instead of what will create the highest valuation in the venture capital marketplace.

What she’s talking about here could be framed in terms of the interplay of the micro-social (founders, VC partners and key staff seeking fame and fortune) and the meso-social (the organisational dynamics of growing a firm under these conditions) within a very specific structure of incentives provided by the innovation ecosystem and the political, legal and economic climate of late neoliberalism. The turn towards public benefit corporations and ethical governance is a welcome shift but it does nothing to change the overarching context, nor does it produce fundamentally different types of firms.


"A lawyer representing [an AI startup company] Anthropic admitted to using an erroneous citation created by the company’s Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday."

techcrunch.com/2025/05/15/anth

TechCrunch · Anthropic’s lawyer was forced to apologize after Claude hallucinated a legal citation | TechCrunch: A lawyer representing Anthropic used Claude to generate citations in a court filing, then it hallucinated.

Claude LLM: Introducing web search on the Anthropic API
anthropic.com/news/web-search-
news.ycombinator.com/item?id=4

... Web search is now available on the Anthropic API for Claude 3.7 Sonnet, the upgraded Claude 3.5 Sonnet, and Claude 3.5 Haiku at $10 per 1,000 searches plus standard token costs.

The current free version (limited, yet still awesome to use) is Claude 3.7.
Very high-performing LLM.

Same comments re:
* DeepSeek DeepThink (R1)
* (Google) Gemini 2.0 Flash
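For anyone curious about the API side, here is a minimal sketch using the anthropic Python SDK. The tool type string ("web_search_20250305"), the optional max_uses cap, and the model alias are taken on trust from the announcement and current docs, so treat them as assumptions and verify against the linked post before relying on them.

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

response = client.messages.create(
    model="claude-3-7-sonnet-latest",
    max_tokens=1024,
    # Server-side web search tool; searches are billed per the pricing quoted above.
    tools=[{
        "type": "web_search_20250305",  # assumed tool type identifier from the announcement
        "name": "web_search",
        "max_uses": 3,                  # optional cap on searches per request
    }],
    messages=[{"role": "user", "content": "What has Anthropic announced this month?"}],
)

# The reply interleaves search-related blocks with ordinary text blocks; print the text.
for block in response.content:
    if getattr(block, "type", None) == "text":
        print(block.text)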

www.anthropic.com · Introducing web search on the Anthropic API: Today, we're introducing web search on the Anthropic API—a new tool that gives Claude access to current information from across the web.