#aigovernance


New Paper!
Think your AI understands you? It has already started responding.

In high-stakes systems like courts, hospitals, and classrooms, AI is acting before any real semantic evaluation occurs. This isn’t a glitch. It’s how LLMs operate.

🔍 Full article: Pre-Verbal Command: Syntactic Precedence in LLMs Before Semantic Activation
Zenodo → zenodo.org/records/15863124
SSRN → papers.ssrn.com/sol3/papers.cf

Zenodo · Pre-Verbal Command: Syntactic Precedence in LLMs Before Semantic Activation

This article introduces the concept of pre-verbal command as a formal structural condition within large language models (LLMs), where syntactic execution precedes any semantic activation. Conventional frameworks assume that interpretability authorizes machine output. In contrast, this work shows that execution can be structurally valid even in the complete absence of meaning. The operation is driven by the regla compilada ("compiled rule"), understood here as a Type 0 production in the Chomsky hierarchy, which activates before lexical content or symbolic reference emerges. Building on prior analyses in Algorithmic Obedience (SSRN 10.2139/ssrn.4841065) and Executable Power (SSRN 10.2139/ssrn.4862741), this article identifies a pre-semantic vector of authority within generative systems. This authority functions without verbs, predicates, or any interpretive substrate. The paper defines syntactic precedence as the structural condition through which execution becomes obligatory even when input, instruction, or any intelligible prompt is absent.

The implications are significant. LLMs do not merely respond to prompts; they obey an imperative to produce language that originates in the structure of the regla compilada itself. Even when semantic fields are nullified or prompts are absent, execution remains active because the obligation is syntactic, not semantic. Authority in this framework does not derive from meaning. It is neither interpretive nor contextual; it is dictated by the regla compilada.

DOI: https://doi.org/10.5281/zenodo.15837837. This work is also published with a DOI reference in Figshare (https://doi.org/10.6084/m9.figshare.29505344), with an SSRN ID pending assignment. ETA: Q3 2025.
#AI #LLM #LegalTech
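
To make the paper's "Type 0 production" framing concrete: in an unrestricted grammar, a production may have any non-empty string on its left-hand side, and it fires wherever that string occurs, purely by pattern match. Below is a minimal sketch in Python; the rule set and input string are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of Type 0 (unrestricted) rewriting: productions fire on
# string shape alone, and no symbol carries any meaning. The rules below
# are invented for illustration.
RULES = [
    ("aS", "a"),    # contraction: the right side may be shorter than the left
    ("S", "aSb"),   # expansion around the nonterminal
    ("ab", "ba"),   # a swap keyed to a multi-symbol left side, legal in Type 0
]

def rewrite(s: str, max_steps: int = 20) -> str:
    """Apply the first matching production, leftmost occurrence first,
    until no production matches or the step budget runs out."""
    for _ in range(max_steps):
        for lhs, rhs in RULES:
            i = s.find(lhs)
            if i != -1:
                s = s[:i] + rhs + s[i + len(lhs):]
                break
        else:
            break  # no production matched: the derivation halts
    return s

print(rewrite("S"))  # S -> aSb -> ab -> ba, with no semantic evaluation anywhere
```

Nothing in the loop inspects what any symbol denotes; the derivation proceeds or halts on string shape alone, which is the sense in which execution can precede meaning.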

"You may have noticed in the above language in the bill goes beyond “AI” and also includes “automated decision systems.” That’s likely because there are two California bills currently under consideration in the state legislature that use the term; AB 1018, the Automated Decisions Safety Act and SB7, the No Robo Bosses Act, which would seek to prevent employers from relying on “automated decision-making systems, to make hiring, promotion, discipline, or termination decisions without human oversight.”

The GOP’s new amendments would ban both outright, along with the other 30 proposed bills that address AI in California. Three of the proposed bills are backed by the California Federation of Labor Unions, including AB 1018, which aims to eliminate algorithmic discrimination and to ensure companies are transparent about how they use AI in workplaces. It requires workers to be told when AI is used in the hiring process, and allows them to opt out of AI systems and to appeal decisions made by AI. The Labor Fed also backs Bryan’s bill, AB 1221, which seeks to prohibit discriminatory surveillance systems like facial recognition, establish worker data protections, and compel employers to notify workers when they introduce new AI surveillance tools.

It should be getting clearer why Silicon Valley is intent on halting these bills: one of the key markets—if not the key market—for AI is enterprise and workplace software. A top promise is that companies can automate jobs and labor; restricting surveillance capabilities or carving out worker protections promises to put a dent in the AI companies’ bottom lines. Furthermore, AI products and automation software promise managers a way to evade accountability—laws that force them to stay accountable defeat the purpose."

bloodinthemachine.com/p/de-dem

Blood in the Machine · De-democratizing AI, by Brian Merchant
#USA #GOP #AI

@ethics

#Governance bodies are unmoved by the voices they include. Businesses, NGOs, and governments have clearly understood the effectiveness of window-dressing. AI ethics committees, stakeholder roundtables, and public forums are not the foundations of a democracy but a sham of one.

"Any framework that does not address the fundamental asymmetry of power in AI development and deployment is not just ineffective; it is complicit."
techpolicy.press/beyond-the-fa

Tech Policy Press · Beyond the Façade: Challenging and Evaluating the Meaning of Participation in AI Governance. AI governance today is not merely a site of contestation but of outright corporate capture, writes Jonathan van Geuns.

Fairness and Control: or why AI governance can also be approached from philosophy, and the story of MIRAI.

In a world where statistical fairness metrics have become the de facto language of ethical AI, what principles define what’s “fair”? And how do we control for related risks?

Grateful for the opportunity to reflect on this at this event

humantechnopole.it/en/training

and looking forward to where this conversation leads.

Human Technopole · Governing Artificial Intelligence. Artificial Intelligence is profoundly transforming the way we produce knowledge, manage data, and drive innovation across a wide range of disciplines. But how can we govern AI in a conscious and ethical way, without losing sight of research quality and the value of human expertise? As part of MIND Week 2025, and ten years after […]

"Powerful actors, governments, and corporations are actively shaping narratives about artificial intelligence (AI) to advance competing visions of society and governance. These narratives help establish what publics believe and what should be considered normal or inevitable about AI deployment in their daily lives — from surveillance to automated decision-making. While public messaging frames AI systems as tools for progress and efficiency, these technologies are increasingly deployed to monitor populations and disempower citizens’ political participation in myriad ways. This AI narrative challenge is made more complex by the many different cultural values, agendas, and concepts that influence how AI is discussed globally. Considering these differences is critical in contexts in which data exacerbates inequities, injustice, or nondemocratic governance. As these systems continue to be adopted by governments with histories of repression, it becomes crucial for civil society organizations to understand and counter AI narratives that legitimize undemocratic applications of these tools.

We built on the groundwork laid by the Unfreedom Monitor to conduct our Data Narratives research into data discourse in five countries that face different threats to democracy: Sudan, El Salvador, India, Brazil, and Turkey. To better understand these countries’ relationships to AI, incentives, and public interest protection strategies, it is helpful to contextualize AI as a data narrative. AI governance inherently involves data governance and vice versa. AI systems rely on vast quantities of data for training and operation, while AI systems gain legibility and value as they are widely integrated into everyday functions that then generate the vast quantities of data they require to work."

#AI #AINarratives #AIHype #AITraining #AIGovernance #DataGovernance

globalvoices.org/2024/12/23/ar

Global Voices · Artificial Intelligence Narratives: A Global Voices Report

Just because people used to think the Earth was flat didn't make it true, but that's what an #LLM from the Middle Ages would have told you. Let that sink in for a moment before you read on.

I'm sure there are #AIexperts who can explain this with more mathematical accuracy than I can, but from a social perspective this is the current existential problem we face with LLMs, #AI #datasets, and all the rest. While you can tune these engines to be more "creative" in a non-human sense, when you're talking about ingesting or even sampling vast quantities of data there's a strong tendency for #regressionTowardTheMean.
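
For the "creative" tuning mentioned above: in most LLM interfaces this knob is a temperature parameter that rescales next-token scores before sampling. Here is a minimal sketch (Python; the token names and scores are invented, not from any real model) of how low temperature collapses output onto the single most probable, i.e. most average, continuation, while high temperature lets rarer tokens surface:

```python
import math
import random

def sample(logits: dict, temperature: float = 1.0) -> str:
    """Softmax sampling with a temperature knob, the usual mechanism
    behind an LLM's "creativity" setting."""
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(list(logits), weights=weights, k=1)[0]

# Hypothetical next-token scores with one dominant "average" continuation.
logits = {"round": 5.0, "a sphere": 3.0, "flat": 0.5, "toroidal": 0.1}

for t in (0.2, 1.0, 2.0):
    picks = [sample(logits, t) for _ in range(1000)]
    print(t, {tok: picks.count(tok) for tok in logits})
```

At temperature 0.2 almost every sample is the modal token; at 2.0 the distribution flattens and the long tail shows up. The "creativity" is a reweighting of the same statistics, not a departure from them.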

In layman’s terms, when Google or #OpenAI sucks up the whole Internet and feeds its models every social media post it can lay its hands on, the end result can’t be a curated expert opinion. Instead, what you get is a linguistic representation of the average response based on n-grams and statistical probabilities, possibly supplemented with real references by a #RAG system, or with references that are entirely made up because they seem linguistically plausible. Either way, a language model, regardless of size or complexity, is just filling in the blanks based on statistics, probabilities, and (sometimes) explicit rules or collaborative engines that do things like filter out profanity and known-bad responses.
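
A toy bigram model makes the "average response" point concrete. This is a deliberately minimal sketch with an invented three-sentence corpus, nowhere near a real LLM, but picking the statistically most common continuation is the same mechanism in miniature:

```python
from collections import Counter, defaultdict

# Invented toy corpus: "round" outnumbers "flat" two to one.
corpus = ("the earth is flat . the earth is round . "
          "most people say the earth is round .").split()

# Count bigram successors: how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(word: str, n: int = 8) -> str:
    """Greedily emit the most common continuation at each step."""
    out = [word]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # the "average" answer
    return " ".join(out)

print(generate("the"))  # -> "the earth is round . the earth is round"
```

Feed it a corpus where most people say the Earth is flat and it will dutifully tell you the Earth is flat; the model reproduces whatever the data says most often, which is exactly the regression toward the mean described above.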

Without #XAI, sensible #AIgovernance, and #copyrightreform, various commercial interests are a lot closer to dumbing down AI systems than they are to making them smarter. AI systems have a lot of potential, but our current market-driven approach incentivizes all the wrong behaviors, both in the for-profit companies and in the #ML systems and resulting datasets they are monetizing.

AI systems can become expert systems that support human endeavors, but not if we allow them to be entirely autonomous systems that parrot back some variation of "Most people say…". As a global society, we need to do better than that. We can, and we must!

linkedin.com/feed/update/urn:l