lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.


#digitallabor


📄 What if we viewed digital economies not just as systems, but as labor-atories - sites of active class struggle and experimentation? In his new #wjds paper, Rafael Grohmann (@uoft) explores how digital labor in Latin America reflects this dynamic.

➡️ doi.org/10.34669/wi.wjds/5.1.6

#research #socialscience #work #DigitalLabor #DigitalEconomies #PlatformWork #LatinAmerica #GlobalSouth #DigitalSovereignty #AI #DataColonialism #TechGovernance #WorkerOrganizing
@DAIR @towardsfairwork

Today, AI is still powered by millions of data workers, both men and women. In Victorian England, many of the same data tasks were performed by "lady computers". Although they were recruited from Cambridge colleges, they were paid as little as £4 per month. Same old story.

#SocialMedia #ArabWorld #TrollFarms #Disinformation #DigitalLabor: "This article investigates the production culture and routines of “troll farms” in three Arab countries—Tunisia, Egypt, and Iraq—from a production studies approach. A production studies approach enables us to focus on the working conditions of paid trolls. We employed qualitative methods to look inside the “black box” of Arab troll farms. From February to April 2020, we conducted semi-structured interviews with eight disinformation workers at both managerial and staff levels. We propose to understand disinformation work as a specific type of digital labor, characterized by very intense shifts and emotionally burdensome daily tasks, absence of legal job contracts, and highly surveilled work environments. The article contributes to understand disinformation practices outside and beyond the West; it situates disinformation activities within the broader context of digital media industries; it provides a detailed analysis of the features that distinguish troll farms in the Arab world from those that emerged in other regions of the Global South; and it reconnects the research on disinformation to digital labor studies."

journals.sagepub.com/doi/full/

What a doozy.
New ChatGPT revelations.

Kenyan workers paid less than $2 per hour were required to read through large tracts of harmful and abusive content in order to label it and teach the AI what was bad:

“To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.”

The hiring company to which this work was outsourced claims to be about ‘ethical AI’ and boasts of having lifted people out of poverty. But what ethics and harms were actually considered in doling out this sort of low-paid, trauma-inducing work?

I’m brought back once again to the sombre thought that the digital world and its markets, for all the hype about automation, AI, and delivery at the press of a button, are absolutely still reliant on the exploitation and labour of ordinary flesh and blood. It’s the case whether we’re considering content moderation or your poor local courier driver trying to put food on the table while juggling umpteen contracts.

It’s the dank Victorian factory, still. But it’s everywhere, all at once, now.

time.com/6247678/openai-chatgp

Time · Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic · By Billy Perrigo

"Whenever a user solves a reCAPTCHA task they ... produce value that is then extracted and used by Google. These users are simultaneously produced by their own labor by contributing to normative definitions of who 'counts' as an authentic Web user."

Sites with reCAPTCHA challenges may be inaccessible to people who are deaf, blind, have certain forms of neurodivergence, or lack certain cultural familiarities.


Because the main incentive for Google is to develop #captcha challenges that will produce valuable AI/ML data, other important factors, such as accessibility, are not prioritized.

This means that reCAPTCHA doesn't just tell computers and humans apart; it really tells "desirable users" and "undesirable users" apart. Not all people are equitably able to solve a reCAPTCHA, so only those who can perform such "digital labor" are viewed as ideal Web users.