lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

Server stats: 59 active users
#chatbots

1 post · 1 participant · 1 post today

"If you want a job at McDonald’s today, there’s a good chance you'll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and résumé, directs them to a personality test, and occasionally makes them “go insane” by repeatedly misunderstanding their most basic questions.

Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald's applicants—including all the personal information they shared in those conversations—with tricks as straightforward as guessing that an administrator account's username and password was “123456.”

On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald's website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities—including guessing one laughably weak password—allowed them to access a Paradox.ai account and query the company's databases that held every McHire user's chats with Olivia. The data appears to include as many as 64 million records, including applicants' names, email addresses, and phone numbers."

wired.com/story/mcdonalds-ai-h

WIRED · McDonald’s AI Hiring Bot Exposed Millions of Applicants' Data to Hackers Using the Password ‘123456’ · By Andy Greenberg
Continued thread

…Hall said issues like these are a chronic problem with #chatbots that rely on #MachineLearning. In 2016, #Microsoft released an #AI #chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into making #racist & #antisemitic statements, including praising #Hitler. Microsoft took the chatbot down & apologized.

Tay, #Grok & other #AI chatbots with live access to the internet seem to incorporate real-time information, which Hall said carries more risk.

Continued thread

Patrick Hall, who teaches #data #ethics & #MachineLearning at George Washington University, said he's not surprised #Grok ended up spewing toxic #content, given that the #LLMs that power #chatbots are initially trained on unfiltered online data.

“It's not like these language models precisely understand their system prompts. They're still just doing the statistical trick of predicting the next word.” He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.
#tech
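Hall's phrase "the statistical trick of predicting the next word" can be illustrated with a toy example. The sketch below is my own illustration, not anything from the article: a bigram model that counts which word follows each word in a tiny made-up corpus, then predicts the most frequent follower. Real LLMs use neural networks over subword tokens, but the training objective is the same idea.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus (not from the article) to illustrate
# next-word prediction by counting word co-occurrences.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count every word that immediately follows it.
follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follower_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, beating "mat" and "fish"
```

The point of the toy: the model has no understanding of cats or mats, only frequency statistics, which is why such systems faithfully reproduce whatever patterns (toxic or otherwise) dominate their training data.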

Replied in thread

"The pedagogical value of a writing assignment doesn’t lie in the tangible product of the work — the paper that gets handed in at the assignment’s end. It lies in the work itself: the critical reading of source materials, the synthesis of evidence and ideas, the formulation of a thesis and an argument, and the expression of thought in a coherent piece of writing. The paper is a proxy that the instructor uses to evaluate the success of the work the student has done — the work of learning. Once graded and returned to the student, the paper can be thrown away."

by Nicholas Carr: newcartographies.com/p/the-myt via @riotnrrd

New Cartographies · The Myth of Automated Learning · By Nicholas Carr