lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

Server stats: 59 active users

#mturk

0 posts, 0 participants, 0 posts today

Excited to share YEARS of research about how to get people to think reflectively and how reflection impacts philosophical judgments at the 2025 #APA in #NewYorkCity (January 8 to 11): apaonline.org/mpage/2025easter

Can't make it?
- More about my talk: researchgate.net/publication/3
- More about my poster: researchgate.net/publication/3

Thanks to the #APA, James Beebe, and the Experimental Philosophy Society for the opportunity!

Are moral decisions impacted by economic environment?

Perhaps! “…in low-income nations, tournament-based compensation increased deontological commitments [but] in higher-income nations, the effect on deontological commitments reversed …consistent with the historical development of the doux commerce thesis.”

A fun #economics paper that reminds me of the early days of #xPhi: doi.org/10.1093/jleo/ewae016

What conclusion do you draw, if any, when all (and only) the people who wrote in "non-binary" in response to the question "What is your gender?" failed a survey's attention check?

Why would someone who puts in the effort to write in a response (vs. select a pre-written one) have been inattentive earlier in the survey?

I doubt that people reporting non-binary gender are generally less attentive on surveys.

So was it most likely a fluke?
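
For anyone who wants to look for the same pattern in their own data, here is a minimal sketch in Python/pandas. The file and column names (survey_responses.csv, gender_writein, passed_attention_check) are hypothetical placeholders for whatever a given survey export actually contains.

```python
import pandas as pd

# Hypothetical survey export; the column names are assumptions, not any real dataset.
df = pd.read_csv("survey_responses.csv")

# Flag free-text (write-in) gender responses vs. pre-written options.
prewritten = {"man", "woman", "prefer not to say"}
df["wrote_in_gender"] = ~df["gender_writein"].str.lower().isin(prewritten)

# Cross-tabulate write-in status against the attention-check outcome
# (passed_attention_check is assumed to be a boolean column).
print(pd.crosstab(df["wrote_in_gender"], df["passed_attention_check"], margins=True))

# How many "non-binary" write-ins are there, and how many of them failed the check?
nb = df["gender_writein"].str.lower().eq("non-binary")
failed_nb = (nb & ~df["passed_attention_check"]).sum()
print(f"non-binary write-ins: {nb.sum()}, of which failed the check: {failed_nb}")
```

With counts this small, a cross-tab like this mostly tells you whether "all and only" is a handful of people or a real cell in the table; it can't settle the fluke question on its own.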

Study of #mTurk workers concludes that #dataQuality and #replicability are no longer ensured by mTurk's worker quality metrics (HITs completed and HIT acceptance rate) because "researchers do not reject HITs containing poor-quality data" (N = 900).

Some researchers block workers who produce poor-quality data, but unless we share/aggregate our block lists, we allow known-bad work to infect our colleagues' research. It's a #collectiveActionProblem!

doi.org/10.3758/s13428-022-019

SpringerLink: Evaluating CloudResearch’s Approved Group as a solution for problematic data quality on MTurk - Behavior Research Methods

Maintaining data quality on Amazon Mechanical Turk (MTurk) has always been a concern for researchers. These concerns have grown recently due to the bot crisis of 2018 and observations that past safeguards of data quality (e.g., approval ratings of 95%) no longer work. To address data quality concerns, CloudResearch, a third-party website that interfaces with MTurk, has assessed ~165,000 MTurkers and categorized them into those that provide high- (~100,000, Approved) and low- (~65,000, Blocked) quality data.

Here, we examined the predictive validity of CloudResearch’s vetting. In a pre-registered study, participants (N = 900) from the Approved and Blocked groups, along with a Standard MTurk sample (95% HIT acceptance ratio, 100+ completed HITs), completed an array of data-quality measures. Across several indices, Approved participants (i) identified the content of images more accurately, (ii) answered more reading comprehension questions correctly, (iii) responded to reversed coded items more consistently, (iv) passed a greater number of attention checks, (v) self-reported less cheating and actually left the survey window less often on easily Googleable questions, (vi) replicated classic psychology experimental effects more reliably, and (vii) answered AI-stumping questions more accurately than Blocked participants, who performed at chance on multiple outcomes. Data quality of the Standard sample was generally in between the Approved and Blocked groups.

We discuss how MTurk’s Approval Rating system is no longer an effective data-quality control, and we discuss the advantages afforded by using the Approved group for scientific studies on MTurk.
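
To make the shared-block-list idea above concrete, here is a minimal sketch, assuming each collaborating lab exports a plain-text file of MTurk worker IDs (one per line). The file names are hypothetical, and the WorkerId column is assumed to match MTurk's batch-results export.

```python
from pathlib import Path

import pandas as pd


def load_block_list(paths):
    """Return the union of worker IDs across all shared block-list files."""
    blocked = set()
    for path in paths:
        blocked.update(
            line.strip()
            for line in Path(path).read_text().splitlines()
            if line.strip()
        )
    return blocked


# Hypothetical block-list files contributed by different labs.
blocked = load_block_list(["blocklist_lab_a.txt", "blocklist_lab_b.txt"])

# Drop submissions from blocked workers before analysis.
data = pd.read_csv("mturk_batch_results.csv")
clean = data[~data["WorkerId"].isin(blocked)]
print(f"Removed {len(data) - len(clean)} of {len(data)} submissions from known-bad workers.")
```

Pooling here is a deliberate simple union: the cost of one lab's overly strict block is losing a participant, whereas the cost of missing a known-bad worker is exactly the data-quality problem the study above documents.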