#moralpsychology

Do moral dilemmas elicit competing intuitions? Not in all countries

Consider two options:
- Reduce great harm even when that requires causing a smaller amount of harm (à la utilitarianism)
- Do no harm, even when that allows more harm than necessary (à la deontology)

In the U.S., moral-appropriateness ratings of the utilitarian and deontological options correlated NEGATIVELY, but in China they correlated POSITIVELY!

doi.org/10.1177/19485506241289

Moral comparisons of utilitarian tradeoffs depended on the rating protocol?

Participants rated pairs of utilitarian tradeoffs. Relative differences for each pair depended on
- whether participants saw both tradeoffs at the same time or separately.
- whether the rating was comparative or quantitative.

Sometimes (although not most of the time), the average relative difference for one protocol reversed in the other protocol!

doi.org/10.1016/j.cognition.20

Does studying economics turn students "into unscrupulous calculating machines"?

In "a sample of #Polish undergraduate students of #Economics (N=408) and #Sociology (N=123) ...we observed that the choices of more advanced #economists-to-be [we]re more #deontological (grounded in norms) than #utilitarian (grounded in benefits) [suggesting] that economic education does not...."

dx.doi.org/10.14254/2071-789X.

Are there social class differences in moral decision-making?

When anti-social responses were dissociated from deontological and utilitarian responses to moral dilemmas, social class predicted differences in the latter two response patterns among thousands of German speakers. The class differences in utilitarian thinking were partially mediated by reflection and empathic concern.

doi.org/10.3389/fsoc.2024.1391

How does disgust impact judgments about criminality? Does legal expertise matter?

In an online study of over 1,400 laypeople and legal professionals, the “virtual child pornography vignette (characterized as low in harm, high in disgust) was criminalized more readily than the financial harm vignette (high in harm, low in disgust)”, and “disgust sensitivity was associated with the decision to criminalize”.

doi.org/10.1057/s41599-024-028

#law #xPhi #xJur

What brain areas are particularly active during deliberate, reflective thinking?

An “activation likelihood estimation (ALE) #metaAnalysis [that] investigate[d] the neural foundation of the dual-process theory of thought …converged on the medial frontal cortex, superior frontal cortex, anterior cingulate cortex, insula, and left inferior frontal gyrus.”

doi.org/10.3390/brainsci140101

How should we engage in controversial public debates?

Like @jayvanbavel did (on other sites) today: with relevant arguments and evidence.

"People strongly condemn plagiarizers who steal credit for ideas, even when the theft in question does not appear to harm anyone"

doi.org/10.1111/cogs.12500

"These negative reactions are driven by people's aversion toward agents who attempt to falsely improve their reputations"

Switched servers so reposting an introduction

I am a #moral #psychologist (in @UL) interested in the cognitive processes underlying our moral judgments. Also interested in #ecological #psychology and understanding #behaviour in #context. #openscience advocate and #rstats #r user.

Outside of work I play #music including #guitar #harmonica and #songwriting. I enjoy #chess and #running (but have been inconsistent of late) and #walking.
I also have 2 #cats

Outstanding #dissertation!

Trivia #games reduced #polarization?!

Playing the game with a partner who disagrees about #politics improved people's feelings towards the out-party for up to 4 months!
- even one-player versions of the game worked!
- adding *political* trivia questions *helped*!

"As one participant put it, the game... 'gives me further hope that we can work with others no matter what divides us.'"

dash.harvard.edu/handle/1/3737

Is it wrong to eat meat? End a pregnancy? Buy luxury goods?

A Moral Philosophy course changed students' views about these questions AND the degree to which students reported answering based on "deliberation/analysis" or "intuition/emotion".

But it was reduced reliance on intuition/emotion (not increased reliance on deliberation/analysis) that explained changes in students' ethical views!

doi.org/10.1016/j.cognition.20

Students' moral judgments of trolley problem decisions seemed to depend not only on the decision (cause some harm or don't), but also on the decision-maker (human vs. AI) and the problem type (standard trolley vs. footbridge).

(Ns ≅ 200)

doi.org/10.3390/bs13020181

MDPI: Moral Judgments of Human vs. AI Agents in Moral Dilemmas

Artificial intelligence has quickly integrated into human society, and its moral decision-making has also begun to slowly seep into our lives. The significance of moral judgment research on artificial intelligence behavior is becoming increasingly prominent. The present research aims at examining how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Through three experiments (n = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people's moral judgments. Specifically, participants rated AI agents' behavior as more immoral and deserving of more blame than humans' behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people's moral judgments. Specifically, participants rated action (a utilitarian act) as less moral and permissible and more morally wrong and blameworthy than inaction (a deontological act). A mixed-design experiment (Experiment 3) produced a pattern of results consistent with Experiments 1 and 2. This suggests that people apply different modes of moral judgment to artificial intelligence in different types of moral dilemmas, which may be explained by people engaging different processing systems when making moral judgments in different types of dilemmas.