lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.


I talked to @404mediaco about generative AI's impact on teaching. Apparently I wasn't alone... not by a long shot. @jasonkoebler got a ton of responses about this topic and ran many of them here:

404media.co/teachers-are-not-o

While it's distressing to read all of them, it's good to see I'm not alone.

404 Media · Teachers Are Not OK: AI, ChatGPT, and LLMs "have absolutely blown up what I try to accomplish with my teaching."

🕰️ From chalkboards to smartboards, PHI Learning has stood the test of time: 62 years of helping society progress through academia. On 6th June 2025, PHI Learning turns 62!

On this proud occasion, we invite you to register for our Anniversary Webinar #Estd1963!

Link to register: docs.google.com/forms/d/e/1FAI

From Holden Thorp’s Editorial in the May 8, 2025 issue of #Science, quoting Danielle Allen, #Harvard Professor of Political ‘Science’, #Democracy and #Philosophy, who argues for a new “‘horizontal’ relationship between #universities and the #American people, one that is based on mutual respect and reciprocity. She also makes a compelling call for universities to do a better job of expressing appreciation for the support of #HigherEducation in the decades since #WorldWar2.”

#HostageLetter ?


#Gleichschaltung #HigherEducation

"For those of us shaped in the revolutionary democratic spirit of the sixties, it is both painful and disheartening to witness the rise of fascism in the U.S. and the slow, tragic unraveling of democracy around the world. Decades of neoliberalism have relentlessly eroded higher education, with a few notable exceptions. The once-cherished notion that the university is a vital advocate for democracy and the public good now seems like a distant memory. What we face today is the collapse of education into mere training, an institution dominated by regressive instrumentalism, hedge-fund administrators, and the growing threat of transforming higher education into spaces of ideological conformity, pedagogical repression, and corporate servitude."

laprogressive.com/education-re


Full-Time Government Instructor, Faculty
Full-Time Mathematics Instructor, Faculty
Full-Time Developmental Mathematics Instructor, Faculty
Full-Time Accounting Instructor, Faculty
Full-Time HVAC Instructor, Faculty
Full-Time Auto Mechanic Instructor, Faculty
Full-Time Welding Instructor, Faculty
Website Content Editor
Producer at HCCTV

hccs.referrals.selectminds.com

HCC Careers · Job Opportunities at Houston Community College: My company has a lot of open positions! If you are interested, click this link. If you apply to a job, you will be treated as a referral from me.

Houston Community College System is hiring for:

Full-Time English Instructor, Faculty
Full-Time Intensive English/ESL Instructor, Faculty
Full-Time Art Instructor, Faculty
Full-Time Art/History Instructor, Faculty
Full-Time Engineering Instructor, Faculty

hccs.referrals.selectminds.com


An observation on AI hype in higher education.

My university (Russell Group) is part of the hype. In committees, I found myself isolated when raising criticism. Endless seminars and workshops on how to make innovative use of genAI in teaching, in assessment, in tutoring. Revolting. I decided not to fight an unwinnable battle and instead merely protect my own courses from the poison.

Just a few months later, the tone has changed. Many of the most excited pro-AI voices now share articles that have "bullshit" in the title, or share AI-critical messages they sent to their students.

Reality always wins.

GenAI corrupts students' learning, and sooner or later this can no longer be ignored. These enthusiastic colleagues rethink AI once they have marked 100 inauthentic essays.

This may not yet be the bubble bursting, but it's an encouraging sign, and it has come much sooner than expected. And it goes some way to restoring my faith in the profession.

How are students using Generative AI in UK universities?

Honestly I’m not sure how worried we should be about these findings from HEPI (n=1,041), given that the sector seems to have got past its initial inclination to try to prohibit. If we’re in a situation where only 12% of students are not using LLMs in their assessments, then what matters is steering use towards epistemic agency* and away from LLMs supporting a turbo-charged transactional engagement with knowledge.

It’s interesting to contrast these findings with Anthropic’s study of university students using Claude, classified in terms of Bloom’s taxonomy:

The dynamics of cognitive outsourcing (and potential lock-in) differ as you move up from lower- to higher-order thinking skills for students. I struggle to see a problem with students using LLMs to support understanding materials, much as I struggle to see a problem with academics using LLMs to produce materials which are easier to understand. Sure, we might rapidly end up in a situation where this learning interaction is mediated by LLMs by default, but I don’t see a fundamental difference in type from that being mediated by other kinds of digital platforms (e.g. the LMS) or outputs (e.g. PowerPoint). It’s a case of better or worse design rather than of something human being lost through the introduction of a technological element.

I think applying and analysing by definition lend themselves to agentive engagements with knowledge. You can’t get the LLM to do something useful unless you’re thinking about what you’re asking, which means to at least some extent an epistemic capacity is being exercised. Certainly students could try and fail to do this, but that’s a different kind of problem to be addressed through the register of AI literacy. The pedagogical challenge comes in recognising how students are doing this in order to design learning processes which support increasingly purposive applications rather than just assuming they will be learning in the same way we did.

It’s evaluating and creating where it gets more concerning. If you’ve already developed these capabilities, LLMs can be used to speed up the process (though a soft lock-in might result over time) or enhance the process in the activity I describe as rubber ducking. The problem arises if you haven’t learned how to do this without the LLM, such that the composite capacity (e.g. writing a report) develops in a way that has the LLM baked into it from the outset. For example, reliance on LLMs for an outline only concerns me if students haven’t learned to do this without the LLM in the first place. To rely on one to critically evaluate your work and suggest room for improvement carries a similar risk of cognitive outsourcing, which is unlikely to be addressed after university by most students.

This is a long-winded way of saying that we urgently need to get beyond the category of ‘AI’ in how we think about these pedagogical challenges. The relationality within the LLM becomes more important to recognise the further up the taxonomy we go. Exactly what ‘creating’ means can now vary immensely depending on the pattern of interaction the student has with the LLM.

It’s also interesting to see that:

  • The main factors putting students off using AI are being accused of cheating (said by 53% of respondents) and getting false results or ‘hallucinations’ (51%). Just 15% are put off by the environmental impact of AI tools.
  • Students still generally believe their institutions have responded effectively to concerns over academic integrity, with 80% saying their institution’s policy is ‘clear’ and three-quarters (76%) saying their institution would spot the use of AI in assessments.
  • The proportion saying university staff are ‘well-equipped’ to work with AI has jumped from 18% in 2024 to 42% in 2025.

I think students are over-estimating how effectively institutions can identify (and act on!) problematic LLM use, and over-estimating the AI literacy of academic staff. If I’m right and student perception catches up to that reality, could ‘cheating’ as an inhibiting factor start to collapse from that figure of 53%?

*Thanks to my collaborator Peter Kahn for introducing me to this notion