
#explainableai

CRISIS IN MACHINE LEARNING - Semantics to the Rescue
Frank van Harmelen starts his keynote at ISWS 2025 with this headline from "The AI Times".
So, what is this crisis about? The following problems in AI research are still unsolved:
- Learning from small data
- Explainable AI
- Updating
- Learning by explaining

#isws2025 #llms #AI

Applications are now open for the 2025 International Semantic Web Research Summer School - #ISWS2025
in Bertinoro, Italy, from June 8-14, 2025
Topic: Knowledge Graphs for Reliable AI
Application Deadline: March 25, 2025
Webpage: 2025.semanticwebschool.org/

Great keynote speakers: Frank van Harmelen (VU), Natasha Noy (Google), Enrico Motta (KMI)

#semanticweb #knowledgegraphs #AI #generativeAI #responsibleAI #explainableAI #reliableAI @albertmeronyo @AxelPolleres @lysander07

Befuddled by all the recent #DeepSeek hullabaloo? Here's a brief Q&A that cuts through the fog.

Q: Did #DeepSeek just up-end everything we know about #AImodels and #LLMs?
A: Nope. It just demonstrates one of several new approaches to model training and logic chaining, but still uses the same basic building blocks.

Q: Does this mean DeepSeek can think?
A: Nope. Still not #Skynet. Logic chains are just one of several techniques an instruction-oriented AI system can use to try to stay on track and focus on a coherent goal.

Q: Is logic chaining #ExplainableAI?
A: Nope. Even the "thinking" output of DeepSeek is a linguistic approximation of the pattern-seeking behavior of most LLMs.

Q: Why is everyone in an uproar about DeepSeek?
A: Because most people think ChatGPT defines what AI is, what it can do, and what its limitations are.

Q: Why are the people panicking about DeepSeek talking about AI hegemony and geopolitics?
A: Because they're more concerned with investment returns, or with charging for expensive GPUs and SaaS services, than with scientific advances or improving individual productivity with new technology.

This Friday at 12.15 CEST I'll be hosting a talk by computer scientist Kary Främling, in which he will present his work on Explainable AI techniques for producing explanations useful to a variety of stakeholders, rather than only to AI experts.

The talk is hybrid, so even if you are not currently enjoying the very colourful (and very wet) autumn in Umeå, you can join nonetheless!

More information here: umu.se/en/events/fraiday-socia

All welcome!

www.umu.se: #frAIday: Social Explainable AI - What is it and how can we make it happen?

Are you working on knowledge extraction from domain-specific sources? 🧠 Tackling the challenges of long-tail knowledge? 🌐 Combining #LLMs and #KnowledgeGraphs to push the boundaries of your research?

1st Workshop on X-TAIL: eXtraction and eXploitation of long-TAIL Knowledge with LLMs and KGs,
co-located with #EKAW2024

Abstract Deadline: September 8th, 2024
xtail-workshop.org/

@ekawconference #rag #generativeai #explainableAI #longtail #cfp

I actually had someone reach out to me today about starting a company around some of my previous work on "explainable AI" -- machine learning that results in human-interpretable results, as opposed to the inscrutable tables of numbers that make up neural networks. This is pretty exciting stuff to me, because I've wanted to start a business along these lines for a long time. I also got to introduce someone to the concept of "exit to community", so +1 for social activism, too. And the timing couldn't be better, given the layoff I was just caught by.
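(Purely an illustrative sketch, not the previous work referred to above: one familiar way to get human-interpretable results is a linear model whose fitted weights are tied to named input features, so a reader can see which measurements push a prediction one way or the other. The dataset and model below are arbitrary stand-ins.)

# Sketch of an interpretable model: each learned weight is attached to a named
# feature, unlike the anonymous weight tables inside a neural network.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Pair every coefficient with its feature name and show the strongest influences.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, weight in sorted(zip(data.feature_names, coefs), key=lambda p: -abs(p[1]))[:5]:
    print(f"{name:30s} {weight:+.2f}")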

#startup
#ExitToCommunity
#e2c
#ExplainableAI
#MachineLearning

📢 5 days till deadline for the Human-centered Explainable AI (#HCXAI) workshop at CHI! Pls repost & help us spread the word 🙏

Submission pro tips:

1. Explicitly address more than one of the questions from the CfP on the website.

2. Yes, papers NOT dealing with LLMs are fine.

3. Engage w/ past submissions (build on, don't repeat)

4. Position papers must make a well-justified argument, not just summarize findings

💌 w/ @Riedl @sunniesuhyoung

#academia #AI #hci #XAI #ExplainableAI

hcxai.jimdosite.com/

HCXAI Home | ACM CHI 2023 Workshop on Human-Centered Explainable AI (HCXAI)

This is the International Semantic Web Research Summer School (ISWS) tooting! ISWS is a full-immersion, super-intensive one-week experience including lectures and keynotes from outstanding speakers and a "learning by doing" teamwork program on open research problems, under the guidance of the best scientists in the field.
website: 2024.semanticwebschool.org/

#semanticweb #knowledgegraphs #summerschool #llms #generativeai #explainableAI #introduction
@lysander07 @tabea @sashabruns @MahsaVafaie @fizise

@3_qrx

Damn, we must not allow Elon Musk to hijack the #xAI hashtag! XAI stands for Explainable AI, an active field of research that develops #machineLearning algorithms that can explain their decisions to humans (in contrast to, say, today's #deepLearning models).

Only explainable AI satisfies reasonable privacy and safety requirements. Some explainable #ML algorithms have existed for a long time (e.g. decision tree learning) and should be preferred over e.g. artificial neural networks that are not XAI.
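(A minimal sketch of the decision-tree point above, using an arbitrary toy dataset: a shallow tree can be dumped as plain if/then rules that a human can audit, which is exactly the property an opaque neural network lacks.)

# Sketch: decision tree learning as a classic explainable-ML algorithm.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
# Keep the tree shallow so the resulting explanation stays short and readable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# export_text renders the fitted tree as human-readable if/then rules.
print(export_text(tree, feature_names=list(iris.feature_names)))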

/cc @edri @eff @heiseonline @digitalcourage

Upcoming online talk on #PhilosophyOfAI:

Carlos Zednik discusses the question 'Does Explainable AI Need Cognitive Models?', this Friday at 12:15 (CEST), as part of the talk series organised by the Umeå Centre for Transdisciplinary AI.

For more details, including the abstract and how to register, follow this link: umu.se/en/events/does-explaina

www.umu.se: Does Explainable AI Need Cognitive Models?