Reading more and more about #LLM inherent limitations, whether from design or data *scarcity*[^1], I can't help but notice that they perfectly illustrate Michael Polanyi's vision of scientific knowledge as a construction rising from what is actually written and publicly communicated among peers, what is not written but still communicated among peers (direct teaching, empirical recipes...), and what is neither written nor taught, that is, personal knowledge.[^2]
This also extends, to some degree, to other constructed knowledge, and it is one of the reasons the current #GenAI goals (or at least the ones people think Generative AI is made for) are doomed to fail. If AI models can only interact with written data and can't build "inner working models" of the world, they're basically useless as sources of insight. If they only analyze textual descriptions through their statistical training, without taking context into account, they can't/don't interact with the material being described, and hence can't build any relevant "inner working models". I see this more and more as different people conclude, again and again, that training data and parameters matter more to a summary than the actual content of the text being summarized.
This convinces me that the best way to fight the brute-force imposition of genAI on institutions is to teach people how these models work and give them the tools to counter it and its false promises. It can't replace people, so it won't replace people's work, and that is something we have to repeat again and again to all the technophilic bigots out there.
Thought inspired by the latest piece from @Iris. Thanks for sharing.
[^1]: Pun intended. They poured the whole internet into their shit. So now what?
[^2]: A good read: https://press.uchicago.edu/ucp/books/book/chicago/P/bo19722848.html