lingo.lol is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for linguists, philologists, and other lovers of languages.

Continued thread

One of the biggest advantages of Ollama over llama.cpp is its ability to automatically unload models: by default it removes a model from VRAM after 5 minutes of being idle, and that's very useful.
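For reference, this idle timeout is configurable. A minimal sketch, assuming Ollama's documented `keep_alive` request parameter and the `OLLAMA_KEEP_ALIVE` server environment variable (model name here is just an example):

```shell
# Keep the model in VRAM for 10 minutes after the last request.
# The default is 5m; "0" unloads immediately, "-1" keeps it loaded indefinitely.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello",
  "keep_alive": "10m"
}'

# Or set the default for the whole server:
OLLAMA_KEEP_ALIVE=10m ollama serve
```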

I'm considering testing llama.cpp to see if I prefer it over Ollama.
The good thing about Ollama is the ease of use, but it's terrible when it comes to model format. llama.cpp, even though it's more complex, simply works with GGUF files, which makes things much easier.
Also, in terms of performance, llama.cpp is supposed to be better than Ollama, but I'll have to try it to know for sure.
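To illustrate the "simply works with GGUF files" point, here's a minimal sketch of a llama.cpp invocation, assuming a locally built `llama-cli`/`llama-server` binary and a hypothetical model path:

```shell
# Run a one-off prompt directly against a GGUF file on disk --
# no import step, no model registry; the file *is* the model.
./llama-cli -m ./models/mistral-7b-instruct.Q4_K_M.gguf \
    -p "Explain quantization in one sentence." \
    -n 128   # generate at most 128 tokens

# Or serve the same file over an OpenAI-compatible HTTP API:
./llama-server -m ./models/mistral-7b-instruct.Q4_K_M.gguf --port 8080
```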

Good read: Beware of AI model collapse!

In AI model collapse, AI systems trained on their own outputs gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and "irreversible defects" in performance. The final result? As a 2024 Nature paper put it, "the model becomes poisoned with its own projection of reality." nature.com/articles/s41586-024
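The compounding mechanism is easy to see in a toy simulation (not the Nature paper's experiment, just an illustration): repeatedly fit a Gaussian to a finite sample drawn from the previous generation's fit. Estimation error compounds, and the distribution's diversity (its standard deviation) drains away over generations.

```python
import numpy as np

def collapse_demo(generations=200, n_samples=10, seed=0):
    """Toy model collapse: each generation 'retrains' (refits a Gaussian)
    on synthetic data sampled from the previous generation's model.
    Returns the fitted std at every generation."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real" data distribution
    stds = [sigma]
    for _ in range(generations):
        samples = rng.normal(mu, sigma, n_samples)   # generate synthetic data
        mu, sigma = samples.mean(), samples.std()    # refit on own outputs
        stds.append(sigma)
    return stds

stds = collapse_demo()
print(f"std at gen 0: {stds[0]:.3f}, std at gen {len(stds)-1}: {stds[-1]:.6f}")
```

With small per-generation samples, the fitted standard deviation shrinks toward zero: the "model" ends up describing its own outputs rather than the original data, which is the distortion the article describes.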

"In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality." as stated by Aquant.

Full article here: theregister.com/2025/05/27/opi #AI #AIErrors #AI_Model_Collapse #LLMs #ChatGPT #Llama #Claude #TheRegister