Serhii Nazarovets<p>At <a href="https://mstdn.science/tags/ISSI2025" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ISSI2025</span></a>, Mike Thelwall suggested that the Leiden Manifesto should be updated to reflect the rise of large language models (LLMs) in research assessment.</p><p>👉 <a href="https://issi2025.iiap.sci.am/proceedings/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">issi2025.iiap.sci.am/proceedin</span><span class="invisible">gs/</span></a> (pp. 71–80)</p><p>He proposed four new principles, calling for transparent prompts, awareness of <a href="https://mstdn.science/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> instability, cost-benefit considerations, and a reminder that LLM scores are not evidence of scientific contribution.</p><p><a href="https://mstdn.science/tags/Scientometrics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Scientometrics</span></a> <a href="https://mstdn.science/tags/ResponsibleMetrics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ResponsibleMetrics</span></a> <a href="https://mstdn.science/tags/OpenScience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenScience</span></a> <a href="https://mstdn.science/tags/ImpactFactor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImpactFactor</span></a></p>