ALTA: We'll shortly be announcing our Shared Task for #ALTA2024.

In the meantime, we are sharing the 3rd-placed entry in the #ALTA2023 Shared Task, which was to distinguish #LLM-generated from human-written text.

Here, Ahmed El-Sayed and Omar Nasr of the Arab Academy for Science and Technology use an #ensemble of many models to perform well on both the dev and test sets.

https://aclanthology.org/2023.alta-1.20/
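
For anyone curious what an ensemble like this can look like in practice, below is a minimal, hedged sketch of soft voting over a few text classifiers. The model choices, placeholder data, and labels are assumptions for illustration only; this is not the authors' actual system, which is described in the linked paper.

```python
# Hedged sketch: a generic soft-voting ensemble for separating LLM-generated
# from human-written text. Models, data, and labels are illustrative
# placeholders, not the pipeline from the ALTA 2023 paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder training data: 0 = human-written, 1 = LLM-generated.
train_texts = [
    "I scribbled this note on the train home.",
    "Certainly! Here is a concise summary of the requested topic.",
    "honestly no idea what the lecturer meant today",
    "As an AI language model, I can provide the following overview.",
]
train_labels = [0, 1, 0, 1]

# Each base model pairs TF-IDF features with a different classifier.
base_models = [
    make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
    make_pipeline(TfidfVectorizer(), MultinomialNB()),
    make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100)),
]

test_texts = ["Sure, here is an example sentence for you."]

# Soft voting: fit every model, then average their predicted class
# probabilities so no single classifier's quirks dominate the decision.
avg_proba = np.mean(
    [model.fit(train_texts, train_labels).predict_proba(test_texts)
     for model in base_models],
    axis=0,
)
predictions = avg_proba.argmax(axis=1)  # 0 = human, 1 = LLM-generated
print(predictions)
```

Averaging probabilities rather than hard votes lets a confident model outweigh uncertain ones, which is one common reason ensembles of diverse classifiers hold up well across both dev and test splits.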