Comparing Language Models and Student Strategies for Solving Multiple Choice Questions

Leonidas Zotos, Malvina Nissim, Hedderik van Rijn

University of Groningen

An impressive recent achievement of Large Language Models (LLMs) is their ability to accurately answer complex questions using knowledge absorbed from vast amounts of training data as a byproduct of the language modeling objective. A particularly striking feat is performing on par with, or better than, students and even experts on factual knowledge exams. The present work also investigates factual knowledge in LLMs, focusing on Multiple-Choice Questions (MCQs) and comparisons with student performance. In this contribution, we present results on measuring model uncertainty when answering MCQs, obtained in the current phase of a project aimed at uncovering answering strategies, and we outline the central research direction that this work will follow.
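The abstract does not specify how model uncertainty is measured, so the sketch below is only an illustration of one common approach: comparing the probabilities a causal LM assigns to each answer letter as the next token, and taking the entropy of the renormalised option distribution as an uncertainty score. The model name "gpt2" and the example question are placeholders, not the project's actual setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model; any Hugging Face causal LM can be scored the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Hypothetical MCQ, formatted so that the answer letter is the next token.
prompt = (
    "Which planet is closest to the Sun?\n"
    "A. Venus\nB. Mercury\nC. Mars\nD. Earth\n"
    "Answer:"
)
options = ["A", "B", "C", "D"]

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]  # logits for next token
probs = torch.softmax(next_token_logits, dim=-1)

# Probability mass the model puts on each answer letter
# (leading space matches GPT-2's tokenisation after "Answer:").
option_ids = [tokenizer.encode(" " + o)[0] for o in options]
option_probs = probs[option_ids]
option_probs = option_probs / option_probs.sum()  # renormalise over options

# Entropy of the option distribution as one simple uncertainty measure:
# near 0 when the model is confident, near log(4) when maximally unsure.
entropy = -(option_probs * option_probs.log()).sum()
for o, p in zip(options, option_probs):
    print(f"{o}: {p.item():.3f}")
print(f"entropy: {entropy.item():.3f} nats")

Such an option-level distribution allows a direct comparison with student response patterns on the same items, though other uncertainty proxies (e.g., sampling-based agreement) are equally plausible readings of the abstract.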