Semantic Role Extraction in Law Texts: A Comparative Analysis of Language Models for Legal Information Extraction

Roos M. Bakker

TNO; Leiden University

Juul A. Schoevers

TNO

Romy A.N. van Drie

TNO

Marijn P. Schraagen

Utrecht University

Maaike H. T. de Boer

TNO

Norms are essential in our society: they dictate how individuals should behave and interact within a community. They can be written down in laws or other written sources. Interpretations of these sources often differ; this is where formalisations offer a solution, as they express an interpretation of a source of norms in a transparent manner. However, creating these formalisations is labour-intensive. Natural language processing techniques can support this process. Previous work showed the potential of transformer-based models for Dutch law texts. In this paper, we 1) introduce a dataset of 2335 English sentences annotated with legal semantic roles conforming to the Flint framework; 2) fine-tune a collection of language models on this dataset; and 3) query two non-fine-tuned generative large language models (LLMs). This allows us to compare the performance of fine-tuned domain-specific, task-specific, and general language models with that of non-fine-tuned generative LLMs. The results show that models fine-tuned on our dataset perform best (accuracy around 0.88). Furthermore, domain-specific models outperform general models, indicating that domain knowledge is of added value for this task. Finally, the different methods of querying LLMs perform unsatisfactorily, with maximum accuracy scores around 0.6. This indicates that for specific tasks, such as this adaptation of semantic role labelling, annotating data and fine-tuning a smaller language model is preferable to querying a generative LLM, especially when domain-specific models are available.
© 2024 CLIN 34 Organisers. All rights reserved.