LLMs outperform outsourced human coders on complex textual analysis
Publication date
2025-11-17
ISSN
2045-2322
Abstract
This paper evaluates the effectiveness of large language models (LLMs) in extracting complex information from text data. Using a corpus of Spanish news articles, we compare how accurately various LLMs and outsourced human coders reproduce expert annotations on five natural language processing tasks, ranging from named entity recognition to identifying nuanced political criticism in news articles. We find that LLMs consistently outperform outsourced human coders, particularly in tasks requiring deep contextual understanding. These findings suggest that current LLM technology offers researchers without programming expertise a cost-effective alternative for sophisticated text analysis.
Document Type
Article
Document version
Published version
Language
English
Pages
19 p.
Publisher
Springer Nature
Is part of
Scientific Reports, Vol. 15, 40122
Rights
© The author(s)
Except where otherwise noted, this item is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/).