Authors: Ribeiro, Eugénio; Antunes, David; Mamede, Nuno; Baptista, Jorge
Title: Exploring few-shot approaches to automatic text complexity assessment in European Portuguese
Type: journal article
Dates: 2026-04-29; 2025-08-21
ISSN: 1678-4804
URI: http://hdl.handle.net/10400.1/28799
DOI: 10.5753/jbcs.2025.5820
Language: English
Keywords: Text complexity; Readability; Few-shot prompting; Large language models

Abstract: The automatic assessment of text complexity has an important role to play in language education. In this study, we shift the focus from L2 learners to adult native speakers with low literacy by exploring the new iRead4Skills dataset in European Portuguese. Furthermore, instead of relying on classical machine learning approaches or fine-tuning a pre-trained language model, we leverage the capabilities of prompt-based Large Language Models (LLMs), with a special focus on few-shot prompting. We explore prompts with varying degrees of information, as well as different example selection strategies. Overall, the results of our experiments reveal that even a single example significantly increases the performance of the model, and that few-shot approaches generalize better than fine-tuned models. However, automatic complexity assessment is a difficult and highly subjective task that is still far from solved.
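The abstract describes few-shot prompting for complexity classification, where labeled examples are placed in the prompt before the target text. The following is a minimal illustrative sketch of how such a prompt can be assembled; the instruction wording, the complexity labels, and the example texts are hypothetical placeholders, not the actual iRead4Skills levels or the authors' prompts.

```python
# Sketch of few-shot prompt construction for text complexity classification.
# Labels ("simple"/"complex") and texts are illustrative assumptions only.

def build_prompt(examples, target_text):
    """Assemble a few-shot prompt: each in-context example pairs a text
    with its complexity label; the target text is appended with an empty
    label slot for the model to complete."""
    parts = ["Classify the complexity of the following Portuguese texts."]
    for text, label in examples:
        parts.append(f"Text: {text}\nComplexity: {label}")
    parts.append(f"Text: {target_text}\nComplexity:")
    return "\n\n".join(parts)

examples = [
    ("O gato dorme.", "simple"),
    ("A ponderação dos fatores macroeconómicos exige análise cuidada.", "complex"),
]
prompt = build_prompt(examples, "O tempo está bom hoje.")
print(prompt.count("Complexity:"))  # → 3 (one per example plus the query slot)
```

Example selection strategies, which the study compares, would correspond to how the `examples` list is chosen (e.g. at random or by similarity to the target text).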