Welcome to our homepage! InCLow stands for Interpretable, Cognitively inspired, Low-resource Language Models.

We are a thriving research team led by Arianna Bisazza, working at the intersection of natural language processing and (cognitive) linguistics. Our projects revolve around two key questions:

  • How can we develop reliable language technologies that work well for a wide variety of languages?

  • What can we learn about human language while attempting to do so?

We’re proudly part of the larger Computational Linguistics group (GroNLP) at the University of Groningen, the Netherlands.

Take a look at our team members, publications, and reading group agenda!

Latest News

[Jun 2025] Arianna gave a talk at the COLT Symposium, organized by Gemma Boleda and Marco Baroni at UPF, Barcelona. This was followed by a round-table on the convergence (or lack thereof) between findings coming from the fields of experimental and theoretical linguistics, neuroscience and the study of large language models.

[May 2025] Arianna and Gabriele were interviewed by Groningen’s Oog Radio station. The questions covered (of course) AI, but also the GroNLP group, what makes this research field so exciting (and a bit crazy nowadays!)… and a bit of music :) Recording available here: HappyHourFM.

[May 2025] Jaap gave a talk in Edinburgh on MultiBLiMP: A Multilingual Benchmark of Linguistic Minimal Pairs.

[May 2025] Master's student Anais Almendra from the University of Chile joined us for a 3-month visit. She works on morphological analysis of the endangered language Mapudungun.

[April 2025] Gabriele presented his research on word-level quality estimation for post-editing and context attribution for language models at the Multilinguality and Language Technology group of DFKI Saarland and the DEEL/FOR teams at IRT Saint-Exupéry.

[April 2025] Jirui gave two talks, at KPN and the University of Amsterdam, covering three topics in Retrieval-Augmented Generation: MIRAGE, using likelihood to gauge answer accuracy, and consistency in multilingual context utilization.

[Mar 2025] Arianna gave a keynote talk at NoDaLiDa/Baltic-HLT 2025, titled “Not all Language Models need to be Large: Studying Language Evolution and Acquisition with Modern Neural Networks”.

[Feb 2025] PhD student Akari Haga from NAIST, Japan, joined us for a 6-month visit. She works on BabyLM-style models for Japanese.