BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data

Abstract

We present BabyBabelLM, a multilingual collection of datasets modeling the language a person is exposed to from birth until the acquisition of a native language. We curate developmentally plausible pretraining data in each of 45 languages, aiming to cover the equivalent of 100M English words of content per language. We compile evaluation suites and train baseline models in each language. BabyBabelLM aims to facilitate multilingual pretraining and cognitive modeling.
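For readers who want to experiment with the data, the sketch below shows one plausible way to load a single language's training split with the Hugging Face datasets library and sanity-check its size against the 100M-English-word-equivalent budget. The dataset ID, configuration name, and text field are illustrative assumptions, not the project's confirmed release path.

```python
# Minimal sketch: loading one BabyBabelLM language with the Hugging Face
# `datasets` library. The dataset ID ("babylm-community/babybabellm"),
# the config name ("nld", Dutch), and the "text" field are ASSUMED for
# illustration; check the project's release page for the actual names.
from datasets import load_dataset

dataset = load_dataset("babylm-community/babybabellm", "nld", split="train")

# Rough corpus-size check: whitespace tokenization is only a crude proxy
# for the paper's English-word-equivalent accounting.
total_words = sum(len(example["text"].split()) for example in dataset)
print(f"{len(dataset)} documents, ~{total_words:,} whitespace-delimited words")
```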

Publication
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

Authors
Jaap Jumelet
Post-Doc
Akari Haga
PhD Student
Arianna Bisazza
Associate Professor