A. (Arianna) Bisazza, PhD

Publications

A Primer on the Inner Workings of Transformer-based Language Models

Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation

BabyLM Challenge: Exploring the Effect of Variation Sets on Language Model Training Efficiency

Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit

Encoding of lexical tone in self-supervised models of spoken language

Endowing Neural Language Learners with Human-like Biases: A Case Study on Dependency Length Minimization

Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation

NeLLCom-X: A Comprehensive Neural-Agent Framework to Simulate Language Learning and Group Communication

Non Verbis, Sed Rebus: Large Language Models Are Weak Solvers of Italian Rebuses

The SIFo benchmark: Investigating the sequential instruction following ability of large language models


Press/media

Can Word-level Quality Estimation Inform and Improve Machine Translation Post-editing?