about

I am a Postdoctoral Researcher at UCLouvain (Belgium), working within CENTAL (Centre de Traitement automatique du Langage) under the supervision of Marie-Catherine de Marneffe. My current research focuses on explainability in the face of label variation, as elicited in both humans and LLMs.
Prior to my current appointment, I was a Postdoctoral Researcher at KU Leuven (Belgium), working within the LAGoM-NLP group led by Miryam de Lhoneux. There, I co-led a large-scale audit of Wikipedia as an NLP resource and developed the texieve Multilingual NLP toolkit.
Before that, I earned my PhD in computational linguistics at Uppsala University (Sweden), supervised by Joakim Nivre and Anders Søgaard. My dissertation focused on the syntactic knowledge encoded by language models, investigated through the lens of dependency parsing (available here).
Before my PhD, I completed the Erasmus Mundus Language and Communication Technology program, spending my first year at the University of Groningen (Netherlands) and my second at the University of the Basque Country (Spain).
I am from Western Massachusetts, USA.
publications
- K Tatariya, A Kulmizev*, W Poelman, E Ploeger, M Bollmann, J Bjerva, J Luo, H Lent, M de Lhoneux: How Good is Your Wikipedia? Auditing Data Quality for Low-resource and Multilingual NLP. Preprint, under review.
- A Kulmizev, J Nivre: Investigating UD Treebanks via Dataset Difficulty Measures. EACL 2023. Dubrovnik, Croatia.
- M Abdou, V Ravishankar, A Kulmizev, A Søgaard: Word Order Does Matter and Shuffled Language Models Know It. ACL 2022. Dublin, Ireland.
- A Kulmizev, J Nivre: Schrödinger’s Tree – On Syntax and Neural Language Models. Frontiers in Artificial Intelligence, 2022.
- M Abdou, A Kulmizev, D Hershcovich, S Frank, E Pavlick, A Søgaard: Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color. CoNLL 2021. Punta Cana, DR.
- Z Luo, A Kulmizev, X Mao: Positional Artefacts Propagate Through Masked Language Model Embeddings. ACL 2021. Digital.
- V Ravishankar, A Kulmizev*, M Abdou, A Søgaard, J Nivre: Attention Can Reflect Syntactic Structure (If You Let It). EACL 2021. Digital.
- A Kulmizev, V Ravishankar, M Abdou, J Nivre: Do Neural Language Models Show Preferences for Syntactic Formalisms? ACL 2020. Digital.
- A Kulmizev, M de Lhoneux, J Gontrum, E Fano, J Nivre: Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing – A Tale of Two Parsers Revisited. EMNLP 2019. Hong Kong.
- M Abdou, A Kulmizev, F Hill, D Low, A Søgaard: Higher-order Comparisons of Sentence Encoder Representations. EMNLP 2019. Hong Kong.
- M Abdou, A Kulmizev, V Ravishankar, L Abzianidze, J Bos: What can we learn from Semantic Tagging? EMNLP 2018. Brussels, Belgium.
- M Abdou, A Kulmizev, V Ravishankar: [MGAD: Multilingual Generation of Analogy Datasets](https://www.aclweb.org/anthology/L18-1320.pdf). LREC 2018. Miyazaki, Japan.
* equal contribution
