2024

Arehalli, Suhas and Linzen, Tal. Neural networks as cognitive models of the processing of syntactic constraints. Open Mind. paper

Huang, Kuan-Jung, Arehalli, Suhas, Kugemoto, Mari, Muxica, Christian, Prasad, Grusha, Dillon, Brian, and Linzen, Tal. Large-scale benchmark yields no evidence that language model surprisal explains syntactic disambiguation difficulty. JML. paper

Arehalli, Suhas and Linzen, Tal. Syntactic Effects on Agreement Attraction in Vocab-Limited Reading Experiments. HSP 2024. abstract

Timkey, Will, Arehalli, Suhas, Huang, Kuan-Jung, Prasad, Grusha, Dillon, Brian, and Linzen, Tal. Large-scale eye-tracking-while-reading benchmark shows surprisal captures early fixations, not regressions. HSP 2024.

2023

Kobzeva, Anastasia, Arehalli, Suhas, Linzen, Tal, and Kush, Dave. Neural Networks Can Learn Patterns of Island-insensitivity in Norwegian. SCiL 2023. paper

2022

Arehalli, Suhas, Dillon, Brian, and Linzen, Tal. Syntactic Surprisal from Neural Models Predicts, but Underestimates, Human Processing Difficulty From Syntactic Ambiguities. CoNLL 2022. arXiv, ACL Anthology. Distinguished Paper.

Kobzeva, Anastasia, Arehalli, Suhas, Linzen, Tal, and Kush, Dave. LSTMs can learn basic wh- and relative clause dependencies in Norwegian. CogSci 2022. paper

Arehalli, Suhas, Dillon, Brian, and Linzen, Tal. Syntactic Surprisal from Neural Language Models tracks Garden Path Effects. HSP 2022. poster

Huang, Kuan-Jung, Arehalli, Suhas, Kugemoto, Mari, Muxica, Christian, Prasad, Grusha, Dillon, Brian, and Linzen, Tal. SPR mega-benchmark shows surprisal tracks construction- but not item-level difficulty. HSP 2022. talk

Kobzeva, Anastasia, Arehalli, Suhas, Linzen, Tal, and Kush, Dave. What can an LSTM language model learn about filler-gap dependencies in Norwegian? HSP 2022. poster

2021

Arehalli, Suhas, Linzen, Tal, and Legendre, Geraldine. Syntactic intervention cannot explain agreement attraction in English wh-questions. AMLaP 2021. short talk

Arehalli, Suhas and Wittenberg, Eva. Experimental Filler Design Influences Error Correction Rates in a Word Restoration Paradigm. Linguistics Vanguard. paper

2020

Arehalli, Suhas and Linzen, Tal. Neural language models capture some, but not all, agreement attraction phenomena. Annual Meeting of the Cognitive Science Society, 2020. paper

Arehalli, Suhas and Linzen, Tal. Neural language models capture some, but not all, agreement attraction phenomena. CUNY Conference on Human Sentence Processing, 2020. poster

2018

Arehalli, Suhas and Wittenberg, Eva. Your Ears or Your Brain: Noise structure can hide grammatical preferences. AMLaP 2018. talk

Arehalli, Suhas and Wittenberg, Eva. The Mess Reveals the System: People use top down cues to resolve errors in contexts with highly random noise, but not with highly structured noise. CUNY 2018. poster

2017

Arehalli, Suhas and Wittenberg, Eva. The Mess Reveals the System: People use top down cues to resolve errors in contexts with highly random noise, but not with highly structured noise. California Meeting on Psycholinguistics. talk