About Me
Hello! I’m Suhas Arehalli, an Assistant Professor of Computer Science at Macalester College. Previously, I completed my PhD in the Cognitive Science Department at Johns Hopkins University, working with Tal Linzen.
My interests lie at the intersection of Sentence Processing, Computational Linguistics, and Theoretical Syntax: How are syntactic constraints enforced during real-time sentence processing, and how can modern computational modeling techniques help us generate and evaluate potential answers to that question? As a result, my research involves human-subjects experiments that characterize how humans process (or fail to process) language, as well as computational modeling work that investigates which training and architectural choices lead to neural language models that approximate those human characteristics.
Most recently, I’ve been interested in syntactic agreement and agreement attraction effects: What do the patterns of agreement attraction tell us about the mechanisms underlying syntactic processing in production and comprehension? How do the different constraints and goals of production and comprehension influence the role agreement plays in processing, and how do differences in that role manifest in agreement attraction? What inductive biases, whether architectural or task-based, lead a learner to exhibit the patterns of errors we see? What representational assumptions (i.e., syntactic formalisms/analyses) best explain the agreement processing behavior of humans and our models?
In addition to research, I’m passionate about education, mentorship, and outreach. In particular, I’m interested in the relationship between the machine learning techniques I use in my research, ethics, and public policy. Despite the growing role machine learning plays in today’s society, understanding how it works typically requires substantial technical background. Even among experts, the overlap between those with the relevant domain expertise (typically a background in the social sciences or humanities) and those with the relevant technical expertise is small enough to hinder a proper understanding of the risks and ethical implications of machine learning in the wild. To address this, I’m interested both in making a working understanding of machine learning techniques accessible to non-technical audiences and in making the technical expertise needed to analyze and interpret the behavior of machine learning models accessible to experts with minimal computational background.