(last update: May 2017)
I’m interested in understanding how people process language, focusing in particular on the emergence of meaning from interaction. What speakers mean is often underspecified in what they actually say, and I want to understand how listeners infer the missing pieces of the puzzle.
Recently, my main focus in addressing this rather broad question has been on ellipsis, in particular Verb Phrase Ellipsis. In some sense, elliptical utterances represent an extreme form of underspecification, but how the missing information is inferred, and even what level of representation the inference operates on, remain highly controversial. My own research on the topic is embedded in a working theory that takes Verb Phrase Ellipsis to be a form of discourse reference, and it aims to explain how the linguistic context interacts with real-world knowledge and domain-general reasoning in determining the meaning of elliptical utterances.
Beyond ellipsis, I have approached the topic of inferential language comprehension from two other angles: the rational resolution of multiple implicature-driving forces, and a noisy-channel approach to non-literal interpretation.

The implicature work concerns the resolution of Quantity and Informativeness, two major opposing pressures in language production, which are mirrored as interpretational forces on the side of the listener. Our drosophila in this project are sentences like The man injured a child, which tends to be interpreted as meaning that the injured child was not the man’s own child (presumably by Q-implicature from the inexpensive alternative his child), and The man broke a finger, which typically receives the opposite interpretation: that the broken finger is the man’s own finger. We found that multiple factors interact in determining the interpretation of individual sentences, and we are modeling this interaction in Frank and Goodman’s (2012) Rational Speech-Act framework.

The noise-inference project explores the noise-model component of Gibson et al.’s (2013) noisy-channel model of sentence comprehension. In addition to insertions and deletions, we have experimental evidence that listeners consider the possibility of exchange errors when interpreting utterances, which suggests that their noise model is structure-sensitive.
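For readers unfamiliar with the Rational Speech-Act framework mentioned above, its core recursion can be sketched in a few lines of code. The toy lexicon, utterance costs, and rationality parameter below are illustrative assumptions for the a child / his child example, not the actual model or stimuli from our project.

```python
import math

# Minimal Rational Speech-Act (RSA) sketch in the spirit of Frank and
# Goodman (2012): a pragmatic listener reasons about a pragmatic speaker,
# who in turn reasons about a literal listener.

utterances = ["a child", "his child"]
meanings = ["own", "other"]  # whose child was injured?

# Literal semantics: "his child" is only true of the man's own child,
# while "a child" is literally compatible with both readings.
literal = {
    ("a child", "own"): 1.0,
    ("a child", "other"): 1.0,
    ("his child", "own"): 1.0,
    ("his child", "other"): 0.0,
}
prior = {"own": 0.5, "other": 0.5}         # flat prior over meanings
cost = {"a child": 0.0, "his child": 0.1}  # "his child" is slightly costlier
ALPHA = 4.0                                # speaker rationality (assumed)

def l0(m, u):
    """Literal listener: P(m | u) proportional to literal(u, m) * prior(m)."""
    z = sum(literal[u, mm] * prior[mm] for mm in meanings)
    return literal[u, m] * prior[m] / z

def s1(u, m):
    """Pragmatic speaker: P(u | m) proportional to exp(ALPHA * (log L0 - cost))."""
    def score(uu):
        p = l0(m, uu)
        return math.exp(ALPHA * (math.log(p) - cost[uu])) if p > 0 else 0.0
    return score(u) / sum(score(uu) for uu in utterances)

def l1(m, u):
    """Pragmatic listener: P(m | u) proportional to S1(u | m) * prior(m)."""
    z = sum(s1(u, mm) * prior[mm] for mm in meanings)
    return s1(u, m) * prior[m] / z

# Hearing "a child", the pragmatic listener favors the "other" reading,
# because a speaker meaning "own" would likely have said "his child".
print(round(l1("other", "a child"), 3))  # prints 0.921
```

The Q-implicature falls out of the recursion: even though "a child" is literally true of both readings, the availability of the more informative alternative "his child" pulls the pragmatic listener toward the "not his own child" interpretation.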
I am also interested in the acquisition of word meanings (which is the problem that brought me to linguistics in the first place!), though this question has been on the back burner recently. I have a growing suspicion that the learning problem is intimately linked to the way interlocutors coordinate their communicative intentions in real time, and I hope that the processing questions I am currently working on will eventually lead me back to word learning.