Computational Psycholinguistics Lab

Sunday February 10th 2013, 10:01 pm

The unifying theme of our lab is fine-grained modeling and empirical investigation of how people are able to acquire and use language, combining ideas and tools from linguistics, artificial intelligence, psychology, and other areas of cognitive science.  Our research characteristically involves a combination of computational modeling, analysis of large and complex datasets (e.g., linguistic corpora), and behavioral experimentation. Our research is supported by:

the National Science Foundation (NSF), the National Institutes of Health (NIH), and the Alfred P. Sloan Foundation.

Our active research projects include:

Probabilistic expectations in human sentence comprehension

Language comprehension is harder than it seems: sentences are ambiguous, human memory is limited, and other aspects of our environment compete for our attention. Although misunderstandings are not infrequent, it's remarkable that most of the time we understand the sentences we hear to mean approximately what our interlocutors intend them to mean. There is now a great deal of evidence that a major factor in this success is the use of probabilistic knowledge to form expectations both about what sentences mean and about what our interlocutors are likely to say next. These expectations help us respond to our input efficiently and understand it accurately. A major project in our lab is a detailed explication of these ideas. Central to this project is the concept of surprisal: the idea that our probabilistic expectations guide our allocation of resources in a way that optimally prepares us to deal with linguistic input. We study both the empirical coverage of surprisal theory and its theoretical underpinnings in optimality-based analyses of linguistic cognition.
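As an illustration, surprisal is simply the negative log probability of a word in its context. Here is a minimal sketch in Python, with made-up conditional probabilities (not estimates from any real corpus or model):

```python
import math

def surprisal_bits(p_word_given_context):
    """Surprisal of a word: -log2 of its conditional probability in
    context. Under surprisal theory, higher values predict greater
    processing difficulty."""
    return -math.log2(p_word_given_context)

# Illustrative probabilities only: a highly expected continuation
# versus a garden-path-like surprise.
expected = surprisal_bits(0.5)      # 1.0 bit
unexpected = surprisal_bits(0.001)  # ~10 bits
```

In practice the conditional probabilities come from a probabilistic language model estimated on corpus data.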

Optimization in human language production

One consequence of the expressive richness of natural languages is that there is usually more than one way to express the same (or approximately the same) message. As a result, speakers are often confronted with choices about how to structure their intended message into an utterance. If language users are rational, they might choose to structure their utterances so as to optimize communicative properties. We have focused on two major types of optimization in natural language production. (1) Information-theoretic and psycholinguistic considerations suggest that speakers may maximize the uniformity of information density across an utterance. We have found evidence for this principle of Uniform Information Density in the relative clauses of spontaneous spoken American English, using a combination of computational models, corpus analysis, and behavioral experimentation, and we are investigating other grammatical choice phenomena, such as word order variation, for further evidence. (2) Psycholinguistic evidence suggests that memory constraints are an important bottleneck in language processing, and that minimizing word-word dependency distances may be an important factor in utterance optimality. We are working on efficiently computing minimal-distance linearizations of dependency trees subject to the universal structural constraints observed in natural language, and on evaluating how closely the structures observed in natural language corpora adhere to these optimal linearizations.
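The dependency-distance quantity at issue can be computed in a few lines of Python; the sentence and parse below are a hypothetical example, not output from our models:

```python
def total_dependency_length(heads):
    """Sum of word-word dependency distances in a sentence.

    heads[i] is the linear index of word i's head (0-based), or None
    for the root; each dependency's length is the absolute difference
    in linear position between head and dependent."""
    return sum(abs(i - h) for i, h in enumerate(heads) if h is not None)

# Hypothetical parse of "the dog chased the cat":
# the->dog, dog->chased, chased = root, the->cat, cat->chased
heads = [1, 2, None, 4, 2]
total = total_dependency_length(heads)  # 1 + 1 + 1 + 2 = 5
```

Minimizing this quantity over the linearizations a grammar permits is the optimization problem described above.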

Environmental and cognitive limitations in language comprehension

Although language comprehension may, at the computational level, be best characterized as pure probabilistic inference, several limitations on that inference must be taken into account. First, language comprehension, like all extraction of meaningful structure from perceptual input, takes place under noisy conditions. Second, the incremental processing algorithms that permit exact probabilistic inference have superlinear run time in sentence length and impose strict locality conditions on the probabilistic dependence between events at different levels of structure, whereas humans seem able to make use of arbitrary features of (extra-)linguistic context in forming incremental expectations. In recent work, we have explored the consequences of limiting comprehension models with respect to (a) the veridicality of the surface input and (b) the amount of memory available to the comprehender. It turns out that introducing these limitations can lead to elegant solutions to several outstanding problems for models of rational language comprehension.
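The non-veridical-input idea can be made concrete with a noisy-channel sketch: the comprehender infers the intended sentence s from perceived input w by Bayes' rule, with P(s | w) proportional to P(w | s)P(s). The sentences, prior probabilities, and noise parameters below are all invented for illustration:

```python
# P(s): plausibility of intended sentences (invented values).
prior = {
    "the dog bit the man": 0.7,
    "the dog bit the mat": 0.3,
}

def likelihood(perceived, intended):
    """P(perceived | intended): a toy word-substitution noise model
    that penalizes each mismatched word."""
    p = 1.0
    for w_p, w_i in zip(perceived.split(), intended.split()):
        p *= 0.9 if w_p == w_i else 0.1
    return p

def posterior(perceived):
    """P(s | w) by Bayes' rule, normalized over the candidate set."""
    scores = {s: likelihood(perceived, s) * prior[s] for s in prior}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

# Even having perceived "mat", the comprehender retains nontrivial
# belief in the more plausible "man" reading.
post = posterior("the dog bit the mat")
```

The qualitative point is that prior plausibility can partially override the literal input, which is exactly the behavior a veridical-input model cannot capture.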

Abstract knowledge versus direct reuse in language processing

Abstract linguistic knowledge uncontroversially plays an integral role in the processing of novel expressions, allowing us to understand sequences of words we have never encountered before. Deploying this abstract knowledge has measurable consequences, as it produces differential processing difficulty according to the grammatical and semantic properties of an utterance. But it remains an outstanding question what role this abstract knowledge plays in the processing of expressions that are not novel. For an expression with which speakers have previous experience, there are two possible processing strategies: the expression could be reprocessed using abstract knowledge, or it could be stored as a pre-assembled chunk for holistic reuse. One current line of research explores the relative contributions of these two strategies to the processing of non-novel expressions.

Bayesian models of semantics and pragmatics

How is it that we're able to mean so much more than the literal content of what we say? Answering this question is the task of pragmatics, a field at the intersection of linguistics, psychology, philosophy, and artificial intelligence. In rough strokes, there is broad agreement that the central principles behind the richness of our utterances' understood meanings include the relationship of an utterance to its context, the alternative utterances that could have been used but weren't, and social reasoning about our interlocutors' knowledge and goals. In the last few years there has been explosive growth in our understanding of pragmatics, fueled by two key developments: new Bayesian and game-theoretic models of speakers and listeners as recursive, decision-theoretic conversational agents, and improvements in the empirical methods used to test pragmatic theories. We carry out modeling and experimental work that helps us understand how speakers and listeners achieve rich pragmatic inferences and use language to, in effect, solve complex communicative games in novel contexts.
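The recursive-agent idea can be sketched with a minimal Rational Speech Acts (RSA)-style model of scalar implicature. The worlds, utterances, literal meanings, and rationality parameter below are all invented for illustration:

```python
# Two possible worlds and two utterances; "some" is literally true in
# both worlds, "all" only in the all-world.
worlds = ["some-not-all", "all"]
utterances = ["some", "all"]
meaning = {("some", "some-not-all"): 1, ("some", "all"): 1,
           ("all", "some-not-all"): 0, ("all", "all"): 1}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def literal_listener(u):
    """L0: condition a uniform prior over worlds on literal truth."""
    return normalize({w: meaning[(u, w)] for w in worlds})

def speaker(w, alpha=4.0):
    """S1: choose utterances softmax-rationally by informativity."""
    return normalize({u: literal_listener(u)[w] ** alpha
                      for u in utterances})

def pragmatic_listener(u):
    """L1: Bayesian inference over worlds given the speaker model."""
    return normalize({w: speaker(w)[u] for w in worlds})

# Hearing "some", L1 strongly infers the some-but-not-all world.
L1 = pragmatic_listener("some")
```

The enriched "some implies not all" inference falls out of the recursion, even though "some" is literally compatible with both worlds.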

Phonological generalizations in phonotactic learning

Experimental research shows that speakers are capable not only of recognizing whether a nonce form is an acceptable word of their language, but also of assigning gradient levels of acceptability both to sound sequences that are attested in the language and to those that are not. Modeling these judgments poses a significant challenge because speakers' judgments of attested patterns are usually modeled as distributions over the segmental sequences represented in the lexicon, whereas judgments of unattested sequences are more effectively modeled as distributions over lexically under-represented natural-class sequences. One of the main challenges in understanding generalization in phonotactic learning is the large number of natural-class representations that can describe a given data set, leading to a huge space of possible hypotheses for the learner to entertain. Understanding how speakers learn phonological generalizations over this hypothesis space is an important step toward understanding psychological salience in terms of phonological features and frequency, with broad implications for phonological learning and processing in general.
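A much-simplified version of gradient phonotactic scoring can be sketched with a segment-bigram model. The toy lexicon is invented, and real models in this line of work generalize over natural classes rather than raw segments:

```python
import math
from collections import Counter

# Invented mini-lexicon of attested word forms.
lexicon = ["blik", "brik", "blan", "bran", "slik"]

def bigrams(word):
    padded = "#" + word + "#"          # mark word edges
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

counts = Counter(bg for w in lexicon for bg in bigrams(w))
contexts = Counter(bg[0] for w in lexicon for bg in bigrams(w))

def score(word, alpha=0.1, vocab=30):
    """Mean add-alpha-smoothed bigram log-probability per transition;
    higher scores mean more acceptable-sounding forms."""
    bgs = bigrams(word)
    return sum(math.log((counts[bg] + alpha) /
                        (contexts[bg[0]] + alpha * vocab))
               for bg in bgs) / len(bgs)

# Gradient acceptability: attested "blik" > nonce-but-attested-like
# "blin" > phonotactically odd "bnik".
scores = {w: score(w) for w in ["blik", "blin", "bnik"]}
```

Even this crude model yields a gradient rather than a categorical acceptability judgment; the harder question discussed above is which class-based generalizations of such statistics learners actually entertain.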

Learning from multiple types of information

In many settings, a language learner is presented with an array of possible sources and types of information. In learning to segment words, for instance, an infant might seek out common phonemic sequences to treat as proto-words, or use prosodic cues to identify likely word boundaries. These different cues provide differing amounts of information, and in many learning problems, no one source of information is sufficient on its own. Instead, the learner must learn to identify how useful different cues are and balance them to create a mature representation of the linguistic phenomenon. We examine such learning problems and look at ways that increasing the complexity of the task (by using more types of information) can improve the learner’s ability to achieve a mature representation.
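One such cue, syllable transitional probability, can be sketched as follows. The syllable stream and boundary threshold are invented for illustration, and a real learner would weight this cue against others such as prosody:

```python
from collections import Counter

# A single segmentation cue: syllable transitional probability (TP),
# P(next syllable | current syllable). Dips in TP are candidate word
# boundaries. The stream below is invented: repetitions of three
# made-up words, "pabiku", "golatu", and "tibudo".
stream = ["pa", "bi", "ku", "go", "la", "tu", "pa", "bi", "ku",
          "ti", "bu", "do", "go", "la", "tu", "ti", "bu", "do",
          "pa", "bi", "ku", "go", "la", "tu"]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(a, b):
    """Transitional probability P(b | a) estimated from the stream."""
    return pair_counts[(a, b)] / first_counts[a]

# Posit a word boundary wherever TP dips below a stipulated threshold.
boundaries = [i + 1 for i, (a, b) in enumerate(zip(stream, stream[1:]))
              if tp(a, b) < 0.75]
```

In this toy stream, within-word transitions have TP 1.0 and between-word transitions at most 2/3, so every word boundary is recovered; with noisier input, no single cue is this reliable, which is why cue combination matters.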

The interface between comprehension and eye movement control in reading

We understand sentences piece by piece as we read or hear them, updating incremental representations of the sentence with each new word. When we read, this process of incremental comprehension influences the details of how we move our eyes: for example, we may linger for 50 ms longer on a more difficult word, or skip over a word entirely if it is very predictable given our incremental understanding of the sentence up to that point. This tight relationship between incremental comprehension and eye movements in reading has allowed much to be learned about incremental comprehension from tracking the eyes of readers. Nevertheless, the precise way in which comprehension influences the control of the eyes in reading has remained unclear. Work in our lab has investigated this relationship by combining formal, probabilistic models of incremental comprehension with machine learning techniques in order to derive the optimal ways to control the eyes in service of comprehension.