
Brain signatures of affective phenomena

How are emotions represented in the brain? Are these representations distinct from perceptual and cognitive processes? These questions are at the core of understanding the nature of the mind, yet answers to them often rest largely on introspection. The goal of this research is to leverage advances in machine learning to develop objective models of human brain activity that can detect the engagement of cognitive and affective processes, and to use these models to inform theoretical debates about the mind.

 

Computational models of human behavior

How are sensory inputs transformed to produce emotional behavior? Are representations learned by computational models consistent with those in the human brain? Can we train a machine to “understand” human emotion? We explore these questions using computational approaches (e.g., machine learning and neural networks) to model human behavior, including self-reports of subjective experience.

Current Projects

  • Influence of Emotions on Decision Making

    In this project, we aim to understand whether subjective decision making in emotional contexts is the result of automatic as opposed to deliberative processing. While many previous studies have related dual-process models of decision making to learning about rewards and threats, here we manipulate the information available about threats, the time available for information processing, and the sensory complexity of outcomes to test how each influences decision making. We hope to build a fuller understanding of how these processes shape decisions about emotionally complex stimuli.
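
    One standard way to formalize the contrast between fast, automatic responding and slower, deliberative evidence accumulation is a sequential-sampling model such as the drift-diffusion model. The sketch below is a generic illustration of that idea, not this project's actual model, and all parameter values are assumptions; it shows how a response deadline forces choices before enough evidence has accumulated:

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, deadline=2.0):
            """Simulate one drift-diffusion trial: noisy evidence accumulates
            at the given drift rate until it hits +/- threshold (a committed
            choice) or the deadline passes (an undecided, forced response)."""
            x, t = 0.0, 0.0
            while t < deadline:
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
                if abs(x) >= threshold:
                    return (1 if x > 0 else -1), t
            return None, deadline

        # Shorter deadlines leave more trials undecided, forcing faster,
        # less evidence-based responding.
        for deadline in (0.3, 2.0):
            choices = [simulate_ddm(0.8, deadline=deadline)[0] for _ in range(500)]
            undecided = sum(c is None for c in choices)
            print(f"deadline={deadline}s: {undecided}/500 trials hit the deadline")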

  • Neurocomputational Validation of Affective Valence

    Affective valence, or the experience of pleasantness or unpleasantness produced by a stimulus, is a fundamental aspect of mental health and well-being. Valence is often defined in terms of two distinct biobehavioral systems: one underlying approach and other positive behaviors, and another underlying defensive responses to threats and other negative stimuli. These systems likely drive changes in behavior, subjective experience, and brain function linked to mental illness. We explore these systems by developing artificial neural networks that learn emergent representations of valence in order to efficiently characterize human behavior and brain function.
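
    As one sketch of what an "emergent representation of valence" can mean in practice, the toy architecture below compresses stimulus features through a two-unit bottleneck before predicting behavior, so that any valence-like dimensions must emerge in that bottleneck. All layer sizes and dimensions here are illustrative assumptions, not the project's actual architecture:

        import torch
        import torch.nn as nn

        class ValenceBottleneck(nn.Module):
            """Toy model: stimulus features pass through a 2-unit bottleneck
            (a candidate locus for emergent approach/avoid valence dimensions)
            before predicting behavioral outputs."""
            def __init__(self, stim_dim=128, n_behaviors=4):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(stim_dim, 64), nn.ReLU(),
                                             nn.Linear(64, 2))  # bottleneck
                self.decoder = nn.Linear(2, n_behaviors)

            def forward(self, stimulus):
                valence = self.encoder(stimulus)  # learned low-D representation
                return self.decoder(valence), valence

        model = ValenceBottleneck()
        behavior, valence = model(torch.randn(1, 128))
        print(valence.shape)  # torch.Size([1, 2])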

  • Subcortical Pathways for Looming and Threat

    Rapidly approaching objects, like incoming predators or projectiles, are often dangerous. Many species of animals can detect and avoid the specific patterns of looming visual motion associated with such objects, using quite similar neural mechanisms. For example, mammals encode looming motion in the superior colliculus, a region of the midbrain that coordinates rapid reorienting to salient signals in the environment. In this project, we are using computational models from nonhuman animal research (a sketch of one such model follows the questions below) to investigate how humans respond to looming motion:
    1. How does looming motion in videos predict people's emotional responses to those videos?
    2. How does the human superior colliculus encode looming motion?
    3. How does the human superior colliculus encode people's emotional responses to looming motion in videos?
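
    One widely used looming model from the nonhuman animal literature is the eta function of Gabbiani and colleagues, in which the response tracks angular expansion velocity multiplied by a negative exponential of angular size, producing a peak shortly before collision. The sketch below computes this signal for a simulated approaching object; the parameter values, frame rate, and object geometry are illustrative assumptions:

        import numpy as np

        def angular_size(radius, distance):
            """Visual angle (radians) subtended by an object of a given
            physical radius at a given distance from the observer."""
            return 2.0 * np.arctan(radius / distance)

        def eta_looming_signal(theta, dt, alpha=3.0, C=1.0):
            """Eta-style looming signal (after Gabbiani et al.): angular
            expansion velocity times a negative exponential of angular size.
            alpha and C are free parameters; the published model also includes
            a fixed neural delay, omitted here for simplicity."""
            theta_dot = np.gradient(theta, dt)  # angular expansion velocity
            return C * theta_dot * np.exp(-alpha * theta)

        # Object of radius 0.5 m approaching at 10 m/s, sampled at 100 Hz,
        # stopping just short of collision.
        dt = 0.01
        t = np.arange(0.0, 1.9, dt)
        distance = 20.0 - 10.0 * t
        eta = eta_looming_signal(angular_size(0.5, distance), dt)
        print("signal peaks %.2f s before closest approach"
              % (t[-1] - t[np.argmax(eta)]))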

  • Emotion Regulation in Context

    Emotion generation and regulation, such as cognitively reappraising a situation to change its emotional meaning, are part and parcel of everyday life and essential to healthy functioning. But how does regulation actually work? How do we know what people are doing when they are ostensibly regulating? And how can we develop a computationally explicit account of emotion regulation? One potential avenue is to use deep language models to treat language as a proxy for cognitive processes such as regulation. In this project, we aim to identify the linguistic predictors of regulation across an ontology of regulation strategies, to parse out how specific strategies interact with different types of emotional events, and to map representations from deep language models onto human brain activity.
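
    As a minimal sketch of this kind of pipeline, the example below embeds free-text descriptions of regulation with a pretrained sentence encoder and fits a linear probe over a small strategy ontology. The encoder name, example texts, and labels are illustrative stand-ins, not the project's materials:

        from sentence_transformers import SentenceTransformer
        from sklearn.linear_model import LogisticRegression

        # Hypothetical self-reports of what someone did to regulate, labeled
        # with an assumed three-strategy ontology.
        texts = [
            "I reminded myself that the turbulence was routine and safe.",
            "I kept replaying the argument over and over in my head.",
            "I looked away from the screen during the surgery scene.",
            "I told myself the rejection was a chance to improve.",
        ]
        labels = ["reappraisal", "rumination", "distraction", "reappraisal"]

        # Deep language model embeddings serve as a proxy for the underlying
        # regulatory processing described in the text.
        encoder = SentenceTransformer("all-MiniLM-L6-v2")
        X = encoder.encode(texts)

        # Linear probe: which strategy does a new description reflect?
        clf = LogisticRegression(max_iter=1000).fit(X, labels)
        print(clf.predict(encoder.encode(["I focused on my breathing instead."])))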

  • Towards Noninvasive Control of Amygdala Activity with Deep Stimulus Generation

    Is the amygdala causally involved in the subjective experience of fear? Currently available techniques for manipulating amygdala activity are imprecise or invasive and are restricted to limited patient populations; noninvasive techniques for precise control of the amygdala are needed. This project combines naturalistic neuroimaging with deep generative neural networks to create stimuli that precisely and noninvasively control activity in the amygdala, allowing us to examine effects on avoidance behaviors and the subjective experience of fear.
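
    The general recipe behind deep stimulus generation is activation maximization through a generative model: optimize a generator's latent code so that an encoding model predicts a large response in the target region. The PyTorch sketch below assumes a pretrained, differentiable generator G (latent to image) and encoding model predict_amygdala (image to predicted response); both are hypothetical stand-ins, since the project's actual models are not specified here:

        import torch

        def generate_control_stimulus(G, predict_amygdala, latent_dim=128,
                                      steps=200, lr=0.05):
            """Gradient-ascend a generator latent so the synthesized image
            maximizes the predicted amygdala response of an encoding model.
            G and predict_amygdala are assumed pretrained and differentiable."""
            z = torch.randn(1, latent_dim, requires_grad=True)
            opt = torch.optim.Adam([z], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                image = G(z)                        # latent -> candidate stimulus
                response = predict_amygdala(image)  # image -> predicted activity
                loss = -response.mean()             # ascend the prediction
                loss.backward()
                opt.step()
            return G(z).detach()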

  • Learning to Understand Emotions in Humans and Machines

    Interpreting how another person is feeling from a complex range of cues in varied contexts is a challenging problem, and yet the human brain solves it every day. Many studies of emotion perception use contextless, artificial stimuli, limiting our ability to understand this process as it happens in the real world. In this project, we explore how emotion understanding occurs in naturalistic contexts by using artificial neural networks to model different emotion signals: facial expressions, vocal tone, and language. We examine how each of these relates to the judgments humans make about others’ emotions, as well as to brain activity in regions known to process social emotions.
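
    A common way to combine such modality-specific models is late fusion: each signal is encoded by its own pretrained network, and the resulting embeddings are concatenated to predict human emotion judgments. The sketch below is illustrative only; the upstream encoders are assumed rather than shown, and all dimensions are arbitrary:

        import torch
        import torch.nn as nn

        class EmotionFusion(nn.Module):
            """Late-fusion model: concatenate face, voice, and language
            embeddings (from assumed pretrained encoders, not shown) and
            regress onto human emotion judgments."""
            def __init__(self, face_dim=512, voice_dim=256, text_dim=768,
                         n_emotions=8):
                super().__init__()
                self.head = nn.Sequential(
                    nn.Linear(face_dim + voice_dim + text_dim, 256),
                    nn.ReLU(),
                    nn.Linear(256, n_emotions),  # one rating per category
                )

            def forward(self, face, voice, text):
                return self.head(torch.cat([face, voice, text], dim=-1))

        # Toy usage with random stand-in embeddings for one video clip.
        model = EmotionFusion()
        ratings = model(torch.randn(1, 512), torch.randn(1, 256),
                        torch.randn(1, 768))
        print(ratings.shape)  # torch.Size([1, 8])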