PsychSci1.jpg

How do neural and behavioral coupling support successful communication?

A human interaction, whether it’s a conversation or a musical duet, is an intricate dance of many dynamic, interdependent components. For two people to exchange information successfully, they must incorporate each other’s feedback in real time and align their neural representations. We use highly naturalistic paradigms, cutting-edge neuroimaging techniques, and computational analysis tools to understand these processes.

In a recent study, we used functional near-infrared spectroscopy (pictured here) to simultaneously record brain activity from adult-infant dyads while they played, sang, and read a storybook. We found significant neural coupling within each dyad during direct interaction, with the infant's prefrontal cortex slightly preceding and driving similar activation in the adult brain. We also found that both brains continuously tracked the moment-to-moment fluctuations of communicative behaviors (mutual gaze, smiling, and vocal acoustics). This work extends our earlier finding that adults’ brains are strongly coupled when they listen to real-life stories by clarifying how children shape the accommodative behaviors of their caregivers.
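
One way to ask whether one brain's activity precedes the other's is to compute a lagged cross-correlation between the two signals. The sketch below is a minimal Python illustration, assuming preprocessed fNIRS time series sampled at a common rate; the simulated signals, sampling rate, and lag window are placeholders rather than the study's actual pipeline.

```python
import numpy as np

def lagged_correlation(infant, adult, fs, max_lag_s=5.0):
    """Pearson correlation between two time series at a range of temporal lags.

    Positive lags mean the infant signal precedes (leads) the adult signal."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(len(lags))
    for i, lag in enumerate(lags):
        if lag >= 0:  # infant leads: compare infant now with adult `lag` samples later
            a, b = infant[:len(infant) - lag], adult[lag:]
        else:         # adult leads
            a, b = infant[-lag:], adult[:len(adult) + lag]
        r[i] = np.corrcoef(a, b)[0, 1]
    return lags / fs, r

# Illustrative use with simulated 10 Hz signals (not real data)
fs = 10.0
rng = np.random.default_rng(0)
infant = rng.standard_normal(3000)
adult = np.roll(infant, 5) + rng.standard_normal(3000)  # adult trails infant by 0.5 s
lag_s, r = lagged_correlation(infant, adult, fs)
print(f"peak coupling at lag {lag_s[np.argmax(r)]:+.1f} s (r = {r.max():.2f})")
```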

Finally, what functional purpose does coupling serve? Does aligning our neural representations with others help us engage with the moment-to-moment dynamics of sounds in a way that enhances real-time learning? In two recent studies, we’ve shown that the degree of both pupil synchrony and neural synchrony between toddlers listening to natural speech predicts how well they learn new words.
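
At the group level, that synchrony-learning link can be tested with a simple correlation across pairs: compute a synchrony score for each pair of listeners, then relate it to their word-learning performance. The sketch below uses randomly generated pupil traces and hypothetical learning scores purely to show the shape of the analysis.

```python
import numpy as np
from scipy.stats import pearsonr

def pupil_synchrony(pupil_a, pupil_b):
    """Moment-to-moment synchrony as the Pearson correlation of two pupil traces."""
    return pearsonr(pupil_a, pupil_b)[0]

# Placeholder data: one pair of pupil traces and one learning score per pair of toddlers
rng = np.random.default_rng(1)
n_pairs, n_samples = 12, 600
traces = [(rng.standard_normal(n_samples), rng.standard_normal(n_samples))
          for _ in range(n_pairs)]
learning = rng.uniform(0.4, 1.0, size=n_pairs)  # hypothetical proportion of words learned

synchrony = np.array([pupil_synchrony(a, b) for a, b in traces])
r, p = pearsonr(synchrony, learning)
print(f"across pairs: r = {r:.2f}, p = {p:.3f}")
```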

musicgears_cropped.jpg

How do listeners efficiently process complex natural sounds?

We are interested in how human listeners use statistical representations to understand complex natural sounds, such as speech and music. In one study, participants heard a series of tone sequences, and they were surprisingly good at estimating the average pitch of each sequence, even though they could recall very little information about the individual tones. This suggests that they had transformed the local details into a more concise “gist”, or statistical summary representation. This phenomenon likely helps us understand tone of voice and other features of speech.
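
The statistical intuition behind this “gist” can be shown with a small simulation: even when the memory of each individual tone is noisy, averaging over the sequence yields a much more precise estimate of its mean pitch. The parameters below (sequence length, memory noise) are assumptions for illustration, not values from the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_tones = 10_000, 8
memory_noise_sd = 2.0  # assumed noise (in semitones) on the memory of each tone

true_pitches = rng.uniform(60, 72, size=(n_trials, n_tones))  # MIDI note numbers
remembered = true_pitches + rng.normal(0, memory_noise_sd, size=true_pitches.shape)

# Error when reporting one individual tone vs. the average of the whole sequence
single_tone_error = np.abs(remembered[:, 0] - true_pitches[:, 0]).mean()
average_error = np.abs(remembered.mean(axis=1) - true_pitches.mean(axis=1)).mean()

print(f"error recalling a single tone:      {single_tone_error:.2f} semitones")
print(f"error estimating the sequence mean: {average_error:.2f} semitones")
# Averaging noisy traces shrinks the noise by ~sqrt(n_tones), so the summary
# (the mean pitch) can be reported more precisely than any individual tone.
```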

In a recent review paper, we describe how statistical summary mechanisms for efficiently processing communicative input may develop over time and may be supported by different brain regions along the neural hierarchy (i.e., across areas that have been shown to represent the statistics of speech, from syllables to entire stories).

TimbreFig1.jpg

How do we perceptually recalibrate to new auditory environments?

Adaptation recalibrates the brain's response to the current environment. We know a lot about how listeners adapt to the frequency or loudness of sounds, but how do our perceptual systems adjust to the timbre (i.e., "tone color") of natural sounds, such as the buzz of a muted trumpet or Billie Holiday's raspy voice? In one study, we report rapid, widespread perceptual adaptation to the timbre of a variety of highly natural sounds (musical instruments, speech, animal calls, natural textures) that survives the kinds of pitch changes found in natural listening environments. Our results suggest that timbre is a high-level, holistic property of sounds, similar to faces in vision.
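
Aftereffects like this are commonly quantified as a shift in the category boundary along a morph continuum between two sounds, measured before and after adaptation. The sketch below fits logistic psychometric functions to hypothetical response proportions; it illustrates that generic analysis, not the specific procedure used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Psychometric function: probability of responding 'sound B' along the continuum."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

morph_steps = np.linspace(0, 1, 9)  # 0 = sound A, 1 = sound B
# Hypothetical proportions of "B" responses before and after adapting to sound B
p_baseline = np.array([0.02, 0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.96, 0.99])
p_adapted = np.array([0.01, 0.02, 0.05, 0.12, 0.30, 0.55, 0.80, 0.92, 0.98])

(b_base, _), _ = curve_fit(logistic, morph_steps, p_baseline, p0=[0.5, 10.0])
(b_adapt, _), _ = curve_fit(logistic, morph_steps, p_adapted, p0=[0.5, 10.0])
print(f"category boundary shift after adaptation: {b_adapt - b_base:+.2f} morph units")
```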

mother+and+baby.jpg

How do speakers adapt their voices to meet audience demands?

In another line of work, we want to know: can we distill the unique characteristics of someone’s voice into a statistical fingerprint, and does this fingerprint change depending on whom they’re speaking to? In one study, we recorded mothers' natural speech while they interacted with their infants and with adult experimenters, and we measured their vocal timbre using a time-averaged summary statistic that broadly represents the spectral envelope of speech. Using a support vector machine classifier, we found that mothers consistently shift their unique vocal "fingerprint" between adult-directed and infant-directed speech, and that this shift is highly consistent across 10 diverse languages from around the world. These findings show that timbre shifts are a pervasive, cross-linguistic feature of communicative adjustments, and they could inform speech recognition technology designed to compare infants' linguistic input across different cultural environments.
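
In spirit, the pipeline looks like the sketch below: one time-averaged spectral vector per recording, fed to a linear support vector machine that tries to separate infant-directed from adult-directed speech. The feature here (a log mel spectrum averaged over time) and the file names are illustrative stand-ins, not the exact statistic or data used in the study.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def timbre_fingerprint(path, n_mels=40):
    """Time-averaged spectral envelope of one recording (a single vector per file)."""
    wav, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=n_mels)
    return np.log(mel + 1e-10).mean(axis=1)  # average the log mel spectrum over time

# Hypothetical recordings; labels: 0 = adult-directed, 1 = infant-directed speech
ads_files = ["mother01_ads.wav", "mother02_ads.wav", "mother03_ads.wav"]
ids_files = ["mother01_ids.wav", "mother02_ids.wav", "mother03_ids.wav"]

X = np.array([timbre_fingerprint(f) for f in ads_files + ids_files])
labels = np.array([0] * len(ads_files) + [1] * len(ids_files))

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, labels, cv=3)
print(f"cross-validated ADS vs. IDS accuracy: {scores.mean():.2f}")
```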

larisa-birta-slbOcNlWNHA-unsplash.jpg

How do performers’ behaviors and brains coordinate to support musical interactions?

Music, an important and universal medium for communication, offers a rich window into multiple facets of human cognition, and our research links the perception and production of music in novel, naturalistic ways. For example, previous work has revealed a hierarchy of brain regions that represent acoustic input at multiple timescales, but less is known about how the brain organizes information during the production of sound. In one fMRI study, for which we received a GRAMMY Museum® Grant, we developed a custom, MR-safe keyboard for pianists to play in the scanner. Through this unique dataset, which offers unprecedented access to musicians’ brains over the course of learning a new piece, we are exploring questions about prediction and learning in the context of naturalistic music performance.
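
Linking keypresses to the slow fMRI signal requires turning performance events into a regressor: a spike train of keypress onsets convolved with a hemodynamic response function and sampled at the scanner's repetition time. The numpy/scipy sketch below, with made-up onset times and a canonical double-gamma HRF, illustrates that general step rather than our actual analysis pipeline.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(dt=0.1, duration=32.0):
    """Canonical double-gamma hemodynamic response function sampled every dt seconds."""
    t = np.arange(0, duration, dt)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak near 6 s, undershoot near 16 s
    return hrf / hrf.max()

# Hypothetical keypress onsets (seconds) from the MR-safe keyboard, plus scan parameters
onsets = np.array([2.1, 2.6, 3.0, 3.5, 10.2, 10.8, 11.3, 20.0, 20.4])
tr, n_volumes, dt = 2.0, 30, 0.1

# Fine-grained spike train of keypresses, convolved with the HRF, sampled once per volume
timeline = np.zeros(int(n_volumes * tr / dt))
timeline[np.round(onsets / dt).astype(int)] = 1.0
predicted_bold = np.convolve(timeline, double_gamma_hrf(dt))[:len(timeline)]
keypress_regressor = predicted_bold[::int(tr / dt)]  # one value per fMRI volume

print(keypress_regressor.round(2))
```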