Top-Down Effects on Acoustic Encoding
Understanding spoken language requires the brain to carry out computations at multiple levels simultaneously: minimally, your brain must analyze sound waves, identify phonemes (the “letters” of spoken language), build those phonemes into words, and attach those words to their real-world meanings. Moreover, all of this happens within a broader sentence-level and conversation-level context that your brain must take into account. It is well established that information flows from the bottom up (e.g., sounds > phonemes > words > meanings > broader context); it is less well characterized whether and how information flows from the top down (e.g., from context > sounds). Can your expectations about the sounds and words you might hear influence how your brain actually encodes incoming acoustic information? Projects in this research line ask: What kinds of higher-level information feed back to affect lower-level processing, and how long does the brain maintain these high-level predictions? The goal of this work is to better characterize how the brain balances top-down and bottom-up information.
Sound Symbolism
The relationship between the sounds of spoken language and their meanings has largely been assumed to be arbitrary. However, recent work suggests this may not be entirely true. Sound symbolism is the idea that certain sounds in words broadly correlate with semantic features of the real-world objects those words name. For example, the sound cluster gl- appears in words like gleam, glimmer, glitter, glisten, and glow, suggesting that this cluster may correlate with the semantic feature of “lightness” or “sheen.” Our current project examines how sound symbolism shows up in everyday life, with the goal of developing a new paradigm for examining listeners’ sensitivity to sound symbolism and whether sound-symbolic expectations can influence acoustic encoding in the brain.
Language Learning and Bilingualism
These issues in language processing become even more complex when we consider multilingualism. Indeed, to understand language as it exists in the real world, we must examine multilingualism: current estimates suggest that over half of the world’s population speaks more than one language. Most work on language learning and bilingualism has focused on the acquisition of grammar and vocabulary. As a result, it remains understudied how the brain learns to dynamically use those new grammar rules and new vocabulary words, a skill critical for communicating in a new language. Projects in this line of work seek to characterize how spoken-language skills develop as a person learns a new language, and how a language learner’s cognitive processing compares to that of already-proficient multilinguals.