Assistant Professor of Psychology in Psychiatry
It seems clear that there is a strong relationship between the age at which you start learning a language and how native-like your perception of its sounds becomes, but the precise nature of this relationship is not well understood.
I am conducting a wide range of neuroimaging and behavioral experiments (together with Bruce McCandliss) to examine how speech perception in a second language depends on when you learn it, how similar its sounds are to your native language, and how all of this is reflected in the activity of particular brain regions that are specialized for speech.
A second stream of my research uses computational models of word recognition to study how people translate spelling into sound. A central issue in this research is whether people do this using categorical processes that operate on discrete, local representations or more probabilistic processes that operate over graded, distributed representations. In collaboration with Shu Hua's group at Beijing Normal University, we are extending parallel distributed processing models of reading to Chinese, whose spelling-to-sound and spelling-to-meaning mappings have very different statistics from those of English.
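To give a flavor of what "graded, distributed representations" means in practice, here is a minimal sketch of a spelling-to-sound network of the parallel distributed processing variety. It is not the model we actually use; the orthographic and phonological feature vectors are invented toy data, and the architecture and training regime are deliberately simplified.

```python
# Minimal sketch of a distributed spelling-to-sound mapping: a single
# hidden-layer network trained with backpropagation on toy, made-up
# orthographic (input) and phonological (target) feature vectors.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical corpus: 50 "words", each a binary orthographic feature
# vector paired with a binary phonological feature vector.
orthography = rng.integers(0, 2, size=(50, 20)).astype(float)  # 20 letter features
phonology = rng.integers(0, 2, size=(50, 12)).astype(float)    # 12 sound features

n_hidden = 30
W1 = rng.normal(0, 0.1, size=(20, n_hidden))
W2 = rng.normal(0, 0.1, size=(n_hidden, 12))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for epoch in range(2000):
    # Forward pass: graded, distributed activity over the hidden layer.
    hidden = sigmoid(orthography @ W1)
    output = sigmoid(hidden @ W2)

    # Backpropagate the error in the predicted phonological features.
    err_out = (output - phonology) * output * (1 - output)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ err_out / len(orthography)
    W1 -= lr * orthography.T @ err_hid / len(orthography)

# A word's pronunciation is read off as graded activation over
# phonological features, not looked up by a discrete symbolic rule.
pred = sigmoid(sigmoid(orthography @ W1) @ W2)
print("mean absolute error:", np.abs(pred - phonology).mean())
```

The point of the sketch is that knowledge of the spelling-to-sound mapping lives in the weights and is shared across all words, so regularities and exceptions are handled by the same graded mechanism rather than by separate rule and lookup routes.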
It turns out that similar questions can be addressed, with similar techniques, in studying patterns of neural activity. Something I have been excited about lately is using neural-network models to carry out multivariate analyses of brain responses to different speech sounds.
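As a rough illustration of this kind of multivariate analysis, the sketch below trains a small neural-network classifier to tell two speech sounds apart from simulated multi-voxel response patterns. Everything here is hypothetical: the data are randomly generated, and the classifier (scikit-learn's MLPClassifier) simply stands in for whatever decoding model an actual analysis would use.

```python
# Sketch of neural-network-based multivariate decoding: can a classifier
# distinguish two speech sounds from (simulated) multi-voxel patterns?
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_sound, n_voxels = 60, 100

# Hypothetical response patterns: each sound evokes a slightly different
# mean pattern across voxels, plus independent trial-by-trial noise.
pattern_ba = rng.normal(0.0, 1.0, n_voxels)
pattern_da = pattern_ba + rng.normal(0.0, 0.3, n_voxels)  # subtly different

X = np.vstack([
    pattern_ba + rng.normal(0, 1, (n_trials_per_sound, n_voxels)),
    pattern_da + rng.normal(0, 1, (n_trials_per_sound, n_voxels)),
])
y = np.array([0] * n_trials_per_sound + [1] * n_trials_per_sound)  # 0 = /ba/, 1 = /da/

# Cross-validated accuracy above chance (0.5) indicates that the
# distributed pattern of activity carries information about the sound.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated decoding accuracy:", scores.mean())
```

The appeal of this approach is that it asks what information is present in the distributed pattern of activity across a region, rather than whether the region's overall response is larger for one sound than another.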