Speech
I worked on the "cocktail party problem": the problem of extracting a sound of interest from a background of multiple sounds. We encounter this problem on a daily basis, for instance, while speaking to friends in a crowded bar or listening to an orchestra. Much work has been done since Colin Cherry's initial description of the phenomenon in the early 1950s, but we still do not fully understand how the brain filters out irrelevant sounds whilst focusing on meaningful ones.
During my PhD, I investigated this problem using synthetic stimuli that capture the complex nature of real-world sounds. The spectrogram of this "Stochastic Figure-Ground" stimulus is shown above. It is a broadband stimulus whose frequency components vary randomly over time, but a certain proportion of these components (marked by arrows) become coherent for a brief period (depicted by the rectangular region), such that the coherent components "pop out" of the background noise.
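To make the construction concrete, here is a minimal sketch of how such a stimulus can be generated, in Python with NumPy. The structure (a sequence of random tone "chords", with a subset of frequencies held fixed during the figure window) follows the description above; the specific parameter values (frequency pool, chord duration, ramp length) are illustrative assumptions, not the exact values used in the experiments.

```python
import numpy as np

def sfg_stimulus(duration=2.0, fs=44100, n_components=10, chord_dur=0.05,
                 figure_size=4, figure_start=1.0, figure_dur=0.5, rng=None):
    """Stochastic Figure-Ground stimulus: a sequence of random tone
    "chords"; during the figure window, figure_size frequencies repeat
    across successive chords and so become temporally coherent."""
    rng = np.random.default_rng() if rng is None else rng
    pool = np.geomspace(180.0, 7200.0, 129)       # illustrative log-spaced frequency pool
    t = np.arange(int(chord_dur * fs)) / fs
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)  # 5-ms on/off ramps
    fig = rng.choice(pool, figure_size, replace=False)      # the coherent "figure" tones
    fig_chords = range(int(figure_start / chord_dur),
                       int((figure_start + figure_dur) / chord_dur))
    chords = []
    for i in range(int(duration / chord_dur)):
        freqs = rng.choice(pool, n_components, replace=False)
        if i in fig_chords:
            freqs[:figure_size] = fig             # hold figure frequencies fixed
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(ramp * chord / n_components)
    return np.concatenate(chords)
```

Note that holding the figure components fixed across chords adds no extra energy to any single frequency channel, so the figure can only be detected by integrating information across both frequency and time.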
My results showed that:
- Human listeners can quickly and robustly detect such "figures" from the "ground".
- The temporal coherence between the frequency channels in the "figure" is an important cue for segregation (based on modelling; see the sketch after this list).
- The brain is highly sensitive to the emergence of such "figures", even in the absence of attention (using MEG).
- Areas outside the auditory cortex, such as the intraparietal sulcus (see activation below), mediate segregation (using fMRI).
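As a rough illustration of the temporal-coherence cue, the sketch below correlates the time courses of spectrogram channels: channels carrying the figure repeat together and so correlate strongly, while the random ground channels do not. This is a toy analysis under my own simplifying assumptions, not the temporal coherence model used in the eLife paper.

```python
import numpy as np
from scipy.signal import spectrogram

def channel_coherence(signal, fs=44100, win=0.05):
    """Pairwise correlation matrix of spectrogram channel envelopes."""
    f, t, S = spectrogram(signal, fs, nperseg=int(win * fs), noverlap=0)
    env = np.log1p(S)                        # compressed channel envelopes
    env -= env.mean(axis=1, keepdims=True)   # zero-mean each channel
    env /= np.linalg.norm(env, axis=1, keepdims=True) + 1e-12
    return f, env @ env.T                    # large off-diagonal entries = coherent channels
```

Applied to the output of the generator sketched earlier, the figure frequencies stand out as a block of mutually correlated channels.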
I have also worked with app developers and colleagues from the Wellcome Trust Centre for Neuroimaging, UCL, to present this experiment as a game. You can play it, "How well you hear sounds", along with several other neuroscience experiments, on the "Great Brain Experiment" app.
Publications
1. Teki S, Chait M, Kumar S, von Kriegstein K, Griffiths TD (2011)
Brain bases for auditory stimulus-driven, figure-ground segregation.
Journal of Neuroscience 31(1): 164-171.
2. Teki S, Chait M, Kumar S, Shamma S, Griffiths TD (2013)
Segregation of complex acoustic scenes based on temporal coherence.
eLife 2: e00699.
3. Teki S, Kumar S, Griffiths TD (2016)
Large-scale analysis of auditory segregation behavior crowdsourced via a smartphone app.
PLoS One 11(4): e0153916.
4. Teki S, Barascud N, Picard S, Payne C, Griffiths TD, Chait M (2016)
Neural correlates of auditory figure-ground segregation based on temporal coherence.
Cerebral Cortex 26(9): 3669-3680.