Katarina C. Poole: Researcher in Auditory Perception

About me

Katarina Poole

I am a Research Associate at the Dyson School of Design Engineering, Imperial College London, investigating how the brain interprets complex sensory information. My research uses spatial hearing as a framework to study predictive brain mechanisms, combining psychophysics, electrophysiology, and perceptual modelling.

Currently, I lead research within the SONICOM and ANTHEA projects, collaborating with industry partners in hearing technology to translate fundamental neuroscience into real-world applications. My research trajectory runs from circuit-level studies of neural encoding in the auditory cortex and hippocampus (PhD, UCL) to immersive audio experiments in virtual reality at Imperial.


Research Themes

1. Predictive Coding in Frequency and Spatial Domains

My foundational work investigated how the brain extracts high-level features, such as statistical regularities, from acoustic input. I am now extending this research beyond simple frequency-based models into the 360° spatial domain, investigating how the brain uses prior spatial knowledge to navigate complex scenes and how the “predictive brain” operates in the ecologically rich, high-dimensional environments of everyday life.

2. Characterising Spatial Hearing in Hearing Impairment

As part of the ANTHEA Project, I use spatialised change-detection tasks to probe how listeners, both normal-hearing and hearing-impaired, parse background acoustic scenes.

While change detection is a well-studied phenomenon in simple auditory contexts, my research is among the first to examine these mechanisms across the full 360° spatial field. This work aims to:

  • Uncover Fundamental Mechanisms: Understand how the brain automatically monitors the “spatial background” and identifies new acoustic events in high-resolution, three-dimensional space.
  • Inform Device Design: Provide evidence-based metrics to enhance hearing aid processing and “non-target” sound preservation.
  • Develop New Diagnostics: Identify individuals with poor spatial situational awareness, even when standard audiograms appear normal.
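
To give a concrete sense of how performance on such change-detection tasks can be quantified, the sketch below computes d′ (sensitivity) from signal detection theory, split by source azimuth, with a standard log-linear correction for extreme hit and false-alarm rates. The trial counts and the per-azimuth split are purely illustrative assumptions, not the actual ANTHEA analysis pipeline.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') for a yes/no change-detection task.

    Applies a log-linear correction so that perfect hit or
    false-alarm rates do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts per change azimuth (degrees), e.g. to ask
# whether changes behind the listener are harder to detect.
for azimuth, counts in {0: (46, 4, 6, 44), 180: (32, 18, 9, 41)}.items():
    print(f"change at {azimuth:>3} deg: d' = {d_prime(*counts):.2f}")
```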

3. Validation of Immersive Audio Technology

Through the SONICOM Project, I lead the evaluation of synthetic Head-Related Transfer Functions (HRTFs). Rather than focusing on synthesis alone, I have developed a robust battery of tests to ensure these technologies provide a high-fidelity match to the human auditory system. My assessment framework includes:

  • Numerical Metrics: Objective benchmarks of spatial audio quality using the Spatial Audio Metrics (SAM) Toolbox (a minimal sketch follows this list).
  • Perceptual Testing: VR-based localisation tasks and “Spatial Release from Masking” tests using speech-in-noise stimuli.
  • Physiological Efficacy: Using EEG and pupillometry as objective measures of listening effort and neural engagement with spatialised sound.
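
As an example of the kind of numerical metric involved, the sketch below computes the great-circle angular error between target and response directions, the core quantity behind many localisation benchmarks. It is a self-contained illustration with hypothetical trial data and deliberately does not reproduce the SAM Toolbox's actual API.

```python
import numpy as np

def angular_error(az1, el1, az2, el2):
    """Great-circle angle (degrees) between two directions
    given as azimuth/elevation pairs in degrees."""
    az1, el1, az2, el2 = np.radians([az1, el1, az2, el2])
    # Spherical law of cosines; clip to guard against rounding error.
    cos_angle = (np.sin(el1) * np.sin(el2)
                 + np.cos(el1) * np.cos(el2) * np.cos(az1 - az2))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical trials: (target az/el, response az/el) in degrees.
trials = [((30, 0), (42, 5)), ((-60, 10), (-55, 0)), ((150, 0), (-150, 10))]
errors = [angular_error(*t, *r) for t, r in trials]
print(f"mean great-circle error: {np.mean(errors):.1f} deg")
```

Averaging this error across trials, or decomposing it into lateral and polar components, yields compact scalar benchmarks for comparing synthetic HRTFs against measured ones.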

Methodology & Expertise

My research takes a multi-scale approach to auditory neuroscience, from cellular circuits to human behaviour.

  • Human Neuroimaging & Physiology: High-density EEG, pupillometry (as a metric of cognitive load and effort), and microsaccades.
  • Immersive Technologies: Virtual reality (Unity), gaze tracking, and individualised spatial audio (HRTF) validation.
  • Systems Neuroscience (In Vivo): Chronic extracellular electrophysiology (single-unit and LFP) and optogenetic manipulation of cortical and hippocampal circuits.
  • Computational Modelling: Bayesian observer models, statistical regularity detection models (e.g., D-REX), and development of numerical audio metrics (a minimal observer-model sketch follows this list).
  • Experimental Design: Psychophysics (human and animal), speech-in-noise testing, and spatial release from masking.
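
To make the observer-modelling entry concrete, here is a minimal sketch of a Gaussian Bayesian observer for azimuth estimation: a noisy sensory cue is combined with a prior over source location, and the posterior mean is a reliability-weighted average of the two (the standard conjugate-Gaussian result). The parameter values are illustrative assumptions, not fits from my studies.

```python
import numpy as np

def bayesian_azimuth_estimate(observation, sigma_obs, prior_mean, sigma_prior):
    """Posterior mean and SD of azimuth for a Gaussian observer.

    Combines a noisy sensory observation with a Gaussian prior over
    source azimuth; the posterior mean weights each by its reliability.
    """
    w = sigma_prior**2 / (sigma_prior**2 + sigma_obs**2)  # sensory weight
    posterior_mean = w * observation + (1 - w) * prior_mean
    posterior_sd = np.sqrt((sigma_prior**2 * sigma_obs**2)
                           / (sigma_prior**2 + sigma_obs**2))
    return posterior_mean, posterior_sd

# Illustrative numbers: a cue sensed at 20 deg with noisy encoding
# (sd 10 deg), under a prior that sources cluster near 0 deg (sd 15 deg).
est, sd = bayesian_azimuth_estimate(20.0, 10.0, 0.0, 15.0)
print(f"percept drawn toward the prior: {est:.1f} +/- {sd:.1f} deg")
```

The same reliability weighting is what lets such models predict that percepts drift toward prior expectations precisely when sensory evidence is degraded, a pattern central to how prior spatial knowledge could shape perception in complex scenes.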

Open Research & Resources

I am a strong advocate for Open Science and have developed several tools used by the auditory research community: