Professor John Culling BSc DPhil

School of Psychology

Email:
cullingj@cardiff.ac.uk
Telephone:
+44 (0)29 2087 4556
Location:
7.05, Tower Building, 70 Park Place, Cardiff, CF10 3AT
Media commentator
Available for postgraduate supervision

I am an expert in psychoacoustics, binaural hearing and speech perception in noise.

Research summary

Listeners are highly proficient at detecting and identifying sounds, especially speech sounds in background noise. This ability is remarkable, because the waveform of the attended voice may be quite swamped by those of competing voices at the two ears. It also has important practical ramifications, since hearing-impaired listeners often find a single voice intelligible when amplified, but find any interfering sound intolerable.

In very noisy environments, normal-hearing listeners also struggle, especially in reverberant rooms. By investigating the perceptual mechanisms that underlie these effects, I hope to uncover principles that could guide the design of hearing aids, cochlear implants and, indeed, rooms, so that they facilitate rather than impede communication in noise.

Undergraduate education

I did my undergraduate degree in Experimental Psychology at Sussex. Following graduation, I worked at the GEC Hirst Research Centre in Wembley (now defunct, along with GEC itself), at the MRC Applied Psychology Unit (now the MRC Cognition and Brain Sciences Unit) and at the Speech Laboratory of the Cambridge University Engineering Department.

Postgraduate education

I returned to Sussex as a research student under the supervision of Prof. Chris Darwin. My doctoral thesis “The perception of double vowels” mainly concerned the effect of differences in fundamental frequency on listeners’ ability to perceptually separate concurrent speech.

Postgraduate employment

Before my appointment at Cardiff, I worked at the MRC Institute of Hearing Research with Prof. Quentin Summerfield, at the Oxford Physiology Laboratory with Prof. David Moore, and at the Boston University Department of Biomedical Engineering with Prof. Steve Colburn.

Honours and awards

  • Fellow of the Hanse Wissenschaftskolleg
  • Fellow of the Acoustical Society of America.

Professional memberships

  • Acoustical Society of America
  • American Auditory Society
  • Experimental Psychology Society.

Academic positions

  • 1991-1995 Short-term, non-clinical scientist, MRC Institute of Hearing Research
  • 1995-1998 MRC Research Fellow, Dept. Physiology, Oxford University and Boston University
  • 1998-2003 Lecturer, School of Psychology, Cardiff University
  • 2003-2006 Senior Lecturer, School of Psychology, Cardiff University
  • 2006-2009 Reader, School of Psychology, Cardiff University
  • 2009-present Professor, School of Psychology, Cardiff University.

Speaking engagements

  • Newspaper articles (Western Mail)
  • New Scientist
  • BBC TV and Radio
  • CBC Radio.

I teach Hearing within the second-year module on Attention, Perception and Action (PS2021). The lectures focus on the main functions of the human auditory system (to detect, identify and localise sounds in the environment), but contextualise these functions by discussing the abilities of other animals and the effects of hearing impairment.

I also supervise level 2 practicals in perception, focussing on the parameters of the human voice, their sexual differentiation and the potential influence of these parameters on vocal attractiveness.

Research topics and related papers

My research focuses on the cocktail-party problem, which concerns how listeners are able to cope with high levels of interfering noise when listening to speech. Typically, such interfering noise may consist of many other competing voices, as at a cocktail party or a busy restaurant. Humans (and other animals) remain far more proficient at this task than any automatic system. They are known to use many different mechanisms, all of which fall within my research interests, but most of my research concerns binaural hearing.

Fundamental Frequency

Much of my early work concerned differences in fundamental frequency (voice pitch): competing voices are more easily understood if their fundamental frequencies differ. I found that this effect operates principally in the first formant region (Culling and Darwin, 1993), and that differences in F0 (DF0s) produce multiple perceptual cues, including amplitude modulations (Culling and Darwin, 1994). Differences in modulation of fundamental frequency were only useful when that modulation introduced a difference in F0 (Darwin and Culling, 1990).
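The amplitude-modulation cue can be illustrated with a small simulation; the sample rate, F0s and envelope measure below are my own toy choices, not those of the cited experiments:

```python
import numpy as np

# Toy demonstration (parameters are illustrative, not from the cited work):
# mixing two harmonic complexes with different F0s produces slow amplitude
# modulations (beats) between neighbouring harmonics; with identical F0s
# the mixture's envelope stays steady.

fs = 16000
t = np.arange(int(0.5 * fs)) / fs                 # 0.5 s of signal

def harmonic_complex(f0, n_harmonics=20):
    """Equal-amplitude harmonic complex on fundamental f0 (Hz)."""
    return sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, n_harmonics + 1))

same_f0 = harmonic_complex(100) + harmonic_complex(100)   # DF0 = 0
diff_f0 = harmonic_complex(100) + harmonic_complex(112)   # DF0 ~ 2 semitones

def modulation_depth(x):
    """Crude envelope-fluctuation index: coefficient of variation of the
    rectified signal after a 10-ms moving average."""
    env = np.convolve(np.abs(x), np.ones(160) / 160, mode='same')
    return float(env.std() / env.mean())

# The DF0 mixture's envelope fluctuates more than the matched-F0 mixture's.
assert modulation_depth(diff_f0) > modulation_depth(same_f0)
```

The beat rates here are the differences between nearby harmonics (12 Hz, 24 Hz, and so on), which survive the 10-ms smoothing, whereas the matched-F0 mixture is strictly periodic at 100 Hz and its smoothed envelope is nearly flat.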

Binaural Hearing

Through the possession of two ears, listeners are able to exploit differences in sound-source location. The effect involves interaural level differences (ILDs) and interaural time differences (ITDs), the cues used for left/right sound localisation (Culling et al., 2004). However, distinct perceived locations for target and interfering sound are neither necessary nor sufficient for listeners to perceptually separate competing sounds. Culling and Summerfield (1995) found that listeners were unable to perceptually segregate a whispered vowel, represented by two noise bands with a common ITD, from two concurrent (but spectrally distinct) noise bands with a different ITD. Edmonds and Culling (2005a) showed that speech need not have a consistent ITD in different frequency bands for listeners to perceptually segregate it from competing speech or noise, and that ITDs and ILDs need not even agree about source direction for listeners to get the full benefit of both cues (Edmonds and Culling, 2005b).

Rather than relying on sound-source localisation, the binaural system seems to use a separate process, known as binaural unmasking. It has long been known that a signal can be detected or identified at a lower signal-to-noise ratio (SNR) if its interaural timing or phase differs from that of a masker. A similar effect is observed for the intelligibility of speech signals. One framework used to explain binaural unmasking is interaural cancellation, which was originally developed by Nat Durlach but has lately been adapted for broadband signals such as speech (Culling and Summerfield, 1995; Culling et al., 2004; Lavandier and Culling, 2010). In this theory, the binaural system uses internally generated delays to compensate for the external delay of the masker and then subtracts the stimulus at one ear from that at the other. The predictions of this equalisation-cancellation (E-C) theory are almost indistinguishable from those of theories based on interaural correlation, especially for detection thresholds. However, Culling (2007) discovered that the two theories give increasingly divergent predictions at the supra-threshold signal levels necessary for the more demanding task of speech understanding. Instead of the typical task of requiring listeners to detect a signal in noise, Culling gave listeners a loudness-discrimination task. The results were entirely consistent with E-C theory.
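The delay-and-subtract step can be sketched numerically. The signals and parameters below are my own illustrative assumptions (white-noise target and masker, a diotic target, a noiseless internal delay line), and the internal noise that limits cancellation in the full E-C model is omitted:

```python
import numpy as np

# Idealised sketch of Durlach-style equalisation-cancellation (E-C).
# Assumptions (mine, for illustration): white-noise target and masker of
# equal power, a masker ITD of 8 samples (0.5 ms at 16 kHz), a zero-ITD
# target, and no internal noise.

rng = np.random.default_rng(0)
fs, itd = 16000, 8
target = rng.standard_normal(fs)                 # diotic target
masker = rng.standard_normal(fs)

left = target + masker                           # masker leads at the left ear
right = target + np.roll(masker, itd)            # ...and lags at the right

def power(x):
    return float(np.mean(x ** 2))

# Equalise: apply the masker's delay internally to the left-ear signal,
# lining the two masker images up; then cancel by subtraction.
ec = np.roll(left, itd) - right

# The masker component cancels; a target residual survives because the
# target's ITD (zero) does not match the compensated delay.
target_residual = np.roll(target, itd) - target
masker_residual = ec - target_residual           # numerically ~0

assert power(masker_residual) < 1e-20
assert power(target_residual) > 0.5 * power(target)
```

In the full model, internal delay and amplitude jitter leave some masker energy uncancelled, which is what keeps the predicted unmasking finite in practice.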

Binaural unmasking can also be used to explain a family of illusions known as dichotic pitches, which are generated purely by the interaural phase relationships (Culling et al., 1998a,b; Culling, 1999; Culling, 2000a,b).

Architectural Acoustics

Our ability to understand speech in background noise can be impaired by room reverberation. Culling et al. (1994) employed virtual simulations of rooms to measure the effect of reverberation upon spatial unmasking and upon the benefit of differences in fundamental frequency. Culling et al. presented synthesized ‘target’ vowel sounds against ‘masker’ vowels or pink noise, and listeners were required to identify the target vowels. In anechoic conditions, spatial separation of target and masker resulted in improved vowel-identification thresholds compared to when they were co-located, but in reverberation this unmasking effect was abolished. Lavandier and Culling (2007, 2008) showed that the effect of reverberation on binaural unmasking is mediated by reduced interaural coherence of the masker. Reverberation also affected the intrinsic intelligibility of the target, but this effect occurred only at higher levels of reverberation.
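The coherence measure can be illustrated with a toy calculation. As a crude stand-in for reverberation, assumed here purely for illustration, uncorrelated energy is added to the masker at one ear, which reduces the peak of the normalised interaural cross-correlation:

```python
import numpy as np

# Toy illustration (my own simplification, not the cited studies' method):
# approximate reverberation by adding uncorrelated energy to the masker at
# one ear, and measure interaural coherence as the maximum of the
# normalised cross-correlation over interaural lags.

rng = np.random.default_rng(1)
n = 4000
dry = rng.standard_normal(n)                      # anechoic masker
wet = 0.7 * dry + 0.7 * rng.standard_normal(n)    # decorrelated, equal power

def coherence(x, y):
    """Peak of the normalised interaural cross-correlation."""
    x = x - x.mean()
    y = y - y.mean()
    xc = np.correlate(x, y, mode='full')
    return float(np.max(np.abs(xc)) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

anechoic = coherence(dry, dry)      # identical at both ears: ~1.0
reverberant = coherence(dry, wet)   # coherence reduced: ~0.7

assert anechoic > 0.99
assert reverberant < 0.9
```

Lower masker coherence leaves less interaural structure for the binaural system to cancel, which is why binaural unmasking shrinks as reverberation increases.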

Hearing impairment

Recent work has applied this understanding of speech intelligibility in noise to the problems faced by hearing-impaired people. This work has concentrated on optimising the use of binaural hearing by listeners with hearing aids, cochlear implants and bone-anchored hearing aids.

Funding

  • 2018 Leverhulme Trust (£239K) “Active audiovisual perception: Listening and looking while moving” Co-investigator with Prof. Tom Freeman.

  • 2017 EPSRC project grant (£366K) “Physiologically inspired simulation of sensorineural hearing loss” Principal Investigator.

  • 2015 MRC/EPSRC network grant (£146K) “Novel applications of microphone technologies to hearing aids.” Principal Investigator.

  • 2014 Oticon Foundation (£139K) “Potential benefits of channel interlacing in bilateral cochlear implants.” Principal Investigator.

Research collaborators

  • Dr Mathieu Lavandier (ENTPE, Université de Lyon).
  • Mr Steven Backhouse (Princess of Wales Hosp. Bridgend)
  • Dr Robert Mcleod (Princess of Wales Hosp. Bridgend)
  • Mr Barry Bardsley (Swansea University)

Postgraduate research interests

My current research interests focus around the cocktail-party problem and binaural hearing. Specific interests include binaural unmasking, dichotic pitches, speech perception in noise, perceptual segregation by differences in F0, dip listening, temporal and spectral resolution of the binaural system, the effects of room reverberation, simulations of hearing impairment and cochlear-implant use.

If you are interested in applying for a PhD, or for further information regarding my postgraduate research, please contact me directly, or submit a formal application.

Current students

Barry Bardsley is researching the problems with binaural hearing experienced by hearing impaired listeners.

Lewis Ablett is researching methods for improving the stereo separation provided by a pair of bone-anchored hearing aids.

Past projects

Barrie Edmonds graduated with a Ph.D. in 2003. He worked on the role of across-frequency processes in the binaural segregation of speech in noise and against competing speech. He has since had post-doctoral positions with me and with Dr. Katrin Krumbholz at the MRC Institute of Hearing Research. He subsequently worked as Lead Scientist at the National Biomedical Research Unit in Hearing.

Andrew Kolarik graduated with a Ph.D. in 2006. He worked on the temporal and spectral resolution of binaural unmasking. He has since had post-doctoral positions with myself and Dr Tom Freeman at Cardiff, then at Anglia Ruskin University, Cambridge University and now at the School of Advanced Study at the University of London.

Christine Binns graduated with a Ph.D. in 2007. She worked on the role of intonation contours in speech perception in noise. She is now head of mathematics at a secondary school.

Mickael Deroche graduated with a Ph.D. in 2010. He worked on the effect of differences in F0 between competing voices on their intelligibility and the influence of room reverberation on this effect. He has since held post-doctoral positions with Dr. Monita Chatterjee at the University of Maryland and, latterly, at the University of Montreal.

Sam Jelfs graduated with a Ph.D. in 2011. Collaborating with the Welsh School of Architecture, he worked on the modelling of speech perception in rooms. He has since been working at Philips Research.

Jacques Grange graduated with a Ph.D. in 2015. He worked on the effect of head orientation on speech intelligibility in background noise. He has since worked as a post-doc with me, principally on the acoustic simulation of cochlear implants and of hearing impairment itself.

Rob Mcleod graduated with a Ph.D. in 2017. He worked on the improvement of stereo separation in bilateral bone-anchored hearing aids. He has since returned to clinical practice as an E.N.T. surgeon, but continues his research part-time.