
Kathleen F. Faulkner* and David B. Pisoni
*Correspondence: Kathleen F. Faulkner katieff@indiana.edu
Author Affiliations
Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th Street, Bloomington, IN, USA.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
At the present time, cochlear implantation is the only available medical intervention for patients with profound hearing loss and is considered the "standard of care" for both prelingually deaf infants and post-lingually deaf adults. It has been suggested recently that cochlear implants are one of the greatest accomplishments of auditory neuroscience. Despite the enormous success of cochlear implantation for the treatment of profound deafness, especially in young prelingually deaf children, several pressing unresolved clinical issues have emerged that are at the forefront of current research efforts in the field. In this commentary we briefly review how a cochlear implant works and then discuss five of the most critical clinical and basic research issues: (1) individual differences in outcome and benefit, (2) speech perception in noise, (3) music perception, (4) neuroplasticity and perceptual learning, and (5) binaural hearing.
Keywords: Cochlear implants, auditory prostheses, deafness, signal processing
Cochlear implantation is currently the only FDA-approved medical treatment available to partially restore hearing in patients with severe-to-profound sensorineural hearing loss. Cochlear implants (CIs) are often cited as one of the greatest accomplishments of auditory neuroscience, an example of truly translational science linking basic research on hearing to clinical application [1,2]. CIs were first approved by the FDA for adults in 1985 and for children in 1990. As of 2011, the NIH reported that 219,000 patients had received CIs worldwide, with 42,600 adults and 28,400 children implanted in the United States [3]. This number is increasing rapidly; by 2013, more than 320,000 CIs were estimated to have been implanted worldwide, with almost 40,000 recipients implanted bilaterally [4].
CI candidates are patients who receive little benefit from hearing aids. Rather than simply amplifying an acoustic signal as conventional hearing aids do, CIs convert sound patterns into electrical signals that are delivered directly to the spiral ganglion cells and the auditory nerve (see Figure 1 for an illustration of a cochlear implant). The signal processing carried out by a CI is complex: the range of incoming amplitude levels is first compressed, resulting in a reduced dynamic range, and the bandwidths of component frequencies are greatly reduced. Depending on the placement of the electrode array within the cochlea, a tonotopic mismatch in frequency-to-place alignment may occur [5].
Figure 1: Illustration of a cochlear implant.
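To make the compression stage described above concrete, the sketch below maps a wide acoustic input range onto the much narrower electric range between a CI user's threshold (T) and comfort (C) levels. It is a minimal illustration of this kind of amplitude mapping; the input range, T and C levels, and compression constant are placeholder values, not clinical parameters from any actual device.

```python
import numpy as np

def compress_to_electric(level_db_spl, in_lo=25.0, in_hi=105.0,
                         t_level=100.0, c_level=200.0):
    """Map acoustic level (dB SPL) onto an electric range between
    hypothetical threshold (T) and comfort (C) levels."""
    # Normalize the acoustic input range to [0, 1]
    x = np.clip((level_db_spl - in_lo) / (in_hi - in_lo), 0.0, 1.0)
    # Logarithmic-style compressive growth, as in many CI loudness maps
    compressed = np.log1p(20.0 * x) / np.log1p(20.0)
    # An ~80 dB acoustic range collapses onto the narrow T-to-C range
    return t_level + compressed * (c_level - t_level)

for spl in (30, 60, 90):
    print(f"{spl} dB SPL -> {compress_to_electric(spl):.0f} current units")
```

Note how an 80 dB span of acoustic input is squeezed into a single narrow electric range, which is one reason intensity resolution through a CI is so limited.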
In addition, numerous device- and patient-related factors often limit how the frequency and intensity information in complex sounds like speech and music is encoded and used by patients. The age and criteria for CI candidacy have also changed dramatically over time. CI candidacy is currently determined by the degree of hearing loss (severe-to-profound bilateral sensorineural hearing loss, ≥ 70 dB) and limited benefit from binaural hearing aids following a 3- to 6-month trial with appropriately fitted current technology.
Benefits for children and adults with CIs are routinely assessed with tests of spoken word recognition and sentence perception in quiet [6]. For prelingually deaf children, outcome measures also include assessments of receptive and expressive language development and speech intelligibility [e.g., 6,7]. Performance with CIs has consistently improved over time due to developments in signal-processing technology and changes in candidacy criteria (see Figure 2) [8]. However, large individual differences in performance are universally reported, ranging from "excellent" high-functioning CI users, often called "Stars," who do exceptionally well on open-set speech perception tasks, to low-functioning CI users who often receive little or no benefit beyond basic awareness of sound and assistance with lipreading [9,10]. Individual differences in outcome and benefit result from a combination of sources, including the degraded signal provided at the peripheral level, how robustly the information is encoded through the central auditory pathways, and how well an individual user can "match" the new transformed acoustic signal to neural and cognitive representations in long-term memory. All behavioral measures used to assess outcome and benefit are the final product of a long series of complex information processing operations [11].
Figure 2: Cochlear implant outcomes over time.
Counseling hearing-impaired listeners is a very important component of the candidacy evaluation process to ensure that patients have realistic expectations about the range of possible outcomes following implantation. Audiologists inform all patients that most CI recipients will generally experience better sound awareness, assistance with lipreading, and the ability to hear environmental sounds. Patients are also counseled about the range and level of performance to expect based on their individual demographics, hearing history, and other risk factors. For most CI users, there is an initial period of perceptual learning and adaptation to the new transformed sounds, and it often takes several months before any speech and language benefits from the CI are experienced [e.g., 12]. The media have played a significant role in inflating patient expectations by providing exaggerated and often incorrect descriptions of CIs. For example, in Natural-Born Cyborgs, a book about the role of technology in current human experience, the philosopher and cognitive scientist Andy Clark writes that "fitted with a cochlear implant that cures my deafness and, as a kind of added extra, allows me to hear sounds in ranges that most adult humans cannot detect, my core sense of my own auditory potential again changes" [13]. There is no scientific evidence that CIs "cure" deafness or improve human hearing beyond the normal range. If anything, CIs deliver a highly degraded, underspecified, and atypical signal to the brain in terms of the range of frequencies and intensities when compared with normal hearing [8]. With information like this in the media, it is not surprising that many profoundly deaf patients seek out CIs with unrealistic expectations about potential outcomes and benefits. CIs do not "restore" normal hearing and, like most medical treatments, they do not provide benefit for all patients who receive this medical intervention.
Despite the enormous success of cochlear implantation for the treatment of profound deafness, especially in young prelingually deaf children, several pressing unresolved clinical issues have emerged that are at the forefront of current research efforts in the field [14]. Below we summarize five of the most critical problems: (1) individual differences in outcome and benefit, (2) speech perception in noise, (3) music perception, (4) neuroplasticity and perceptual learning, and (5) binaural hearing.
Individual differences in outcome and benefit
Many deaf adults and children do very well with their CIs, often approaching the performance of age-matched normal-hearing peers under quiet testing conditions, while other patients with CIs obtain little benefit, scoring less than 50% correct on word recognition tests [e.g., 7,9,12,15]. Benefit with a CI is routinely assessed with speech and language measures and quality-of-life assessments. All patients with CIs experience a great deal of difficulty understanding speech in background noise or under high cognitive loads. There is universal agreement among speech and hearing scientists that, compared with other medical interventions, the variability in speech and language outcomes is the most important and most challenging unresolved problem in the field today [e.g., 15-18].
The restoration of hearing function in most post-lingually deaf adults and children, as well as in prelingually deaf infants and young children, is remarkable and exceeds the performance of neural prostheses developed for other sensory modalities [1,8,19]. There is no question that CIs work, and work well, in many patients who are candidates for this kind of medical intervention. However, CIs do not work equally well in all patients, and this remains a significant and pressing clinical problem.
For example, early research with CIs focused on the "Stars"—the extraordinarily good CI users—as a "proof of principle" to demonstrate the "efficacy" of CIs. More recently, research efforts have shifted to investigating the "effectiveness" of CIs, focusing on the "poor" or low-functioning CI users who fail to achieve the expected benefits from their devices [e.g., 15,20-22]. The enormous variability in speech and language outcomes is not surprising given the heterogeneity of hearing loss. Each patient is unique and presents with a different developmental history and genetic profile that may contribute to their prognosis. It is unlikely that any one factor can successfully predict speech and language outcomes in all CI patients, because the observed variance reflects complex multi-parametric interactions distributed across many domains. Strong predictors (or "risk factors") are historically tied to: (1) the patient (e.g., demographics: age, age at implantation, degree of deafness, duration of deafness, hearing aid use, residual hearing), (2) the environment (e.g., access to early intervention, SES, communication mode), and (3) the device (e.g., generation of implant, surgical technique, number of active channels, dynamic range). While these strong predictors provide an initial foundation for predicting potential outcomes for the majority of CI users, a substantial amount of variance remains unexplained [e.g., 5,23-25]. This unexplained variability lies in domains that have not been previously explored by conventional clinical assessments. All medical interventions show variability and have well-known risk factors associated with them. In the case of CIs, predicting outcome and benefit remains a significant challenge in the field of neuro-otology.
Measuring outcomes in adults and children
Outcomes are routinely measured with a battery of behavioral tests. For adults, basic hearing thresholds for tones and word and sentence recognition scores are the primary assessment measures. Many CI patients achieve very high levels of speech perception in quiet, often reaching ceiling levels of performance with conventional testing materials [16]. These improvements are the result of a combination of factors, including improved cochlear implant technology, better signal-processing algorithms, and changes in candidacy criteria (i.e., implanting patients with less severe hearing losses) (see Figure 2). Early assessments of CI performance included closed-set testing materials, in which the listener is provided with a short list of possible response alternatives, as well as open-set word and sentence recognition tests. While many word and sentence test batteries are available for use with CI patients all over the world [6], a standard test protocol called the Minimum Speech Test Battery (MSTB) was adopted in 1996 by a committee of auditory scientists, clinicians, and cochlear implant manufacturers to provide a comprehensive set of standardized speech measures for comparing results across implant centers and clinics [26,27]. This battery has recently been updated to reflect changes in the performance of current CI users, including more challenging sentence recognition materials presented in quiet and in multi-talker babble [28,29]. These assessment instruments provide a greater range of variability, allowing measurement of changes in performance over time, especially for higher-performing patients [16]. Measures of self-assessed quality of life provide another way to assess benefit following cochlear implantation [30-35].
For children, the test materials used in evaluating outcome and benefit are based on the age and ability of the child and assess the development of expressive and receptive language and perceptual abilities through behavioral observation, testing, and caregiver reports [36]. The development of speech perception is typically evaluated in a hierarchical fashion [37], from basic detection of sounds to spoken word recognition, with children "graduating" to more difficult testing materials as they move along the developmental trajectory [38-40]. The skill level of the child determines the test materials chosen, from the candidacy assessment through tracking performance over time following implantation and monitoring the success of speech and language intervention strategies. For very young children, this involves monitoring several stages of sound and speech perception (e.g., the Infant-Toddler Meaningful Auditory Integration Scale, IT-MAIS [41]).
Pre- versus post-lingually deafened adults
Among adult patients seeking a CI, candidates whose deafness occurred after the acquisition of speech and language typically outperform patients with congenital or early-acquired deafness. The benefits from CIs in prelingually deaf adults who have been deaf for many years are quite poor due to the long period of auditory deprivation, delayed spoken language development, and substantial neural reorganization of the underlying cortical circuits [42]. Most neuro-otologists will not implant adult patients with long-term prelingual profound deafness, because restoration of hearing and speech and language outcomes have been shown to be poor and CIs provide little benefit even with long-term use [43-46]. Moreover, even in post-lingually deafened patients who are deemed acceptable candidates for CIs, the individual differences in speech and language outcomes are enormous and remain largely unexplained by conventional demographic, medical, and device factors.
Prelingually deaf children
Prelingually deaf infants and children are a fundamentally different clinical population because their hearing loss occurs during the process of speech and language development. A sensory deficit occurring during this critical period of neural and cognitive development has profound effects on later speech, language, and cognition [14,47,48]. It is generally believed that children implanted under two years of age will have the best speech and language outcomes [e.g., 7,23,49-54]. Many speech and hearing scientists believe that early implantation takes maximal advantage of the critical period and the greater neural plasticity of younger children [2]. Some researchers have even suggested that early-implanted children will "catch up" to their normal-hearing peers [55]. Unfortunately, no single factor has been found that is necessary and sufficient to reliably predict speech and language outcomes in all patients.
Speech in noise
Understanding speech in noise is the most frequent and challenging problem facing patients with CIs, and it is a central topic of current research. It takes a whole brain to understand speech, and this is especially true when listeners attempt to understand speech against a background of other competing voices [56]. Outcomes and benefits following cochlear implantation are typically measured in the clinic or laboratory using word and sentence materials presented in quiet. The results of these tests provide important baseline information about how a patient performs under optimal conditions, but they do not reflect common everyday listening environments. To perform well in real-world noisy conditions, listeners must quickly adapt, switch their attention, and adjust to multiple sources of variability in both the signal and the listening environment. Sentence recognition tests in noise are useful for assessing speech understanding because they require a combination of basic sensory and perceptual abilities as well as elementary neurocognitive resources and processing operations [e.g., 11]. All CI patients routinely experience difficulty listening to speech in background noise or over the telephone, and they have an even more difficult time with fluctuating noise, such as multi-talker babble in meetings and restaurants where many people are talking at the same time [18,57,58].
At the peripheral auditory level, hearing scientists believe that the difficulties CI patients experience arise primarily from poor spectral resolution due to channel interaction across stimulated electrodes [e.g., 59,60]. Much of the current research effort to improve speech perception in noise has targeted the "front end," that is, the early sensory processing and neural encoding of speech sounds, using novel coding strategies and noise reduction algorithms [61-64]. Another avenue has been to identify the CI channels with the best electrode-neural interface and either deactivate the problematic channels in the patient's speech processor map or reduce channel interactions among electrodes by using signal-processing strategies that employ focused current [17,65-67].
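As a rough illustration of why channel interaction limits spectral resolution, the sketch below uses a simple exponential current-spread model, a common textbook simplification rather than a validated model of any device. The electrode spacing and decay constant are assumed values, chosen only to show how heavily the excitation fields of neighboring electrodes can overlap.

```python
import numpy as np

# Hypothetical one-dimensional model: current from each electrode decays
# exponentially with distance along the cochlea. All values are placeholders.
cochlea_mm = np.linspace(0, 20, 2001)       # positions along the array (mm)
electrodes_mm = np.arange(1, 20, 1.0)       # assumed 1-mm electrode spacing
lam = 3.0                                   # assumed decay constant (mm)

def activation(pos_mm):
    """Normalized spatial spread of excitation from one electrode."""
    return np.exp(-np.abs(cochlea_mm - pos_mm) / lam)

# Overlap between two adjacent channels: the fraction of one channel's
# excitation field that is shared with its neighbor. High overlap means
# the two channels stimulate largely the same neural population.
a = activation(electrodes_mm[8])
b = activation(electrodes_mm[9])
overlap = np.minimum(a, b).sum() / a.sum()
print(f"Adjacent-channel overlap: {overlap:.0%}")  # ~85% with these values
```

With these illustrative numbers, adjacent channels share most of their excitation field, which is why deactivating poorly isolated channels or focusing current can help.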
Some researchers have now begun to assess the role of attention and cognition in individuals using a CI. For example, performance on a speech-in-noise task can be indexed by the signal-to-noise ratio (SNR) a patient needs to achieve 50% correct word identification in noise. However, this score may not accurately reflect the amount of cognitive effort the patient must expend to reach that criterion. The concept of cognitive load (the idea that some tasks require more mental effort and processing resources than others) is not new [68,69] and reflects a core principle that the cognitive system has finite processing resources. When one task becomes more difficult, in this case trying to understand speech that is both degraded and mixed with competing noise, additional effort is necessary to maintain performance. Neurocognitive measures have recently been employed to assess the information-processing workload demands of listening to speech in background noise for CI listeners [70,71] and for normal-hearing participants under CI simulation [72]. Some of these measures rely on self-report [73], speed of processing [74,75], physiological responses such as pupillometry [77], or "dual-task" methodologies to assess listening effort and mental workload [78,79].
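As an example of how such an SNR criterion can be estimated, the sketch below implements a simple one-up/one-down adaptive staircase, which converges on the 50%-correct point (the speech reception threshold, SRT). The play_trial function is a hypothetical stand-in for presenting a word in noise and scoring the patient's response; here it is simulated with a toy psychometric function, and all parameter values are illustrative.

```python
import random

def play_trial(snr_db, true_srt_db=2.0, slope=0.5):
    """Simulate a correct/incorrect response at the given SNR
    using a toy logistic psychometric function (illustrative only)."""
    p_correct = 1.0 / (1.0 + 10 ** (-slope * (snr_db - true_srt_db)))
    return random.random() < p_correct

def measure_srt(start_snr_db=10.0, step_db=2.0, n_trials=40):
    """One-up/one-down staircase: harder after a correct response,
    easier after an error, so it converges on 50% correct."""
    snr, reversals, last_direction = start_snr_db, [], None
    for _ in range(n_trials):
        direction = -1 if play_trial(snr) else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(snr)            # record reversal points
        last_direction = direction
        snr += direction * step_db
    if not reversals:
        return snr
    tail = reversals[-6:]                    # average the later reversals
    return sum(tail) / len(tail)

print(f"Estimated SRT: {measure_srt():.1f} dB SNR")
```

Clinically, two patients can produce the same SRT while expending very different amounts of effort, which is exactly what the workload measures above try to capture.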
Music perception
Many adults with CIs are very disappointed that they can no longer hear and appreciate music after receiving their CI. Impairments in music perception are often reported as significant negative factors in self-reports of quality of life [31]. Many post-lingual CI patients show poor music perception on behavioral tests and uniformly report decreased enjoyment of music following implantation [80-83]. Recently, Limb and Roy [84] identified several technological, sensory, and acoustical constraints that limit music perception through a CI. These include poor representation of the spectro-temporal fine structure of music, as well as the long-term neurobiological effects of auditory deprivation. Recognizing speech in noise and listening to music both rely heavily on robust encoding of spectral information in complex time-varying signals. Studies of normal-hearing listeners tested under CI acoustic simulations have shown that only 4-8 channels of information are required for speech recognition in quiet [85,86]; additional channels are required for speech recognition in background noise, and often more than 48 channels are needed for music recognition [85-87].
Current research exploring novel signal-processing schemes to improve the encoding of the temporal fine structure of complex time-varying acoustic signals may provide additional information and lead to significant improvements in music perception as well as speech perception in noise [e.g., 88]. Further, targeted, individualized auditory training using complex spectral and temporal patterns may lead to additional improvements in music perception [89-93]. Despite the availability of music perception tasks suitable for clinical use, music perception is not routinely assessed by audiologists, who are primarily interested in speech recognition outcomes [89,90,94,95].
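The acoustic simulations cited above are typically implemented as channel vocoders: the signal is split into a small number of frequency bands, the temporal envelope of each band is extracted, and each envelope modulates band-limited noise. The following is a minimal noise-vocoder sketch of that general technique; the band edges, filter order, and test signal are arbitrary illustrative choices, not parameters from the cited studies.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Noise-vocode a signal with n log-spaced analysis/carrier bands."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    out = np.zeros(len(signal))
    noise = np.random.randn(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelope = np.abs(hilbert(band))               # temporal envelope
        carrier = sosfilt(sos, noise)                  # band-limited noise
        out += envelope * carrier                      # envelope-modulated band
    return out / (np.max(np.abs(out)) + 1e-12)         # normalize

# Usage with a toy amplitude-modulated tone standing in for speech:
fs = 16000
t = np.arange(fs) / fs
speechlike = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 4 * t))
degraded = vocode(speechlike, fs, n_channels=4)
```

Reducing n_channels in such a simulation coarsens the spectral detail available to the listener, which is what makes these vocoders useful for asking how many channels speech or music recognition requires.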
Neuroplasticity and perceptual learning
There are profound peripheral and central effects in the auditory pathway as a result of both sensory deprivation and stimulation with a CI [see 1,19,48,96]. The fact that many prelingually deaf children and post-lingually deaf adults obtain significant benefit from a CI serves as an existence proof of neural plasticity following implantation. However, the structural integrity of the central auditory system may significantly limit the capacity for change in some individuals, whether because a long duration of deafness has produced profound neural degeneration (local or central) or because of significant cortical reorganization and cross-modal interference [1,2,48]. Children implanted before two years of age may have an advantage in neural plasticity, as demonstrated through cortical evoked potentials, while older children may show diminished plasticity [97,98].
All medical prostheses require extensive rehabilitation. In contrast, an adult patient who receives a CI is routinely sent home to experience his or her new auditory world without any structured rehabilitation program in place. The lack of evidence-based auditory training methodologies, combined with the absence of insurance reimbursement for clinical audiologists to manage intervention protocols, may be to blame. Patients may continue to improve their performance for a year or more following implantation [12]; however, it may be possible to speed adaptation to the implant with individualized, focused training and other perceptual learning techniques [e.g., 99-101]. Several recent studies have explored auditory training techniques with cochlear implant patients; unfortunately, success has been mixed [102,103]. One problem with current auditory training studies has been their focus on developing training protocols tied to the patient's clinical difficulties rather than on understanding the causal factors underlying the deficits that lead to the observed performance decrements. Auditory training studies have consistently reported difficulty demonstrating robust transfer of training: if a CI patient improves on a training protocol in the clinic, the lab, or even at home, there is little evidence that this training will also improve the patient's ability to perceive speech or communicate in everyday real-world conditions. Current research efforts are now focused on identifying the core components of auditory training and perceptual learning that will produce robust improvements and transfer effects [104].
Binaural hearing
Hearing with two ears provides several significant benefits, such as locating a sound source (localization), improved speech-in-noise performance, and dereverberation of competing acoustic signals in the real world [105]. These binaural advantages arise from two primary cues: sounds arrive sooner (interaural time difference, ITD) and at a greater intensity (interaural level difference, ILD) at the ear closer to the sound source. Unilateral CI users have access to neither of these robust binaural cues. Restoring binaural hearing with two CIs could lead to significant improvements in speech perception in noise, sound source localization, and the suppression of reverberation in the environment [106-109]. However, current bilateral CI users may not be able to make use of these interaural timing and intensity cues, because current clinical strategies and mapping techniques do not coordinate the inputs from the two separate CIs the way the binaural auditory system does in normal-hearing listeners. Further, there is often significant interaural mismatch in electrode alignment (insertion depth), and the corresponding channels are often not balanced for loudness [110,111]. Additionally, deprivation-related disturbances and neural reorganization in binaural circuitry and auditory cortex may limit the capacity of the damaged auditory system to recover and utilize binaural time and intensity cues [112]. Despite these challenges, bilateral cochlear implantation is becoming more common, with many prelingually deaf infants and young children now undergoing simultaneous bilateral implantation [113,114]. Many bilateral CI patients, however, receive their second CI sequentially, often after a long interval between surgeries. This is not an optimal strategy, because there is substantial evidence documenting a narrow sensitive period for the development of binaural hearing [115-121], although the binaural system shows some degree of plasticity into adulthood and sensitivity to these cues may improve with experience [122-124]. A common misconception in the media is that "if one CI is good, then two must be better." This is not always the case; in some patients, a second CI offers no demonstrable benefit over one [e.g., 120,125,126]. Binaural hearing in CI users can be achieved with two CIs (bilateral) or with a CI in one ear and a hearing aid in the other (bimodal or electroacoustic). Bimodal listeners are patients who have some residual low-frequency hearing and wear a hearing aid on the contralateral ear. The choice of a second CI is controversial in these patients because they may be better able to access some useful binaural cues via their remaining low-frequency acoustic hearing [127].
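To give a sense of the scale of the ITD cue, the sketch below evaluates the classic Woodworth spherical-head approximation, a standard textbook formula. The head radius is a typical textbook value, not a patient measurement.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth approximation: ITD in seconds for a source at the
    given azimuth (0 = straight ahead, 90 = directly to one side)."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD ~ {woodworth_itd(az) * 1e6:.0f} us")
# At 90 deg this gives roughly 660 microseconds, the familiar maximum
# human ITD -- a sub-millisecond cue that independent, uncoordinated
# left and right CI processors cannot currently preserve.
```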
The field of CIs faces many pressing, unresolved, clinically relevant questions that would benefit substantially from a broader neurobiological perspective, particularly as the number of CI patients increases and the criteria for candidacy continue to evolve. Individual differences in outcome and benefit remain a major focus of current research aimed at understanding sources of variability beyond the strong demographic predictors. Incorporating more ecologically valid measures of speech perception in noise, music perception, listening effort, and quality of life may provide valuable assessments of benefit beyond speech perception in quiet, for example when evaluating gains following auditory training or from novel CI processing strategies. Finally, while bilateral CIs appear to provide additional benefit for many patients, particularly those implanted simultaneously at a young age, more research is needed to determine how best to provide listeners with access to binaural hearing cues.
The ear does not work in isolation from the rest of the brain; it is an inseparable component of a complex, highly organized information processing system [128]. The contribution of neurocognition to hearing is a new and growing direction of research on people with hearing loss [129]. This perspective adopts a "systems approach" to hearing and speech perception, linking early auditory processing to higher-level neurocognitive functions such as attention, cognitive control, executive function, inhibition, learning, and memory. Research on CIs is now becoming situated within a broader theoretical and conceptual framework of neurocognition and human information processing, reflecting the foundational assumption that the brain and nervous system work together in an integrative and connected fashion [14]. By viewing the brain as a processor of information that maps sensory inputs to perception, cognition, and action, new and better ways to assess outcomes and benefits, and to develop novel intervention strategies, become available.
Viewed in this broader theoretical context, we suggest that combining efforts across diverse research groups, individualizing mapping based on patient-specific factors, using advanced signal processing, and promoting robust perceptual learning through novel auditory training techniques will prove the most successful approach to helping low-functioning CI patients reach optimal levels of speech and language performance. More importantly, employing a systems-level neurobiological approach to these complex problems will help to identify the core underlying sensory, neural, and cognitive processing mechanisms responsible for the enormous individual differences in outcome and benefit following implantation. We are confident that the clinical benefits of CIs will improve significantly once we fully understand the mechanisms responsible for the variability in speech and language outcomes, especially in patients who are performing poorly with their CIs.
The authors declare that they have no competing interests.
Authors' contributions | KFF | DBP |
Research concept and design | ✓ | ✓ |
Collection and/or assembly of data | -- | -- |
Data analysis and interpretation | -- | -- |
Writing the article | ✓ | ✓ |
Critical revision of the article | ✓ | ✓ |
Final approval of article | ✓ | ✓ |
Statistical analysis | -- | -- |
The research of the authors has been supported, in part, by NIDCD grants R01 DC-00111, R01 DC-009581, and T32 DC-00012. We thank the editor and two anonymous reviewers for their helpful comments.
EIC: Tadanori Tomita, Northwestern University Feinberg School of Medicine, USA.
Received: 28-Aug-2013 Revised: 02-Oct-2013
Accepted: 12-Oct-2013 Published: 26-Oct-2013
Faulkner KF and Pisoni DB. Some observations about cochlear implants: challenges and future directions. Neurosci Discov. 2013; 1:9. http://dx.doi.org/10.7243/2052-6946-1-9