How Many Electrodes Are in a Cochlear Implant?




















The challenge we face is finding out how to electrically stimulate auditory neurons so that meaningful information about speech is conveyed to the brain. The electrical stimulation should, for example, convey information about the amplitude and the frequency of the acoustic signal. This leads us to the question "How does the auditory system encode frequencies?" Different frequencies cause maximum vibration amplitude at different points along the basilar membrane (see Figure 3).

Low frequency sounds create traveling waves in the fluids of the cochlea that cause the basilar membrane to vibrate with the largest amplitude of displacement at the apex of the basilar membrane (see Figure 3). High frequency sounds, on the other hand, create traveling waves with the largest amplitude of displacement at the base of the basilar membrane, near the stapes. If the signal is composed of multiple frequencies, the resulting traveling wave will create maximum displacement at different points along the basilar membrane.

The cochlea therefore acts like a spectrum analyzer: it decomposes complex sounds into their frequency components. The cochlea is one of the mechanisms used by our auditory system for encoding frequencies. The traveling wave causes the basilar membrane to vibrate with maximum amplitude at a place along the cochlea that depends on the frequency of stimulation. The hair cells bent by the displacement of the membrane at that place stimulate adjacent nerve fibers, which are organized according to the frequencies at which they are most sensitive.
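
The place-frequency map described above is often approximated in the literature by the Greenwood function. A rough illustrative sketch follows; the constants are commonly quoted values for the human cochlea and are assumptions not taken from this article:

```python
import math

def greenwood_cf(x):
    """Approximate characteristic frequency (Hz) at relative place x along
    the basilar membrane (x = 0 at the apex, x = 1 at the base).
    A, a, k are commonly quoted constants for the human cochlea."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

print(round(greenwood_cf(0.0)))   # apex: ~20 Hz (low frequencies)
print(round(greenwood_cf(0.5)))   # midway along the membrane
print(round(greenwood_cf(1.0)))   # base: ~20.7 kHz (high frequencies)
```

Note how the characteristic frequency grows exponentially from apex to base, which is why electrode position along the array maps naturally onto pitch.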

Each place or location in the cochlea therefore responds "best" to a particular frequency. This mechanism for determining frequency is referred to as place theory. The place mechanism for coding frequencies has motivated multi-channel cochlear implants. Another theory, called volley theory, suggests that frequency is determined by the rate at which the neurons fire.

According to the volley theory, auditory nerve fibers fire in synchrony with the period of the input signal, up to frequencies of a few kilohertz.

At low frequencies, individual nerve fibers fire at each cycle of the stimulus. At high frequencies, frequency is instead indicated by the organized firing of groups of nerve fibers. Several cochlear implant devices have been developed over the years [1]. All the implant devices have the following features in common: a microphone that picks up the sound, a signal processor that converts the sound into electrical signals, a transmission system that transmits the electrical signals to the implanted electrodes, and an electrode, or an electrode array consisting of multiple electrodes, that is inserted into the cochlea by a surgeon (Figure 4).

In single-channel implants only one electrode is used. In multi-channel cochlear implants, an electrode array is inserted in the cochlea so that different auditory nerve fibers can be stimulated at different places in the cochlea, thereby exploiting the place mechanism for coding frequencies. Different electrodes are stimulated depending on the frequency of the signal. Electrodes near the base of the cochlea are stimulated with high frequency signals, while electrodes near the apex are stimulated with low frequency signals.

The signal processor is responsible for breaking the input signal into different frequency bands or channels and delivering the filtered signals to the appropriate electrodes. The main function of the signal processor is to decompose the input signal into its frequency components, much like a healthy cochlea analyzes the input signal into its frequency components.
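
The decomposition into frequency bands can be illustrated with a toy spectrum analyzer. The sketch below uses a naive DFT and four hypothetical band edges; real processors use efficient filter banks, so this is an illustration of the idea, not an implementation of any device:

```python
import math

def band_energies(signal, fs, bands):
    """Toy spectrum analyzer: compute the signal's energy in each
    frequency band using a naive O(N^2) DFT (illustration only)."""
    n = len(signal)
    energies = [0.0] * len(bands)
    for k in range(n // 2):                       # positive frequencies only
        freq = k * fs / n
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        for i, (lo, hi) in enumerate(bands):
            if lo <= freq < hi:
                energies[i] += re * re + im * im
    return energies

# A 300 Hz tone should land almost entirely in the lowest band:
fs = 8000
sig = [math.sin(2 * math.pi * 300 * t / fs) for t in range(256)]
bands = [(100, 700), (700, 1400), (1400, 2300), (2300, 4000)]
e = band_energies(sig, fs, bands)
print(e.index(max(e)))   # 0 -> the lowest of the four bands dominates
```

A low-frequency input concentrates its energy in the apical (low) channel, mirroring how the basilar membrane localizes low tones near the apex.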

The designers of cochlear prosthesis are faced with the challenge of developing signal processing techniques that mimic the function of a healthy cochlea. The cochlear implant is based on the idea that there are enough auditory nerve fibers left for stimulation in the vicinity of the electrodes.

Once the nerve fibers are stimulated, they fire and propagate neural impulses to the brain. The brain interprets them as sounds. The perceived loudness of the sound may depend on the number of nerve fibers activated and their rates of firing. If a large number of nerve fibers is activated, then the sound is perceived as loud.

Likewise, if a small number of nerve fibers is activated, then the sound is perceived as soft. The number of fibers activated is a function of the amplitude of the stimulus current. The loudness of the sound can therefore be controlled by varying the amplitude of the stimulus current. The pitch on the other hand is related to the place in the cochlea that is being stimulated.
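
The amplitude-to-loudness control described above can be sketched as a compressive map from acoustic level into a patient's electrical dynamic range. All parameter values below (threshold and comfort currents, input range) are illustrative assumptions, not clinical values:

```python
def acoustic_to_current(level_db, thr_ua=100.0, mcl_ua=800.0,
                        in_min_db=25.0, in_max_db=65.0):
    """Map an acoustic input level (dB) into a patient's electrical
    dynamic range [thr_ua, mcl_ua] (microamps). Linear in dB, hence
    compressive in amplitude. All parameter values are illustrative."""
    level_db = max(in_min_db, min(in_max_db, level_db))  # clamp input range
    frac = (level_db - in_min_db) / (in_max_db - in_min_db)
    return thr_ua + frac * (mcl_ua - thr_ua)

print(acoustic_to_current(25.0))   # 100.0 -> soft sound at threshold current
print(acoustic_to_current(65.0))   # 800.0 -> loud sound at comfort current
```

Louder acoustic input yields larger stimulus current, which recruits more nerve fibers and is perceived as louder.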

Low pitch sensations are elicited when electrodes near the apex are stimulated, while high pitch sensations are elicited when electrodes near the base are stimulated.

In summary, the implant can effectively transmit information to the brain about the loudness of the sound, which is a function of the amplitude of the stimulus current, and the pitch, which is a function of the place in the cochlea being stimulated.

Figure 4 shows, as an example, the operation of a four-channel implant. Sound is picked up by a microphone and sent to a speech processor (a box the size of a pager) worn by the patient. The sound is then processed through a set of four bandpass filters, which divide the acoustic waveform into four channels. Current pulses are generated with amplitudes proportional to the energy of each channel, and transmitted to the four electrodes through a radio-frequency link.

The relative amplitudes of the current pulses delivered to the electrodes reflect the spectral content of the input signal (Figure 4).

For instance, if the speech signal contains mostly high frequency information, the electrodes near the base receive the largest current pulses; similarly, if the speech signal contains mostly low frequency information, the electrodes near the apex receive the largest pulses. The electrodes are therefore stimulated according to the energy level of each frequency channel. Figure 4 showed one type of cochlear implant that is being used. Several other types of implant devices have been developed over the years [1]. These devices differ in the following characteristics: electrode design, type of stimulation, transmission link, and signal processing strategy.

A brief description of each of the above device characteristics is given below. The design of electrodes for cochlear prostheses has been the focus of research for over two decades [10][11]. Some of the issues associated with electrode design are: (1) electrode placement, (2) number of electrodes and spacing of contacts, (3) orientation of electrodes with respect to the excitable tissue, and (4) electrode configuration. Electrodes may be placed near the round window of the cochlea (extracochlear), in the scala tympani (intracochlear), or on the surface of the cochlear nucleus.

Most commonly, the electrodes are placed in the scala tympani because it brings the electrodes in close proximity with auditory neurons which lie along the length of the cochlea. This electrode placement is preferred because it preserves the "place" mechanism used in a normal cochlea for coding frequencies. That is, auditory neurons that are "tuned" for high frequencies are stimulated whenever the electrodes near the base are stimulated, whereas auditory neurons that are "tuned" for low frequencies are stimulated whenever the electrodes near the apex are stimulated.

In most cases, the electrode array can be inserted into the scala tympani only to a limited depth within the cochlea. The number of electrodes, as well as the spacing between the electrodes, affects the place resolution for coding frequencies. In principle, the larger the number of electrodes, the finer the place resolution for coding frequencies.

Frequency coding is constrained, however, by two factors which are inherent in the design of cochlear prostheses: (1) the number of surviving auditory neurons that can be stimulated at a particular site in the cochlea, and (2) the spread of excitation associated with electrical stimulation. Unfortunately, there is not much that can be done about the first problem, because it depends on the etiology of deafness. Ideally, we would like to have surviving auditory neurons lying along the entire length of the cochlea.

Such a neuron survival pattern would support a good frequency representation through the use of multiple electrodes, each stimulating a different site in the cochlea. At the other extreme, consider the situation where the surviving auditory neurons are restricted to a small area of the cochlea. In that situation, a few electrodes implanted near that area would suffice.

Table 1: Characteristics of commercially available cochlear implant devices (listing, for each device, its electrodes and type of stimulation).

So, using a large number of electrodes will not necessarily result in better performance, because frequency coding is constrained by the number of surviving auditory neurons that can be stimulated. In addition, frequency coding is constrained by the spread of excitation caused by electrical stimulation. When electric current is injected into the cochlea, it tends to spread out symmetrically from the source. As a result, the current does not stimulate just a single isolated site of auditory neurons, but several.

Such a spread in excitation is most prominent in the monopolar electrode configuration, in which the active electrode is located far from the reference electrode, which acts as a ground for all electrodes (see Figure 5). The spread of excitation due to electrical stimulation can be constrained to a degree by using a bipolar electrode configuration.

In the bipolar configuration, the active and the reference (ground) electrodes are placed close to each other (Figure 5). Bipolar electrodes have been shown to produce a more localized stimulation than monopolar electrodes [12][13].

Although the patterns of electrical stimulation produced by monopolar and bipolar configurations are different, it is still not clear which of the two electrode configurations will result in better performance for a particular patient. Currently, some implant devices employ monopolar electrodes, others employ bipolar electrodes, and still others provide both types.

Table 1 shows a list of current implant devices and their characteristics. The Ineraid (also called Symbion) device uses 6 electrodes spaced 4 mm apart; only the four most apical electrodes are used, in monopolar configuration. The Nucleus device uses 22 closely spaced electrodes, with nearby electrodes paired for bipolar stimulation. The Clarion device provides both monopolar and bipolar configurations; eight electrodes are used, spaced 2 mm apart. The Med-El device also uses eight electrodes.

There are generally two types of stimulation depending on how information is presented to the electrodes. If the information is presented in analog form, then the stimulation is referred to as analog stimulation, and if the information is presented in pulses, then the stimulation is referred to as pulsatile stimulation. In analog stimulation, an electrical analog of the acoustic waveform itself is presented to the electrode.

In multi-channel implants, the acoustic waveform is bandpass filtered, and the filtered waveforms are presented to all electrodes simultaneously in analog form. One disadvantage of analog stimulation is that the simultaneous stimulation may cause channel interactions. In pulsatile stimulation, the information is delivered to the electrodes using a set of narrow pulses. In some devices, the amplitudes of these pulses are extracted from the envelopes of the filtered waveforms (Figure 4).
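
Envelope extraction of this kind can be sketched as full-wave rectification followed by a one-pole low-pass smoothing filter. The cutoff frequency below is an assumed value; actual devices vary:

```python
import math

def envelope(signal, fs, cutoff_hz=200.0):
    """Estimate the envelope of a filtered waveform: full-wave
    rectification followed by a one-pole low-pass filter.
    The cutoff is an illustrative assumption."""
    alpha = math.exp(-2 * math.pi * cutoff_hz / fs)
    out, y = [], 0.0
    for x in signal:
        y = alpha * y + (1 - alpha) * abs(x)   # rectify, then smooth
        out.append(y)
    return out

fs = 8000
tone = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(400)]
env = envelope(tone, fs)
print(round(env[-1], 2))  # settles near the average of the rectified tone, ~0.6
```

The smoothed envelope then sets the amplitude of the pulses delivered on that channel.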

The advantage of this type of stimulation is that the pulses can be delivered in a non-overlapping (i.e., non-simultaneous) fashion, thereby minimizing channel interactions. The rate at which these pulses are delivered to the electrodes, i.e., the pulse rate, varies among devices.
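
Non-overlapping delivery can be illustrated by a round-robin scheduler that gives each channel its own time slot within a stimulation frame. The pulse width is an assumed value, chosen only for illustration:

```python
def interleave_pulses(channel_amps, pulse_width_us=25.0):
    """Schedule one stimulation frame: each channel gets one biphasic
    pulse, delivered one after another so no two pulses overlap in time.
    Returns (channel, start_time_us, amplitude) tuples.
    Timing values are illustrative."""
    schedule = []
    t = 0.0
    for ch, amp in enumerate(channel_amps):
        schedule.append((ch, t, amp))
        t += 2 * pulse_width_us      # biphasic: two phases per pulse
    return schedule

frame = interleave_pulses([0.8, 0.1, 0.4, 0.2])
for ch, start, amp in frame:
    print(ch, start, amp)
# Pulses on the four channels start at 0, 50, 100, 150 microseconds.
```

Because no two channels carry current at the same instant, the electrical fields of different electrodes cannot sum, which is precisely the channel-interaction problem this scheme avoids.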

High pulse rates tend to yield better performance than low pulse rates. Once the electrodes are in place, how do signals get transmitted from the external processor to the implanted electrodes? There are currently two ways of transmitting the signals: (1) through a transcutaneous connection, and (2) through a percutaneous connection (see Figure 6). The transcutaneous system transmits the stimuli through a radio frequency link.

In this system, an external transmitter is used to encode the stimulus information for radio-frequency transmission from an external coil to an implanted coil. The internal receiver decodes the signal and delivers the stimuli to the electrodes (Figure 6).

The transmitter and the implanted receiver are held in place on the scalp by a magnet. The advantage of this system is that the skin on the scalp is closed after the operation, thus avoiding possible infection.

The disadvantage of this system is that the implanted electronics lie beneath the skin and cannot be accessed without surgery. Another disadvantage is that the transcutaneous connector contains magnetic materials which are incompatible with MRI scanners. Most cochlear implant devices use the transcutaneous system. The percutaneous system transmits the stimuli to the electrodes directly through plug connections (Figure 6). In this system, there are no implanted electronics other than the electrodes. The major advantage of the percutaneous system is flexibility and signal transparency.

The signal transmission is in no way constrained by the implanted receiver circuitry. It is therefore ideal for research purposes for investigating new signal processing techniques. The Ineraid device is the only device that uses percutaneous connectors. The last, and perhaps most important, difference among implant devices is in the signal processing strategy used for transforming the speech signal to electrical stimuli.

Several signal processing techniques have been developed over the past 25 years. Some of these techniques are aimed at preserving waveform information, others are aimed at preserving envelope information, and others are aimed at preserving spectral features (e.g., formants). A more detailed description of each of these signal processing techniques will be presented in the following sections.

Representative results for each signal processing strategy will be presented. Not all people with hearing impairment are candidates for cochlear implantation. Certain audiological criteria need to be met. First, the hearing loss has to be severe or profound, and it has to be bilateral (i.e., present in both ears).

Profound deafness [15] is defined as a hearing loss of 90 dB or more. Hearing loss is typically measured as the average of pure-tone hearing thresholds at several standard audiometric frequencies, expressed in dB with reference to normal thresholds. Patients' speech perception abilities are typically evaluated using sentence, monosyllabic word, vowel, and consonant tests.
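
The averaging and the 90 dB criterion can be expressed directly. The particular threshold values below are made-up examples, and exact clinical protocols vary:

```python
def pure_tone_average(thresholds_db):
    """Average of pure-tone hearing thresholds (dB)."""
    return sum(thresholds_db) / len(thresholds_db)

def meets_profound_criterion(thresholds_db):
    """True if the average loss is 90 dB or more, per the definition
    of profound deafness quoted in the text."""
    return pure_tone_average(thresholds_db) >= 90.0

print(meets_profound_criterion([85, 95, 100]))  # True  (average 93.3 dB)
print(meets_profound_criterion([60, 70, 80]))   # False (average 70 dB)
```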

Implant patients tend to achieve higher scores on sentence tests than on any other test. This is because they can use higher-level knowledge, such as grammar, context, and semantics, to fill in missing information. For example, a patient might only hear the first two words and the final word in a sentence, but can use context to "fill in" the blanks.

Sentence tests are considered open-set tests because the patient does not know the list of all possible word choices.

Tests of vowel and consonant recognition, on the other hand, are considered closed-set tests. In these tests the patient knows all of the possible choices, but the tests themselves are not necessarily easier, because all the items in the list are phonetically similar. In a vowel test, for example, the patient may listen to words like "heed, had, hod, head, hud, hid, hood, who'd", which differ only in the middle segment (i.e., the vowel).

Vowel and consonant tests are aimed at assessing a patient's ability to resolve spectral and temporal information. The most difficult test, by far, is the recognition of monosyllabic words. One such test, the NU-6 word lists, was developed at Northwestern University and consists of lists of 50 monosyllabic words [16]. Other standardized tests include the recognition of keywords from the Central Institute for the Deaf (CID) sentences of everyday speech, recognition of 25 two-syllable words (spondees), and the Iowa test [17], which consists of sentences, vowels, and consonants recorded on a laserdisc in audio, visual, and audio-visual formats.

Different tests are used to evaluate the speech perception abilities of children. These tests are specially designed to reflect the language and vocabulary level of the child. It makes no sense, for example, to include the word or picture of a "turtle" in the test, if the child does not know what a turtle is.

A good review on various tests developed to evaluate the speech perception abilities of children can be found in [18]. Single-channel implants provide electrical stimulation at a single site in the cochlea using a single electrode. These implants are of interest because of their simplicity in design and their low cost compared to multi-channel implants. They are also appealing because they do not require much hardware and conceivably all the electronics could be packaged into a behind-the-ear device.

Single-channel implants were first implanted in human subjects in the early 1970s. At the time, there was a lot of skepticism about whether single-channel stimulation could really work [19]. Doctors and scientists argued that electrical stimulation of the auditory nerve could produce nothing but noise.

Despite the controversy, researchers in the United States and in Europe kept working on the development of single-channel prostheses. The House single-channel implant was originally developed by William House and his associates in the early 1970s [20][21].

The acoustic signal is picked up by a microphone, amplified, and then processed through a bandpass filter. The bandpassed signal is then used to modulate a 16 kHz carrier signal. The modulated signal goes through an output amplifier and is applied to an external induction coil. The output amplifier allows the patient to control the intensity of the stimulation.

The output of the implanted coil is finally sent, without any demodulation, to the implanted active electrode in the scala tympani. Information about gross temporal fluctuations of the speech signal is contained in the envelope of the modulated signal. For sound pressures between 55 and 70 dB, the envelope output changes linearly, but for sound pressures above 70 dB, the envelope output saturates at a level just below the patient's level of discomfort.

That is, for speech signals above 70 dB the envelope output is clipped (see example in Figure 8). Consequently, the temporal details in the speech signal may be distorted or discarded. The periodicity of the signal, however, is preserved. As shown in Figure 8, bursts of the 16 kHz carrier appear to be in synchrony with the period of voiced segments as well as other low-energy segments of the input signal.
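
The carrier modulation and envelope clipping described above can be sketched as follows. The clipping level, sample rate, and signal values are illustrative, not taken from the device specification:

```python
import math

def modulate(signal, fs, carrier_hz=16000.0, clip_level=1.0):
    """Amplitude-modulate a (bandpassed) audio signal onto a 16 kHz
    carrier, clipping the envelope at a saturation level, in the spirit
    of the House device description. Parameter values are illustrative."""
    out = []
    for t, x in enumerate(signal):
        env = max(-clip_level, min(clip_level, x))        # saturate
        out.append(env * math.sin(2 * math.pi * carrier_hz * t / fs))
    return out

fs = 64000                      # sample fast enough to represent a 16 kHz carrier
audio = [1.5 * math.sin(2 * math.pi * 250 * t / fs) for t in range(640)]
mod = modulate(audio, fs)
print(max(abs(v) for v in mod) <= 1.0)  # True: the envelope is clipped
```

The clipped envelope models the saturation above 70 dB: fine amplitude detail is lost, while the periodicity of the low-frequency input survives in the bursts of carrier.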

In studies by Rosen et al. and by Danhauer et al., only exceptional patients were able to obtain scores above zero on monosyllabic word (NU-6) identification. The Vienna single-channel implant was developed at the Technical University of Vienna, Austria, in the early 1980s [26].

The signal is first pre-amplified and then compressed using a gain-controlled amplifier with a short attack time. The amount of compression is adjusted according to the patient's dynamic range.
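
A gain-controlled amplifier with fast attack and slow release can be sketched as follows. The time constants and target level are illustrative assumptions, not the Vienna device's actual values:

```python
import math

def agc(signal, fs, target=0.3, attack_ms=0.5, release_ms=50.0):
    """Simple feed-forward automatic gain control: track the signal
    magnitude with a fast attack / slow release follower, then scale
    the signal toward a target level. Time constants are illustrative."""
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 1e-6, []
    for x in signal:
        mag = abs(x)
        a = a_att if mag > env else a_rel      # attack fast, release slowly
        env = a * env + (1 - a) * mag
        out.append(x * target / max(env, 1e-6))
    return out

fs = 8000
loud = [0.9 * math.sin(2 * math.pi * 500 * t / fs) for t in range(800)]
out = agc(loud, fs)
print(max(abs(v) for v in out[400:]) < 0.9)  # True: gain reduced after settling
```

Because the compressor scales the waveform rather than hard-limiting it, the temporal shape of the signal is preserved, which is the point made about the Vienna design below.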

The compressed signal is then fed through a frequency-equalization filter (Figure 10), which also attenuates frequencies outside the speech range. The filtered signal is amplitude modulated for transcutaneous transmission. The implanted receiver demodulates the radio-frequency signal, and sends the demodulated stimuli to the implanted electrode. The automatic gain control ensures that the temporal details in the analog waveform are preserved regardless of the input signal level.

It therefore prevents high-level input signals from being clipped. The frequency-equalization filter ensures that all frequencies up to 4 kHz, which are very important for understanding speech, are audible to the patients.

Without the equalization filter, only low frequency signals would be audible. This is because the electrical threshold (the smallest current needed to elicit an auditory sensation) is higher at high frequencies. The frequency response of the equalization filter (Figure 10) is adjusted for each patient so that sinusoids with frequencies up to 4 kHz are equally loud. Hochmair-Desoyer et al. reported that some patients obtained open-set speech understanding with this device. Unfortunately, not all patients did as well, and other researchers reported poorer results. It was not surprising that relatively few patients could obtain open-set speech understanding with single-channel implants, given the limited spectral information.

Single-channel stimulation does not exploit the place code mechanism used by a normal cochlea for encoding frequencies, since only a single site in the cochlea is being stimulated. Temporal encoding of frequency by single nerve fibers is restricted, due to the neural refractory period, to frequencies below about 1 kHz [31]. It is also conceivable that patients could extract frequency information from the periodicity of the input stimulus.

This is possible, but only for stimulus frequencies up to a few hundred hertz. Experiments [27] showed that implant patients, as well as normal-hearing listeners [32], cannot discriminate differences in pitch for stimulus frequencies above a few hundred hertz. Single-channel stimulation therefore restricts the amount of spectral information that an implant patient can receive to frequencies below 1 kHz. This is not sufficient for speech perception, because there is important information in the speech signal at 1 kHz and beyond.

But what kind of information is available in the speech signal below 1 kHz? The speech signal contains information about the fundamental frequency, the first formant F1, and sometimes (depending on the vowel and the speaker) the second formant, F2. The presence of fundamental frequency indicates the presence of voiced sounds (e.g., vowels). Changes in fundamental frequency also give information about sentence prosody, i.e., the intonation pattern of the sentence. Patients could also discriminate between certain vowels which differ in F1 frequency.

The transmitted frequency information however is limited and insufficient for speech recognition. Yet, some of the exceptional patients achieved high scores on open-set speech recognition tests. It remains a puzzle how some single-channel patients can perform so well given the limited spectral information they receive.

Unlike single-channel implants, multi-channel implants provide electrical stimulation at multiple sites in the cochlea using an array of electrodes. An electrode array is used so that different auditory nerve fibers can be stimulated at different places in the cochlea, thereby exploiting the place mechanism for coding frequencies. When multi-channel implants were introduced in the 1980s, several questions were raised regarding multi-channel stimulation:

How many electrodes should be used? If one channel of stimulation is not sufficient for speech perception, then how many channels are needed to obtain high levels of speech understanding? Since more than one electrode will be stimulated, what kind of information should be transmitted to each electrode?

Should it be some type of spectral feature or attribute of the speech signal that is known to be important for speech perception (e.g., formant frequencies), or some type of waveform? Researchers experimented with different numbers of electrodes.

Some devices used a large number of electrodes (22) but only stimulated a few, while other devices used a few electrodes and stimulated all of them. The answer to the question of how many channels are needed to obtain high levels of speech understanding is still the subject of debate. Depending on how researchers tried to address the second question, different types of signal processing techniques were developed. The various signal processing strategies developed for multi-channel cochlear prostheses can be grouped into two categories: waveform strategies and feature extraction strategies.

These strategies differ in the way information is extracted from the speech signal and presented to the electrodes. The waveform strategies try to present some type of waveform in analog or pulsatile form derived by filtering the speech signal into different frequency bands, while the feature extraction strategies try to present some type of spectral features, such as formants, derived using feature extraction algorithms. A review of these signal processing strategies is given next, starting with waveform strategies and continuing with feature extraction strategies.

The compressed-analog (CA) approach was originally used in the Ineraid device manufactured by Symbion, Inc. In the CA approach (see the block diagram in the figure), the signal is first compressed using an automatic gain control, and then filtered into four contiguous frequency bands.

The filtered waveforms go through adjustable gain controls and are then sent directly, through a percutaneous connection, to four intracochlear electrodes. The filtered waveforms are delivered simultaneously to the four electrodes in analog form. The electrodes, spaced 4 mm apart, operate in monopolar configuration.

Figure 13 shows, as an example, the four bandpassed waveforms produced for the syllable "sa" using a simplified implementation of the CA approach. The CA approach, used in the Ineraid device, was very successful because it enabled many patients to obtain open-set speech understanding.

Dorman et al. reported results for patients using this device. The CA multi-channel approach clearly yielded superior speech recognition performance over the single-channel approach [37]. This was not surprising, given the increased frequency resolution provided by multiple-channel stimulation. The CA approach uses analog stimulation that delivers four continuous analog waveforms to four electrodes simultaneously.

A major concern associated with simultaneous stimulation is the interaction between channels caused by the summation of electrical fields from individual electrodes [40]. Hence, having more electrodes than active channels can be beneficial when programming CIs around problem locations within the cochlea.

Stimulation Modes

Another reason there is no one-to-one mapping between channels and electrodes is that the number of electrodes available for stimulation depends on the mode of stimulation used. There are two fundamental modes of stimulation employed by CIs to deliver electrical current to the auditory nerve: monopolar and bipolar. In the monopolar stimulation mode there is one active electrode and one return electrode (also called the ground electrode) located in the implant package, and current flows between the two.

Since the active and return electrodes are widely spaced, the current spreads over a wider area, stimulating a larger neuronal population. In the bipolar mode, two adjacent electrodes are paired as active and return, and stimulation is tightly focused on a small population of auditory nerve fibers. Thus, two electrodes provide only one channel between them: a 16-electrode system delivers only 8 channels, and a 24-electrode system delivers 12 channels in the bipolar configuration.
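
The electrode-to-channel arithmetic can be made explicit:

```python
def usable_channels(n_electrodes, mode):
    """Channels available from an intracochlear electrode array under the
    two stimulation modes described in the text."""
    if mode == "monopolar":
        return n_electrodes          # each electrode pairs with a remote ground
    if mode == "bipolar":
        return n_electrodes // 2     # adjacent electrodes are paired
    raise ValueError("unknown stimulation mode")

print(usable_channels(16, "bipolar"))    # 8
print(usable_channels(24, "bipolar"))    # 12
print(usable_channels(16, "monopolar"))  # 16
```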

There are distinct advantages to using each stimulation mode. For example one of the factors that determines loudness of sound is the number of neurons stimulated. In the monopolar mode, since there is a spread of current over a large number of neurons it can achieve higher loudness levels with lower current, in comparison to the bipolar mode.

However, in the bipolar mode, stimulation of electrodes in close proximity provides more spatially selective stimulation than the monopolar mode (see Osberger and Fisher). Thus, CIs can deliver both monopolar and bipolar modes of stimulation, and the selection of either is dictated by individual needs and responses (the ability to produce adequate loudness with low current levels) and by the stimulation strategy used (simultaneous or non-simultaneous); see the section on electrode design in multichannel CIs and the anatomy of the cochlea.

Single Channel and Multichannel

To maximally transfer acoustic information to the brain via electrical signals, all CI systems need to closely adhere to the "natural" laws of signal transfer, as demonstrated and employed by the normally functioning cochlea. That is, researchers must consider how best to code speech signals electrically, in a way that makes best use of the cochlea's anatomic organization and mimics the natural processes occurring within the auditory system.

The information contained in the speech signal can be grossly divided into intensity and frequency subcomponents. Each of these can be electrically transferred via the CI to the brain. Intensity coding is achieved by manipulating the electrical current pulse width, pulse height, and the quantity of auditory nerve fibers stimulated in the cochlea. Lining the cochlea are many thousands of hair cells that convert sound into electrical signals. Cochlear implants have only up to a couple of dozen electrodes, each of which performs a function similar to that of a hair cell or group of hair cells.
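
The interplay of pulse height and pulse width can be summarized by the charge per phase they deliver; the numbers below are illustrative, not clinical values:

```python
def pulse_charge_nc(amplitude_ua, width_us):
    """Charge per phase (nanocoulombs) of a stimulation pulse:
    current (microamps) x duration (microseconds) / 1000. Loudness grows
    with delivered charge, so it can be raised via amplitude or width."""
    return amplitude_ua * width_us / 1000.0

print(pulse_charge_nc(500, 25))   # 12.5 nC
print(pulse_charge_nc(250, 50))   # 12.5 nC -> same charge from a wider, weaker pulse
```

This is why pulse width and pulse height are interchangeable levers for intensity coding: a narrower pulse at higher current can deliver the same charge as a wider pulse at lower current.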

Boxes B, C, and D illustrate the cochlea in an unrolled configuration. The base of the cochlea, which is where the sound enters, responds to the highest pitches. This is illustrated in box B. The apex, or innermost part of the cochlea, responds to the low-frequency tones, shown in box D. The locations in between the base and the apex correspond to the range of frequencies in between the two extremes.

Cochlear implants divide the sound into channels, which ultimately drive electrodes, or pairs of electrodes in the case of MED-EL.

Speech perception can be quite good in quiet situations even with just eight channels, providing quality similar to that of the telephone network. This is great news for cochlear implant manufacturers and users. AB recipients can use fine spectral and temporal information to hear sound accurately, enabling them to better understand tonal information in speech and to enjoy music.

The implant, together with the sound processor, forms a closed loop that ensures proper functioning of the system. Designed for a gentle cochlear insertion [11][12][13], the electrodes deliver spectral bands of sound to help you understand speech and enjoy music. Advanced technology is designed to support current and future generations of sound processors and features. This sophisticated communication link receives digital representations of sound from the external sound processor and sends information about the status of the implant system and your hearing nerve back to the processor.

The multi-magnet assembly in the HiRes Ultra 3D implant allows users to safely undergo high resolution imaging, such as 3.0 Tesla MRI.

Other implants have differing conditions for conducting safe MRI procedures. Both electrodes share the HiFocus design elements.

HiFocus electrode contacts are encased in a slim, flexible, tapered silicone carrier to minimize insertion forces and damage to cochlear structures during surgery. By minimizing cochlear disruption, HiFocus electrodes offer an increased opportunity for better hearing outcomes. The HiFocus SlimJ or HiFocus Mid-Scala electrode provides the surgeon with maximum surgical flexibility based upon surgical preference while maintaining patient performance.

It is offered as a straight electrode with a gentle curvature, designed to be easily and smoothly inserted by freehand technique or with forceps.

Next to easy insertion, the main benefit of the gentle curvature is to ensure electrode movement in the apical direction. Key to the design are the elements that allow a surgeon to easily handle the electrode in the surgical space and insert it with minimal trauma to the delicate cochlear structures.

The wing feature allows for the best possible visualization of the cochlea, and precise control of the angle and speed of insertion. It provides an easy area for a surgeon to hold and control the electrode, even into the facial recess. The tip feature is intended to ease insertion through the round window.

Graph: angular insertion depths of HiFocus SlimJ across 40 samples.

Cochlear structure preservation allows for the best possible hearing outcomes in recipients.

Studies have shown that recipients may perform better when cochlear structures are undamaged by the electrode insertion. The HiFocus SlimJ preserves cochlear structures better than any other lateral wall electrode tested to date. Key to the design are the precurved shape, allowing the HiFocus Mid-Scala electrode to be inserted consistently with minimal cochlear trauma [18]; a straight tip region to avoid tip fold-overs; and, if desired, a dedicated insertion tool onto which the electrode can be loaded to support a controlled insertion.

It can be inserted using the round window, extended round window, or small cochleostomy approach.


