Abstract
On the one hand, processors for hearing aids are highly specialized for audio processing; on the other hand, they have to meet challenging hardware restrictions. This paper provides an overview of the requirements, architectures, and implementations of these processors. Special attention is given to the increasingly common application-specific instruction-set processors (ASIPs). The main focus of this paper lies on hardware-related aspects such as the processor architecture, the interfaces, the application-specific integrated circuit (ASIC) technology, and the operating conditions. The different hearing aid implementations are compared in terms of power consumption, silicon area, and computing performance for the algorithms used. Challenges for the design of future hearing aid processors are discussed based on current trends and developments.
1 Introduction
Modern hearing aids, like the one shown in Fig. 1, have to meet a variety of technical requirements. First of all, the power consumption of hearing aids is limited: to achieve an acceptable battery life, the average power consumption should be in the range of a few milliwatts. The reason for the low energy budget is the small physical size of battery-powered hearing aids. At the same time, the demand for more audio processing performance and memory capacity is steadily growing. Newly developed algorithms provide more or improved features and place increasing demands on audio quality. In addition, hearing aids are required to be individually fitted to the hearing aid user, to adapt to constantly changing environmental conditions, and to connect wirelessly to other electronic devices. These requirements and the high degree of flexibility and programmability make the hardware design for hearing aid devices challenging.
Figure 2 shows the system components and peripherals of a state-of-the-art hearing aid. Typically, hearing aids contain a central processing unit that provides the functionality and connects all other components such as the receiver and microphones. The design and implementation of the processor is challenging and involves numerous trade-offs due to the wide range of requirements and the large design space. The implementation alternatives in a multi-dimensional design space are shown in Fig. 3. Various hearing aid processor architectures and implementations were introduced in the literature. There are analog, mixed-signal, or purely digital hearing aids. Some use hard-wired processing and control circuits, others use fully programmable application-specific instruction-set processors (ASIPs) with custom instructions and hardware accelerators. This survey compares these hearing aids from a hardware perspective.
Related work addressing hearing aid signal processing, i.e., technical studies, surveys, and overview papers, focuses mainly on current and future hearing aid algorithms. The first related study [26] presents state-of-the-art signal processing in hearing aids. Among the studied techniques and algorithms are feedback reduction, directional microphones, environment recognition, and noise and distortion reduction. Current limitations as well as future trends like binaural and music processing are addressed. Binaural processing as a future hearing aid processing technique is also covered in the two surveys [21, 55] from 2005 and 2009. Both papers present the state of the art, challenges, and future trends of signal processing in hearing aids. Physiological requirements due to hearing impairment are described, as are the different audio processing methods such as directional microphones, noise reduction, acoustic feedback suppression, classification, and compression. No related work focuses on the hardware perspective: little information is given about the hardware architectures, circuit implementations, and the different design methods. This survey covers these hardware-related topics, including a review of current processor architectures. A particular focus is on application-specific instruction-set processors (ASIPs).
The structure of this survey is as follows: Section 2 provides a list of the algorithms that are implemented on hearing aid processors, which are part of this survey. The hearing aid processors are described in detail in Section 3. The differences in the architecture of hard-wired and ASIP-based hearing aid processors are discussed. The remaining sections cover the ASIC technology and supply voltage (Section 4), power consumption (Section 5), silicon area (Section 6), operating clock frequency (Section 7), audio datapath width (Section 8) and on-chip memory in ASIP-based hearing aid systems (Section 9). Section 10 concludes this paper and points out possible future trends.
2 Algorithms for Hearing Aid Devices
A typical high-end hearing aid processing chain is shown as a block diagram in Fig. 4. Multiple microphones enable directional filtering. Therefore, beamforming (BMF) and adaptive directional microphone (ADM) algorithms are the first in the chain and aim to increase the signal-to-noise ratio (SNR) by performing directional filtering. Feedback is then suppressed with a feedback cancellation (FBC) algorithm by analyzing the output signal and detecting feedback loops. Algorithms that process frequency-domain data, such as the noise reduction (NR) and dynamic range compression (DRC) algorithms, require an analysis and synthesis filter bank. Classification algorithms generally generate control signals for the processing chain. A list of algorithms is included in Table 1. This list contains exclusively algorithms that are part of a processing chain in state-of-the-art hearing aids. Publications on the implementation, optimization, and application of these algorithms on state-of-the-art hearing aid processors are also included in Table 1. There is a trend towards computationally more demanding algorithms. In recent years, machine learning and deep learning algorithms [40, 44, 52] and binaural processing algorithms [46] have been used. Recently proposed algorithms of this kind [57, 67, 69], for which no implementation details on a hearing aid processor are known, are not listed in Table 1.
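The directional-filtering principle behind the BMF block can be illustrated with a minimal delay-and-sum sketch. This is a generic textbook formulation with synthetic signals, not the algorithm of any surveyed implementation: each microphone signal is time-aligned towards the target direction and the aligned signals are averaged, so the target adds coherently while diffuse noise averages down.

```python
import numpy as np

def delay_and_sum(mic_signals, delays):
    """Align each microphone signal by an integer sample delay and
    average, so the target source adds coherently across microphones."""
    aligned = [np.roll(x, -d) for x, d in zip(mic_signals, delays)]
    return np.mean(aligned, axis=0)

# Synthetic example: the target tone reaches mic 1 one sample after mic 0.
fs = 16000
t = np.arange(256) / fs
target = np.sin(2 * np.pi * 440 * t)
mic0 = target
mic1 = np.roll(target, 1)

# Steering delays [0, 1] compensate the propagation difference exactly,
# so the beamformer output reconstructs the target signal.
out = delay_and_sum([mic0, mic1], delays=[0, 1])
```

Real BMF/ADM implementations use fractional delays and adaptive weights; the integer-delay version above only captures the underlying alignment idea.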
3 Hearing Aid Processor Architectures
In recent decades, various hearing aid processors have been proposed in the literature. All hearing aid processors are subject to comparably strict requirements regarding the limited energy budget, available chip area, and performance. However, a wide range of different architectures, algorithms, approaches, and technologies has been introduced to meet these stringent requirements. A total of 30 research and commercial processors published between 1996 and 2020 are listed in Table 2. This table provides a comparison of the architecture, ASIC technology, supply voltage, average power consumption, silicon area, and operating clock frequency of the various hearing aid systems.
The processor architectures are designed and optimized to efficiently execute particular hearing aid algorithms listed in Table 1. The architectures of these processors can be divided into three main classes: hard-wired with dedicated processing blocks, ASIPs, and ASIPs with hardware accelerators.
3.1 Hard-Wired Architectures
In case of a hard-wired architecture, all parts of the hearing aid processing chain are implemented by dedicated circuits. Their fundamental function is fixed and can only be changed before manufacturing. There are pure analog [45, 64, 70], mixed-signal [12, 17, 30,31,32], or pure digital [3, 65, 71, 72, 74] hard-wired hearing aids.
A digital hard-wired hearing aid is highlighted in the following. This dedicated architecture, originally proposed in [72] and published in 2014, is characterized by its flexibility compared to related architectures. It includes a core-based architecture consisting of a memory management unit for data exchange, a control unit, and an arithmetic unit for processing. Therefore, processing is easier to control compared to related architectures. Its block diagram, architecture, and die photo are shown in Fig. 5. The authors of [72] propose a sample-based perceptual multiband noise reduction algorithm as the basis for the design of a dedicated digital hearing aid architecture. This noise reduction algorithm is part of the hearing aid processing chain and is represented as a block diagram in Fig. 5. An analysis and synthesis filter bank (AFB and SFB), a noise reduction (NR) algorithm [24, 42], an insertion gain (IG), and wide dynamic range compression (WDRC) are integrated on the hearing aid chip. The average power consumption is 83.7 μW. A 90 nm ASIC technology is used and the digital core voltage is 0.6 V.
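The core operation behind such noise reduction blocks can be sketched with basic single-band spectral subtraction. This is a simplified illustration, not the perceptual multiband algorithm of [72]: an estimated noise magnitude is subtracted per frequency bin, and a spectral floor prevents negative magnitudes.

```python
import numpy as np

def spectral_subtraction(noisy_mag, noise_mag, floor=0.05):
    """Subtract the estimated noise magnitude from the noisy magnitude
    spectrum; clamp to a spectral floor so no bin becomes negative
    (unconstrained subtraction causes audible 'musical noise')."""
    return np.maximum(noisy_mag - noise_mag, floor * noisy_mag)

# Four example frequency bins with a flat estimated noise floor of 0.1.
noisy = np.array([1.0, 0.5, 0.2, 0.05])
noise = np.array([0.1, 0.1, 0.1, 0.1])
enhanced = spectral_subtraction(noisy, noise)
# The last bin falls below the noise estimate and is clamped to the floor.
```

Multiband variants apply this per sub-band with band-dependent over-subtraction factors; the floor parameter here is an illustrative choice.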
3.2 Application-Specific Instruction-Set Processors
Application-specific instruction-set processor (ASIP) architectures include a digital signal processor (DSP) for signal processing [35, 47,48,49,50, 56]. Because the DSP architecture is optimized for typical hearing aid algorithms, it is also denoted here as an ASIP. The target algorithms can be modified or replaced by changing the program code, which offers greater flexibility compared to hard-wired architectures. However, due to the higher flexibility offered by the processor architecture and the increased memory requirements, the power consumption and silicon area are generally higher than for hard-wired architectures. Instruction-level and data-level optimizations improve the efficiency of signal processing, and custom instructions increase processing performance.
A hearing aid with a DSP for signal processing is presented in [48]; its block diagram and photo are shown in Fig. 6. This hearing aid publication is highlighted because the authors propose an algorithm-to-silicon flow in addition to the proposed hearing aid chip. This flow is integrated into the chip design flow and supports accurate and fast simulations, ASIC synthesis, optimization, and verification [48]. These tools are useful for handling the overall complexity and drastically decrease the design time if the underlying ASIC technology is changed. The DSP architecture consists of a datapath with several general-purpose execution units, a complex-valued multiplier, and a controller with a program read-only memory. The operating clock frequency of the DSP is reduced by increased parallelism and reduced memory accesses. A fast Fourier transform (FFT) case study for the architecture shows how a radix-8 implementation can minimize memory accesses and increase the number of parallel operations; over 20 operations per cycle are achieved. In addition to clock gating and low-voltage operation techniques, the authors propose to partition the datapath and the read-only memory (ROM) of the complete architecture. The underlying concept is that different types of operations do not require the same hardware resources. This partitioning is implemented to optimize the power consumption and depends on the operation mode: FFT and non-FFT. Consequently, only one of the ROMs needs to be accessed in each clock cycle, which reduces the power consumption of the DSP by about 40%. The DSP, the instruction read-only memories (ROMs), and the parameter static random-access memories (SRAMs) are integrated in one hearing aid chip. The final chip consumes on average 0.66 mW at a core voltage of 1.05 V. The analog front end, including a digital-to-analog converter (DAC), a programmable gain amplifier, and a serial interface, is integrated on a separate chip [34].
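The memory-access advantage of a higher radix can be made concrete with a small back-of-the-envelope sketch. Assuming each butterfly pass reads and writes the full N-point buffer once, a radix-r decimation FFT needs log_r(N) passes, so radix-8 needs one third of the memory passes of radix-2 (the exact savings in [48] depend on the memory organization, which this sketch does not model).

```python
import math

def fft_memory_passes(n, radix):
    """Number of butterfly passes over the N-point buffer for a
    radix-r FFT; each pass implies one full read and write of memory."""
    return round(math.log(n, radix))

# For a 512-point FFT, radix-8 needs one third of the memory passes
# that radix-2 needs, at the cost of wider butterfly units.
passes_radix2 = fft_memory_passes(512, 2)
passes_radix8 = fft_memory_passes(512, 8)
```

The trade-off is that each radix-8 butterfly is wider, which the partitioned datapath in [48] exploits by executing many operations of one pass in parallel.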
3.3 ASIPs with Hardware Accelerators
There are hearing aid processing architectures that combine ASIPs with dedicated hard-wired accelerators. These accelerators are used for frequent and computationally intensive tasks. The flexibility and complexity of these accelerators varies. A list of accelerators for hearing aids can be found in Table 3. The hearing aid processing task is mapped to either the ASIP or the accelerator. The goal is to process the intensive computing tasks on the accelerator, while the ASIP performs computations in parallel and controls the accelerator processing [54].
The block diagram of a highlighted ASIP with accelerators [19] is shown in Fig. 7, and the corresponding layout view in Fig. 8. This research hearing aid system-on-chip contains four ASIPs on one chip to test different processor and algorithm configurations for processing performance and power efficiency. The ten co-processors, which can be used in parallel, accelerate the computation of the coordinate rotation digital computer (CORDIC) algorithm for hyperbolic and trigonometric functions such as sine, cosine, square root, exponential, tangent, and division, with an average speed-up of 28 compared to a software implementation on the ASIP. The ASIP can configure several accelerators with different operating modes to calculate different results in parallel.
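The CORDIC principle that such co-processors implement can be sketched as a floating-point reference model. This is the generic rotation-mode circular CORDIC, not the fixed-point co-processor of [19]: the angle is rotated towards zero in steps of arctan(2^-i), so only shifts, additions, and a small lookup table are needed in hardware.

```python
import math

def cordic_sin_cos(angle, iterations=24):
    """Rotation-mode circular CORDIC: computes (sin, cos) of an angle
    in [-pi/2, pi/2] using only shifts, adds, and a small arctan table."""
    atan_table = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Pre-scale by the inverse CORDIC gain so the result has unit length.
    inv_gain = 1.0
    for i in range(iterations):
        inv_gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = inv_gain, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0  # rotate towards zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atan_table[i]
    return y, x  # (sin(angle), cos(angle))
```

Each iteration contributes roughly one bit of precision, which is why a hardware CORDIC unit with an iteration count matched to the audio datapath width is an attractive accelerator.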
Using the same hardware accelerator for different audio signal processing tasks is also applied in other related work. In [40], an arithmetic unit with a dual MAC and butterfly unit can operate either in FFT mode or in CNN mode; by sharing hardware resources, 42% of the hardware complexity can be saved. In [54], a streaming DSP hardware accelerator is introduced that can compute applications such as keyword recognition or other classification algorithms. Any of the co-processors in [19] can be disabled by clock gating; however, the accelerated operations are elementary and are often used in hearing aid applications. This also applies to the FIR filter accelerators presented in [4, 51]. The accelerators presented in [4,5,6, 9, 25, 33, 63] are more complex and specific, because they implement complete algorithms, such as noise reduction (NR) and feedback cancellation (FBC), in hardware. One advantage is the efficiency gained by the hard-wired implementation. If an algorithm needs to be changed, it is possible to use ASIP processing resources instead of the accelerators.
4 ASIC Technology and Supply Voltage
The advantages of the steadily decreasing feature sizes of CMOS semiconductor technology are exploited in commercial and research hearing aids. The feature sizes of hearing aids from 1996 to 2020 are shown in Fig. 9. Hearing aids with an analog front end (AFE), including analog-to-digital converters (ADCs), programmable gain amplifiers (PGAs), or digital-to-analog converters (DACs), are marked. These are mixed-signal or analog hearing aid designs, which have on average larger feature sizes due to more restrictive design rules and a greater sensitivity to noise [61]. To overcome these limitations, the authors of [19, 34, 48, 53] propose a chip-level integration with two separate chips. Each chip is fabricated in a different ASIC technology, so that the most appropriate feature size can be utilized independently for the digital and the analog components of the hearing aid. The rate at which the feature size shrinks has decreased significantly for hearing aid implementations in recent years. This is due to the higher costs for design and manufacturing with smaller feature sizes [61]. The supply voltages of hearing aid implementations are shown in Fig. 10. Since the feature size remained almost constant over the last years (Fig. 9), the supply voltage also remains almost constant (Fig. 10). This is especially noticeable for hearing aids with analog components. The lowest supply voltages of 0.55 V to 0.8 V are used in digital hearing aid designs. The hearing aid implementations that employ undervoltage techniques through dynamic voltage scaling and use voltages close to the threshold voltage are listed in Table 4.
5 Power Consumption
The average power consumption determines the battery life of a hearing aid. During normal operation, all components of the hearing aid processing system are usually constantly active. The average power consumption of the hearing aid implementations is shown in Fig. 11. The computational complexity of the algorithms determines, among other things, the power consumption. The lowest average power consumption achieved among the given implementations is 10 μW. The hearing aids [74] and [51] consume this power for an adaptive signal-to-noise ratio (SNR) monitor based on envelope detection and for adaptive FIR and IIR filter calculations. On the other hand, when targeting hearables or smart headphones instead of hearing aid devices, deep-learning-based noise reduction techniques require an average power consumption of up to 4 mW [54]. Hard-wired architectures offer comparatively low power consumption compared to ASIP architectures. The power distribution among the hardware components of the mixed-signal hearing aid [5] is 36% for the analog front end, 39% for the digital signal processor (DSP), 11% for the power-on-reset circuit, and 13% for the remaining components. The digital signal processor of the hearing aid presented in [9], on the other hand, consumes up to 71%, while the analog parts consume the remaining 29%.
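The link between average power and battery life is simple arithmetic, sketched below. The cell parameters are assumed, typical values for a size-312 zinc-air cell (roughly 160 mAh at 1.4 V), not figures from any surveyed paper:

```python
def battery_life_hours(capacity_mah, cell_voltage_v, avg_power_mw):
    """Battery life follows from stored energy divided by average draw:
    (mAh * V) gives mWh, and mWh / mW gives hours."""
    return capacity_mah * cell_voltage_v / avg_power_mw

# Assumed, typical values for a size-312 zinc-air cell: 160 mAh at 1.4 V.
life_at_1mw = battery_life_hours(160, 1.4, 1.0)  # about nine days
life_at_4mw = battery_life_hours(160, 1.4, 4.0)  # just over two days
```

Under these assumptions, a design at 1 mW lasts roughly four times as long as a 4 mW hearable-class design, which illustrates why the few-milliwatt budget is so strict.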
6 Silicon Area
The silicon area of each hearing aid is shown in Fig. 12. The analog front-end or wireless connection modules, which are not part of every hearing aid, require additional silicon area, which must be considered when comparing implementations. The area distribution for the mixed-signal hearing aid presented in [9, 25] is 30% for the analog and 70% for the digital part. The digital part consists of a 24-bit application-specific instruction-set processor and five dedicated accelerators. The analog part consists of an audio front end with a programmable gain amplifier (PGA), an analog-to-digital converter (ADC), and a class-D amplifier for the pulse density modulation (PDM) output. The total size is 9.50 mm2, which is the maximum chip size since 2004. The analog hearing aid presented in [70], which is manufactured using a 0.13 μm and a 0.35 μm technology, requires 66% of the area for the automatic gain control, 15% for the driver, and 20% for the filter circuit. The wireless control part of the analog hearing aid presented in [12], which is based on a dual-tone multi-frequency (DTMF) receiver, occupies 1.16×4.6 mm2, which is 16% of the total chip size of 5.7×4.9 mm2. The silicon area of a hearing aid may be pad-limited; as a result, the total area is larger than effectively required for the digital or analog core parts. This is the case for the second largest ASIP-based hearing aid system in this study, which does not include an analog front-end [48]. Its size is 20 mm2.
7 Operating Clock Frequency
The required operating clock frequency depends on the computational complexity of the hearing aid algorithms and the architecture-dependent processing power of the digital signal processing system (Fig. 13). Most hard-wired hearing aids operate at comparatively low operating clock frequencies between 0.032 MHz and 8 MHz. The processing is sample-based, i.e., each processing unit or component, like a digital filter or amplifier, processes one sample per clock cycle. In [71, 72], a more computationally intensive sample-based processing is applied, using a noise reduction algorithm based on multiband spectral subtraction and an enhanced-entropy voice activity detection. The audio samples are stored in local ping-pong buffers and processed sequentially for each sub-band at a clock frequency of 3 MHz to 8 MHz for the various processing blocks. Digital hearing aids with an application-specific instruction-set processor as the central processing unit require on the order of a thousand instructions to process the algorithms. An implementation of a related noise reduction algorithm (mband) on an ASIP with hardware accelerators [33] needs 2176 cycles for the computation. Parallelism at the data or instruction level, or application-specific instructions [4, 5, 9, 19, 25, 33, 35, 47, 51, 56], can reduce the clock frequency requirement. Accelerators are used for computing-intensive tasks where a pure software implementation on an ASIP is not feasible.
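The cycle count translates into a minimum clock frequency through the real-time constraint, as the following sketch shows. The 2176-cycle figure is the mband count from [33]; the frame rate used below (16 kHz sampling with a 16-sample hop) is a hypothetical assumption for illustration only:

```python
def required_clock_mhz(cycles_per_frame, frames_per_second):
    """Real-time constraint: each frame's cycles must fit into one
    frame period, so f_clk >= cycles_per_frame * frame_rate."""
    return cycles_per_frame * frames_per_second / 1e6

# 2176 cycles for the mband noise reduction [33]; the frame rate below
# (16 kHz sampling with a hypothetical 16-sample hop) is an assumption.
min_clock = required_clock_mhz(2176, 16000 / 16)
```

Doubling the per-frame parallelism halves the cycle count and, via this relation, the required clock frequency, which is the lever the cited architectures pull.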
8 Audio Datapath Width
All digital hearing aids presented in this survey use fixed-point hardware architectures for signal processing, due to the lower hardware cost in terms of area and power compared to floating-point hardware [35]. The audio datapath width of the fixed-point data, i.e., the number of bits per audio sample, is a crucial parameter for the design and implementation of hearing aids, as it determines the maximum achievable signal-to-noise ratio (SNR). A high SNR value is a strict requirement for hearing aids [5, 31]. Each additional datapath bit increases the SNR by about 6 dB. However, this parameter also affects the area, power consumption, and processing performance of all components in the processing chain: digital processing blocks, memories, ADCs, and DACs [4, 19, 25, 56, 62]. The authors of [41] present a word-length optimization to reduce the area and power of their MAC unit accelerator. They propose to optimize the number of bits based on the results of short-time objective intelligibility (STOI) measurements. Alternatively, signal-to-noise ratio (SNR) measurements are used in [33]. In [56], a 16-bit processor is extended with specific functional units that use 32-bit and 40-bit intermediate results to improve the fixed-point accuracy. Two separate processors are used in [62]: the 32-bit Arm Cortex-M3 processor is used for debugging and wireless connectivity, and the 24-bit ASIP processes the audio samples. In Table 5, a comparison of architectures implementing an audio datapath with a fixed width is given. Most designs have a datapath width of 16 bits for both the digital and the analog parts. The datapath width can be switched in some ASIP-based architectures, which are listed in Table 6. This is possible by using different execution units with different datapath widths, micro-SIMD (single instruction, multiple data) subword modes [39], or specialized accelerators.
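The roughly 6 dB-per-bit rule can be verified with a small quantization experiment. This is a generic numerical sketch (round-to-nearest quantization of a near-full-scale tone), not a measurement from any surveyed chip:

```python
import numpy as np

def quantization_snr_db(signal, bits):
    """Quantize a [-1, 1) signal to the given word length (round to
    nearest) and return the resulting signal-to-noise ratio in dB."""
    step = 2.0 ** (1 - bits)
    quantized = np.round(signal / step) * step
    noise = signal - quantized
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

# Near-full-scale test tone; each extra bit buys roughly 6 dB of SNR.
t = np.arange(4096) / 4096.0
tone = 0.999 * np.sin(2.0 * np.pi * 31.0 * t)
gain_per_bit = quantization_snr_db(tone, 16) - quantization_snr_db(tone, 15)
```

This is also why word-length optimizations such as [41] search for the smallest width that still meets the SNR or STOI target: every removed bit cuts area and power across the entire datapath.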
To take advantage of the increased dynamic range of floating-point data types, the architectures listed in Table 7 add hardware support for floating-point processing. The approaches used are block floating-point, static floating-point, or emulated floating-point.
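The block floating-point idea can be sketched as follows. This is a simplified reference model, not the format of any listed architecture: all samples of a block share one exponent chosen from the block peak, so the datapath still operates on fixed-point mantissas while the dynamic range follows the signal level (boundary handling for exact power-of-two peaks is simplified here).

```python
import numpy as np

def to_block_float(block, mantissa_bits=16):
    """Encode a block of samples as fixed-point mantissas sharing one
    exponent, chosen so the block peak fits the mantissa word length
    (boundary handling is simplified for illustration)."""
    peak = np.max(np.abs(block))
    exponent = int(np.ceil(np.log2(peak))) if peak > 0 else 0
    scale = 2.0 ** (mantissa_bits - 1 - exponent)
    return np.round(block * scale).astype(np.int64), exponent

def from_block_float(mantissas, exponent, mantissa_bits=16):
    """Decode mantissas back to floating-point values."""
    return mantissas / 2.0 ** (mantissa_bits - 1 - exponent)

x = np.array([0.75, -0.5, 0.125, 3.0])
m, e = to_block_float(x)    # one shared exponent covers the 3.0 peak
y = from_block_float(m, e)  # exact round trip for these sample values
```

The cost of the shared exponent is visible in the example: the small sample 0.125 is represented with fewer effective mantissa bits than it would get with a per-sample exponent, which is the precision trade-off block floating-point accepts for fixed-point hardware cost.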
9 Memory in Hearing Aid Systems
Due to strict power and area restrictions, on-chip memory is the only implementation option for the hearing aids listed in Table 8. On-chip area is limited, and the memory size is critical to the overall size of the chip. The area of the SRAM macros of the mixed-signal hearing aid presented in [9] is 1.35 mm2. Compared to the logic size of 5.39 mm2 and the analog size of 2.77 mm2, the SRAM area is 14% of the total chip size in a 130 nm ASIC technology. The memory size depends on the complexity and type of the audio processing algorithms. Algorithms with comparably high memory requirements are those based on trained models or data. Among those are localization algorithms [46, 60] and deep-learning-based speech enhancement and speech recognition algorithms [37, 40, 44, 52]. As an example, the Gaussian mixture model (GMM) of the localization algorithm requires about 90% of the total memory requirement of this algorithm [46, 60]; in this case, 44,400 of 48,816 words are required for the trained model alone. Another example is the hearing aid with the highest amount of on-chip memory, which is designed for computing-intensive tasks such as neural networks for speech enhancement [40]. The hearing aid with the least amount of on-chip memory is designed for IIR filters [49, 50].
10 Conclusion and Future Trends
In this survey, the state-of-the-art processor architectures for hearing aids are presented. Among these architectures are analog, mixed-signal, and digital processors. The main focus is on application-specific instruction-set processors (ASIPs), which are compared to dedicated hardware architectures and hearing aid systems with hardware accelerators. Trends for the ASIC technologies, average power consumption, silicon area, and operating clock frequencies are presented. There is a clear trend towards more flexibility and growing complexity of the algorithms. Especially deep-neural-network-based speech enhancement and binaural processing algorithms for sound source localization are of current interest. These algorithms with higher processing performance requirements have to be computed under the same strict constraints on power consumption and chip area.
References
Benesty, J., & Huang, Y. (2013). Adaptive signal processing: Applications to Real-World problems. Signals and communication technology. Berlin: Springer.
Berkeley Design Technology, Inc.: ON Semiconductor’s Hearing Aid SoCs: Distributed Performance That’s Easy on Batteries (2014). https://www.bdti.com/InsideDSP/2014/10/16/ONSemi.
Carbognani, F., Burgin, F., Henzen, L., Koch, H., Magdassian, H., Pedretti, C., Kaeslin, H., Felber, N., & Fichtner, W. (2005). A 0.67-mm2 45-μW DSP VLSI implementation of an adaptive directional microphone for hearing aids. In Proceedings of the 2005 European conference on circuit theory and design, 2005. https://doi.org/10.1109/ECCTD.2005.1523080, (Vol. 3 pp. III/141–III/144).
Chang, K.C., Chen, Y.W., Kuo, Y.T., & Liu, C.W. (2012). A low power hearing aid computing platform using lightweight processing elements. In 2012 IEEE international symposium on Circuits and systems (ISCAS). https://doi.org/10.1109/ISCAS.2012.6271888 (pp. 2785–2788): IEEE.
Chen, C., & Chen, L. (2018). A 79-dB SNR 1.1-mW Fully Integrated Hearing Aid SoC. Circuits, Systems, and Signal Processing, pp. 1–17. https://doi.org/10.1007/s00034-018-1002-6.
Chen, C., Chen, L., Fan, J., Yu, Z., Yang, J., Hu, X., Hei, Y., & Zhang, F. (2016). A 1V, 1.1 mW mixed-signal hearing aid SoC in 0.13 um CMOS process. In 2016 IEEE international symposium on circuits and systems (ISCAS). https://doi.org/10.1109/ISCAS.2016.7527211 (pp. 225–228): IEEE.
Chen, J., Benesty, J., Huang, Y., & Doclo, S. (2006). New insights into the noise reduction Wiener filter. IEEE Transactions on Audio, Speech, and Language Processing, 14(4), 1218–1234. https://doi.org/10.1109/TSA.2005.860851.
Chen, J., & Moore, B.C. (2013). Effect of individually tailored spectral change enhancement on speech intelligibility and quality for hearing-impaired listeners. In 2013 IEEE International conference on acoustics, speech and signal processing. https://doi.org/10.1109/ICASSP.2013.6639353 (pp. 8643–8647): IEEE.
Chen, L.M., Yu, Z.H., Chen, C.Y., Hu, X.Y., Fan, J., Yang, J., & Hei, Y. (2015). A 1-V, 1.2-mA fully integrated SoC for digital hearing aids. Microelectronics Journal, 46(1), 12–19. https://doi.org/10.1016/j.mejo.2014.09.013.
Chi, H.F., Gao, S.X., & Soli, S.D. (1999). A novel approach of adaptive feedback cancellation for hearing aids. In 1999 IEEE International symposium on circuits and systems (ISCAS). https://doi.org/10.1109/ISCAS.1999.778818, (Vol. 3 pp. 195–198): IEEE.
Doclo, S. (2010). Distributed Microphone Array Signal Processing for Hearing Aids. Tech. rep., Signal Processing Group, University of Oldenburg.
Duque-Carrillo, J.F., Malcovati, P., Maloberti, F., Pérez-Aloe, R., Reyes, A.H., Sánchez-Sinencio, E., Torelli, G., & Valverde, J.M. (1996). VERDI: an acoustically programmable and adjustable CMOS mixed-mode signal processor for hearing aid applications. IEEE Journal of Solid-State Circuits, 31(5), 634–645. https://doi.org/10.1109/4.509846.
El-Maleh, K., & Kabal, P. (1997). Comparison of voice activity detection algorithms for wireless personal communications systems. In CCECE’97. Canadian Conference on electrical and computer engineering. Engineering innovation: Voyage of discovery. Conference proceedings. https://doi.org/10.1109/CCECE.1997.608260, (Vol. 2 pp. 470–473): IEEE.
Ephraim, Y., & Malah, D. (1984). Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(6), 1109–1121. https://doi.org/10.1109/TASSP.1984.1164453.
Ephraim, Y., & Malah, D. (1985). Speech enhancement using a minimum mean-square error log-spectral amplitude estimator. IEEE Transactions on Acoustics, Speech, and Signal Processing, 33(2), 443–445. https://doi.org/10.1109/TASSP.1985.1164550.
Farhang-Boroujeny, B., & Wang, Z. (1997). Adaptive filtering in subbands: Design issues and experimental results for acoustic echo cancellation. Signal Processing, 61(3), 213–223. https://doi.org/10.1016/S0165-1684(97)00106-0.
Gata, D., Sjursen, W., Hochschild, J., Fattaruso, J., Fang, L., Iannelli, G., Jiang, Z., Branch, C., Holmes, J., Skorez, M., et al. (2003). A 1.1-V 270-μA mixed-signal hearing aid chip. IEEE Journal of Solid-State Circuits, 38(4), 683-683. https://doi.org/10.1109/JSSC.2002.804328.
Gerlach, L., Payá-Vayá, G., & Blume, H. (2016). Efficient emulation of Floating-Point arithmetic on Fixed-Point SIMD processors. In Signal processing systems (siPS), 2016 IEEE international workshop on. https://doi.org/10.1109/SiPS.2016.52 (pp. 254–259): IEEE.
Gerlach, L., Payá-Vayá, G., & Blume, H. (2019). KAVUAKA: A low power application specific hearing aid processor. In 2019 IFIP/IEEE 27Th international conference on very large scale integration (VLSI-soc). https://doi.org/10.1109/VLSI-SoC.2019.8920354 (pp. 99–104).
Griffiths, L., & Jim, C. (1982). An alternative approach to linearly constrained adaptive beamforming. IEEE Transactions on Antennas and Propagation, 30(1), 27–34. https://doi.org/10.1109/TAP.1982.1142739.
Hamacher, V., Chalupper, J., Eggers, J., Fischer, E., Kornagel, U., Puder, H., & Rass, U. (2005). Signal processing in High-End hearing aids: State of the art, challenges, and future trends. EURASIP Journal on Applied Signal Processing, 2005, 2915–2929. https://doi.org/10.1155/ASP.2005.2915.
Hamacher, V., Fischer, E., Kornagel, U., & Puder, H. (2006) In Hansler, E., & Schmidt, G. (Eds.), Applications of adaptive signal processing methods in high-end hearing aids. Topics in Acoustic Echo and Noise Control, (pp. 599–636). Berlin: Springer. https://doi.org/10.1007/3-540-33213-8_15.
Hirsch, H.G., & Ehrlicher, C. (1995). Noise estimation techniques for robust speech recognition. In 1995 International conference on acoustics, speech, and signal processing. https://doi.org/10.1109/ICASSP.1995.479387, (Vol. 1 pp. 153–156): IEEE.
Jia, C., & Xu, B. (2002). An improved entropy-based endpoint detection algorithm. In International symposium on chinese spoken language processing.
Jia, Y., Liming, C., Zenghui, Y., & Yong, H. (2014). A sub-milliwatt audio-processing platform for digital hearing aids. Journal of Semiconductors, 35(7), 075008.
Kękol, K., & Kostek, B. (2016). A study on signal processing methods applied to hearing aids. In 2016 Signal processing: algorithms, Architectures, Arrangements, and Applications (SPA). https://doi.org/10.1109/SPA.2016.7763616 (pp. 219–224): IEEE.
Kamath, S., & Loizou, P. (2002). A multi-band spectral subtraction method for enhancing speech corrupted by colored noise. In ICASSP, (Vol. 4 pp. 44164–44164): Citeseer.
Kates, J. (2008). Digital hearing aids. Plural Publishing, Incorporated.
Kates, J.M. (2005). Principles of digital dynamic-range compression. Trends in Amplification, 9 (2), 45–76. https://doi.org/10.1177/108471380500900202.
Kim, S., Cho, N., Song, S., Kim, D., Kim, K., & Yoo, H. (2006). A 0.9-V 96-μW digital hearing aid chip with heterogeneous Σ-Δ DAC. In 2006 Symposium on VLSI circuits, 2006. Digest of technical papers (pp. 55–56).
Kim, S., Cho, N., Song, S.J., & Yoo, H.J. (2007). A 0.9 V 96 μW fully operational digital hearing aid chip. IEEE Journal of Solid-State Circuits, 42(11), 2432–2440. https://doi.org/10.1109/JSSC.2007.907198.
Kim, S., Lee, S.J., Cho, N., Song, S.J., & Yoo, H.J. (2008). A fully integrated digital hearing aid chip with human factors considerations. IEEE Journal of Solid-State Circuits, 43(1), 266–274. https://doi.org/10.1109/ISSCC.2007.373634.
Kim, S.W., Kim, M.J., & Kim, J.S. (2019). High-performance DSP platform for digital hearing aid SoC with flexible noise estimation. IET Circuits Devices & Systems, 13(5), 717–722. https://doi.org/10.1049/iet-cds.2018.5374.
Klootsema, R., Nys, O., Vandel, E., Aebischer, D., Vaucher, P., Hautier, O., Bratschi, P., Bauduin, F., Van Oerle, G., Jakob, A., & et al. (2000). Battery supplied low power analog-digital front-end for audio applications. In Proceedings of the 26th European solid-state circuits conference (pp. 118–121): IEEE.
Ku, Y., Sohn, J., Han, J., Baek, Y., & Kim, D. (2013). A high performance hearing aid system with fully programmable ultra low power DSP. In 2013 IEEE international conference on consumer electronics (ICCE). https://doi.org/10.1109/ICCE.2013.6486925 (pp. 352–353).
Kuo, Y.T., Lin, T.J., Chang, W.H., Li, Y.T., Liu, C.W., & Young, S.T. (2008). Complexity-effective auditory compensation for digital hearing aids. In 2008 IEEE International symposium on circuits and systems. https://doi.org/10.1109/ISCAS.2008.4541707 (pp. 1472–1475): IEEE.
Lai, C.Y., Lo, Y.W., Shen, Y.L., & Chi, T.S. (2017). Plastic multi-resolution auditory model based neural network for speech enhancement. In 2017 Asia-pacific signal and information processing association annual summit and conference (APSIPA ASC). https://doi.org/10.1109/APSIPA.2017.8282096 (pp. 605–609): IEEE.
Lee, J., Kim, J., & Yoon, G. (2002). Digital envelope detector for blood pressure measurement using an oscillometric method. Journal of Medical Engineering & Technology, 26(3), 117–122. https://doi.org/10.1080/03091900210124422.
Lee, R.B. (1998). Efficiency of microSIMD architectures and index-mapped data for media processors. In Electronic imaging’99. https://doi.org/10.1117/12.334770 (pp. 34–46): International Society for Optics and Photonics.
Lee, Y.C., Chi, T.S., & Yang, C.H. (2020). A 2.17-mW Acoustic DSP Processor With CNN-FFT Accelerators for Intelligent Hearing Assistive Devices. IEEE Journal of Solid-State Circuits. https://doi.org/10.1109/JSSC.2020.2987695.
Lin, Y.J., Lee, Y.C., Liu, H.M., Chiueh, H., Chi, T.S., & Yang, C.H. (2020). A 1.5 mW Programmable Acoustic Signal Processor for Hearing Assistive Devices With Speech Intelligibility Enhancement. IEEE Transactions on Circuits and Systems I: Regular Papers. https://doi.org/10.1109/TCSI.2020.3001160.
Loizou, P. (2013). Speech enhancement: Theory and Practice, Second Edition. Taylor & Francis.
Luo, F.L., Yang, J., Pavlovic, C., & Nehorai, A. (2002). Adaptive Null-Forming scheme in digital hearing aids. IEEE Transactions on Signal Processing, 50(7), 1583–1590. https://doi.org/10.1109/TSP.2002.1011199.
Martinez, A.M.C., Gerlach, L., Payá-Vayá, G., Hermansky, H., Ooster, J., & Meyer, B.T. (2019). DNN-based performance measures for predicting error rates in automatic speech recognition and optimizing hearing aid parameters. Speech Communication, 106, 44–56. https://doi.org/10.1016/j.specom.2018.11.006.
Martinez, J.S., Bustos, S.S., Suñer, J.S., Hernández, R.R., & Schellenberg, M. (1999). A CMOS hearing aid device. Analog Integrated Circuits and Signal Processing, 21, 163–172. https://doi.org/10.1023/A:1008373824380.
May, T., Van De Par, S., & Kohlrausch, A. (2011). A probabilistic model for robust localization based on a binaural auditory front-end. IEEE Transactions on Audio, Speech, and Language Processing, 19(1), 1–13. https://doi.org/10.1109/TASL.2010.2042128.
Moller, F., Bisgaard, N., & Melanson, J. (1999). Algorithm and architecture of a 1 V low power hearing instrument DSP. In Proceedings. 1999 international symposium on low power electronics and design (cat. no. 99TH8477) (pp. 7–11): IEEE.
Mosch, P., van Oerle, G., Menzl, S., Rougnon-Glasson, N., Van Nieuwenhove, K., & Wezelenburg, M. (2000). A 660-μ w 50-Mops 1-V DSP for a Hearing Aid Chip Set. IEEE Journal of Solid-State Circuits, 35(11), 1705–1712. https://doi.org/10.1109/4.881218.
Neuteboom, H., Janssens, M., Leenen, J., Kup, B., Dijkmans, E., De Koning, B., Frowijn, V., De Bleecker, R., Van der Zwan, E., Note, S., & et al. (1997). A single battery, 0.9 V operated digital sound processing IC including AD/DA and IR receiver with 2 mW power consumption. In 1997 IEEE International solids-state circuits conference. Digest of technical papers. https://doi.org/10.1109/ISSCC.1997.585279 (pp. 98–99): IEEE.
Neuteboom, H., Kup, B.M., & Janssens, M. (1997). A DSP-based hearing instrument IC. IEEE Journal of Solid-State Circuits, 32(11), 1790–1806. https://doi.org/10.1109/4.641702.
Paker, O., Sparso, J., Haandbæk, N., Isager, M., & Nielsen, L.S. (2001). A heterogeneous multiprocessor architecture for low-power audio signal processing applications. In VLSI, 2001. Proceedings. IEEE computer society workshop on. https://doi.org/10.1109/IWV.2001.923139 (pp. 47–53): IEEE.
Park, S.R., & Lee, J. (2016). A fully convolutional neural network for speech enhancement. arXiv preprint arXiv:1609.07132.
Pu, Y., Butterfield, D., Garcia, J., Xie, J., Lin, M., Sauhta, R., Farley, R., Shellhammer, S., Derkalousdian, M., Newham, A., & et al. (2018). An Ultra-low-power 28nm CMOS Dual-die ASIC platform for smart hearables. In 2018 IEEE Biomedical circuits and systems conference (bioCAS). https://doi.org/10.1109/BIOCAS.2018.8584806 (pp. 1–4): IEEE.
Pu, Y., Shi, C., Samson, G., Park, D., Easton, K., Beraha, R., Newham, A., Lin, M., Rangan, V., Chatha, K., & et al. (2018). A 9-mm2 ultra-low-power highly integrated 28-nm CMOS SoC for Internet of Things. IEEE Journal of Solid-State Circuits, 53 (3), 936–948. https://doi.org/10.1109/JSSC.2017.2783680.
Puder, H. (2009). Hearing aids: an overview of the state-of-the-art, challenges, and future trends of an interesting audio signal processing application. In Proceedings of 6th international symposium on image and signal processing and analysis, 2009. ISPA 2009. https://doi.org/10.1109/ISPA.2009.5297793 (pp. 1–6): IEEE.
Qiao, P., Corporaal, H., & Lindwer, M. (2011). A 0.964 mW digital hearing aid system. In Design, automation & test in europe conference & exhibition (DATE). https://doi.org/10.1109/DATE.2011.5763297 (pp. 1–4): IEEE.
Rehr, R., & Gerkmann, T. (2020). SNR-Based Features and Diverse Training Data for Robust DNN-Based Speech Enhancement. arXiv preprint arXiv:2004.03512.
Ris, C., & Dupont, S. (2001). Assessing local noise level estimation methods: Application to noise robust ASR. Speech Communication, 34(1-2), 141–158. https://doi.org/10.1016/S0167-6393(00)00051-0.
Scalart, P., & et al. (1996). Speech enhancement based on a priori signal to noise estimation. In 1996 IEEE International conference on acoustics, speech, and signal processing conference proceedings. https://doi.org/10.1109/ICASSP.1996.543199, (Vol. 2 pp. 629–632): IEEE.
Seifert, C., Thiemann, J., Gerlach, L., Volkmar, T., Payá-Vayá, G., Blume, H., & van de Par, S. (2017). Real-time implementation of a GMM-based binaural localization algorithm on a VLIW-SIMD processor. https://doi.org/10.1109/ICME.2017.8019478.
Semiconductor Components Industries, LLC: Solving The hearing aid platform puzzle. Tech Rep. (2014). https://www.onsemi.com/pub/Collateral/TND6092-D.PDF.
Semiconductor Components Industries, LLC: EZAIRO 7111 HYBRID: Audio Processor for Digital Hearing Aids (2018). https://www.onsemi.com/pub/Collateral/E7111-D.PDF.
Semiconductor Components Industries, LLC: Wireless-Enabled Audio Processor for Hearing Aids (2018). https://www.onsemi.com/pub/Collateral/E7150-D.PDF.
Serra-Graells, F., Gomez, L., & Huertas, J.L. (2004). A True-1-V 300-μ w CMOS-subthreshold Log-Domain Hearing-Aid-On-Chip. IEEE Journal of Solid-State Circuits, 39(8), 1271–1281. https://doi.org/10.1109/JSSC.2004.831469.
Shaer, L., Nahlus, I., Merhi, J., Kayssi, A., & Chehab, A. (2013). Low-power digital signal processor design for a hearing aid. In 2013 4Th annual international conference on energy aware computing systems and applications (ICEAC). https://doi.org/10.1109/ICEAC.2013.6737634 (pp. 40–44): IEEE.
Sjursen, W.P. Hearing aid digital filter (2001). US Patent 6,292,571.
Tammen, M., Fischer, D., Meyer, B.T., & Doclo, S. (2020). DNN-based speech presence probability estimation for multi-frame single-microphone speech enhancement. In ICASSP 2020-2020 IEEE International conference on acoustics, speech and signal processing (ICASSP). https://doi.org/10.1109/ICASSP40776.2020.9054196 (pp. 191–195): IEEE.
Teutsch, H., & Elko, G.W. (2001). First-and second-order adaptive differential microphone arrays. In Proc. IWAENC, vol. 1. Citeseer.
Varzandeh, R., Adiloğlu, K., Doclo, S., & Hohmann, V. (2020). Exploiting periodicity features for joint detection and DOA estimation of speech sources using convolutional neural networks. In ICASSP 2020-2020 IEEE International conference on acoustics, speech and signal processing (ICASSP). https://doi.org/10.1109/ICASSP40776.2020.9054754 (pp. 566–570): IEEE.
Wang, X., Yang, H., Li, F., Yin, T., Huang, G., & Liu, F. (2014). A programmable analog hearing aid system-on-chip with frequency compensation. Analog Integrated Circuits and Signal Processing, 79 (2), 227–236. https://doi.org/10.1007/s10470-014-0264-6.
Wei, C.W., Kuo, Y.T., Chang, K.C., Tsai, C.C., Lin, J.Y., FanJiang, Y., Tu, M.H., Liu, C.W., Chang, T.S., & Jou, S.J. (2010). A low-power Mandarin-specific hearing aid chip. In 2010 IEEE Asian solid-state circuits conference. https://doi.org/10.1109/ASSCC.2010.5716623 (pp. 1–4): IEEE.
Wei, C.W., Tsai, C.C., FanJiang, Y., Chang, T.S., & Jou, S.J. (2014). Analysis and implementation of low-power perceptual multiband noise reduction for the hearing aids application. IET Circuits Devices & Systems, 8(6), 516–525. https://doi.org/10.1049/iet-cds.2013.0326.
Westerlund, N., Dahl, M., & Claesson, I. (2005). Speech enhancement for personal communication using an adaptive gain equalizer. Signal Processing, 85(6), 1089–1101. https://doi.org/10.1016/j.sigpro.2005.01.004.
Yoo, J., Kim, S., Cho, N., Song, S.J., & Yoo, H.J. (2006). A 10-μ w digital signal processor with adaptive-SNR monitoring for a sub-1 V digital hearing aid. In IEEE International symposium on circuits and systems.
Acknowledgements
This work was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft) under Germany’s Excellence Strategy – EXC 2177/1 – Project ID 390895286.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article belongs to the Topical Collection: Survey Papers
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Gerlach, L., Payá-Vayá, G. & Blume, H. A Survey on Application Specific Processor Architectures for Digital Hearing Aids. J Sign Process Syst 94, 1293–1308 (2022). https://doi.org/10.1007/s11265-021-01648-0