Sensor arrays have long been useful tools for radar-related applications and radio astronomy, as well as for indoor localization systems and environmental monitoring. Over the past decades, many systems composed of microphone arrays have been developed and evaluated for sound source localization, separation and amplification in difficult acoustic environments. Large microphone arrays like [1] were built in the early 2000s to evaluate different algorithms for speech enhancement targeting conference rooms. Small microphone arrays are nowadays ubiquitous in consumer devices such as laptops, webcams and smartphones as an aid for speech recognition [4], owing to the improved recognition accuracy compared to a single omnidirectional microphone [5]. Multimodal computing applications such as person tracking systems, hearing aid systems or robot-related applications also benefit from the use of microphone arrays. However, the computational demand of such applications is not always satisfied when targeting real-time operation or large microphone arrays.
A Field-Programmable Gate Array (FPGA) is a semiconductor device providing thousands of programmable logic blocks and specific-operation blocks. FPGAs can be reprogrammed to implement the hardware description of the desired application or functionality. Customized designs built with this technology achieve low latency, performance acceleration and high power efficiency, which makes them very suitable as hardware accelerators for applications using a microphone array. FPGAs are well suited for high-speed data acquisition, for the parallel processing of sampled data streams and for accelerating audio streaming applications using microphone arrays to achieve real-time performance. Moreover, the multiple I/Os that FPGAs offer facilitate the interfacing of microphone arrays with a relatively large number of microphones. As a result, FPGAs have displaced Digital Signal Processors (DSPs) for acoustic applications involving microphone arrays in the recent past.
FPGA technology has disruptive characteristics which are changing the way microphone arrays are used for certain acoustic applications. The latest FPGA-based architectures are capable of fully embedding all the computational tasks demanded by highly constrained acoustic imaging applications. Nevertheless, many FPGA-based architectures must solve similar challenges when interfacing and processing multiple data streams from large microphone arrays. Moreover, different FPGA-based architectures have been proposed for sound source localization. Here, a review of how FPGAs are used for applications based on microphone arrays is presented. This survey intends to present the most relevant uses of FPGAs combined with microphone arrays. The challenges, trends and potential uses that FPGA technology has for acoustic applications based on microphone arrays are also presented and discussed.
An introduction to microphone arrays, detailing the most common types of microphones and explaining the type of processing done with microphone arrays, is presented in Section 2, followed by a brief explanation of FPGA technology in Section 3. A categorization of the state of the art based on the role of the FPGA when processing the signal data coming from the microphone arrays is proposed in Section 4. Such an overview provides a perspective on how FPGAs have been adopted for the different types of applications involving microphone arrays. An analysis of advanced FPGA-based architectures of acoustic beamformers is presented in Section 5. This analysis focuses on how existing beamforming techniques are implemented on FPGAs, the level of integration on the FPGA and how the selection of the microphone affects the architecture. Moreover, the reasons for the increasing adoption of FPGAs are further discussed in Section 6, providing an overview of the recent trends. The current challenges and research opportunities are discussed in Section 7. Finally, the conclusions are drawn in Section 8.
2. Microphone Arrays
Microphone arrays have advanced together with recent developments in microphone technology. Large microphone arrays like [2], composed of hundreds of Electret Condenser Microphones (ECMs), have evolved into compact microphone arrays composed of Micro-Electro-Mechanical Systems (MEMS) microphones to be integrated in smartphones, tablets or voice assistants such as Amazon's Alexa [6]. Microphone arrays have been used in hearing aids [7], for echo cancellation [8], in ambient assisted living [9], in automotive applications [10], for biometrical systems [12], for acoustic environmental monitoring [14], for detection and tracking of Unmanned Aerial Vehicles (UAVs) [15], for rescue activities using UAVs [17], for speech enhancement [5] and in many other applications [19]. The miniaturization of the packaging while preserving the quality of the microphones' response has led to compact microphone arrays and created opportunities for new acoustic applications.
2.1. Types of Microphones
The most popular types of microphones used to build microphone arrays are briefly described here. Microphones can be grouped based on their transducer principle, that is, how the microphone converts the acoustic vibration into an electrical signal. There are several types of transducers, such as condenser or capacitor microphones, dynamic microphones and piezoelectric microphones. Among all the variety of available microphones, two main categories have predominated when building microphone arrays: ECMs and MEMS microphones (Figure 1).
2.1.1. ECMs
ECMs are a type of condenser microphone composed of conductive electrodes on different plates, one of which is a moveable diaphragm. One of the plates is made of a stable dielectric material called electret, which holds a permanent electric charge and eliminates the need for a polarizing power supply. ECMs only require a power supply to power an integrated preamplifier. The capacitance of the parallel-plate capacitor changes when the distance between the two plates varies as the sound wave strikes the surface of the moveable diaphragm. ECMs cover the whole acoustic frequency range and present low distortion in the signal transmission, since the capacitance varies due to an electromechanical mechanism. The relatively small package of ECMs made them the preferred choice to build the first large microphone arrays like [1]. The output format of ECMs composing microphone arrays has traditionally been analog. The output impedance ranges from a few hundred to several thousand ohms [20], which must be considered when selecting the codec. In the case of ECMs, this impedance is determined by the value of the load resistance, with a corresponding change in sensitivity [21].
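This loading effect can be illustrated with a simple voltage divider (an illustrative calculation with assumed values, not taken from [20,21]): an ECM with an output impedance $R_{out} = 2.2\,\mathrm{k}\Omega$ driving a codec input of $R_{load} = 10\,\mathrm{k}\Omega$ attenuates the signal by

$$ 20 \log_{10}\!\left(\frac{R_{load}}{R_{out} + R_{load}}\right) = 20 \log_{10}\!\left(\frac{10}{12.2}\right) \approx -1.7\,\mathrm{dB}, $$

which explains why the codec's input impedance must be chosen with the microphone's output impedance in mind.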
2.1.2. MEMS Microphones
A MEMS microphone is a miniature microphone, usually in the form of a surface-mount device, that uses a miniature pressure-sensitive diaphragm to sense sound waves. Similarly to ECMs, the displacement of the diaphragm directly determines the capacitance. This diaphragm is produced by surface micromachining of polysilicon on a silicon substrate or etched on a semiconductor using standard Complementary Metal Oxide Semiconductor (CMOS) processes [22]. Since MEMS microphones share the fabrication technologies used to make integrated circuits, they include significant amounts of integrated circuitry for signal conditioning and other functions within the same package as the sensor.
MEMS microphones are categorized based on the type of output, which can be analog or digital [23]. Analog MEMS microphones present an output impedance of a few hundred ohms and an offset DC voltage between ground and the supply voltage. Although this offset avoids clipping the peaks of the highest-amplitude output signals, it also leads to a high-pass filter effect which might attenuate low frequencies of interest. Regarding the output impedance, a possible solution to avoid attenuation at the output side is the use of programmable gain amplifiers at the codec side.
Because MEMS microphones are produced on a silicon substrate, a clear benefit of digital MEMS microphones is the easy integration of the transducer element together with an amplifier and an Analog-to-Digital Converter (ADC) in the same die or in the package of the microphone. As a result, digital MEMS microphones drastically reduce the circuitry required to interface with the digital signal processing unit. By integrating all the ADC circuitry into the microphone's package, digital MEMS microphones also provide advantages in the design phase: when using analog microphones, each new design iteration requires adaptations of the signal conditioning circuitry.
The encoded output format of digital MEMS microphones is either Pulse Density Modulation (PDM) or the Inter-IC Sound (I2S) output interface. PDM represents an oversampled 1-bit audio signal with low noise. The data from two PDM microphones can share the same data line by transferring on opposite edges of a shared clock, guaranteeing their synchronization. The same principle can be applied to microphone arrays, where multiple digital MEMS microphones can be synchronized by using the same clock. The synchronization of the microphones is crucial in microphone arrays, which might determine what type of MEMS microphone to use, since arrays composed of analog MEMS microphones must be synchronized at the ADC. I2S MEMS microphones present the same properties as PDM MEMS microphones, but integrate in the silicon all the circuitry required for the PDM demodulation and multi-bit Pulse Code Modulation (PCM) conversion. Thus, the output of I2S MEMS microphones is already filtered and decimated to the baseband audio sample rate.
The selection of the type of microphones when building a microphone array is determined by the different features that each microphone’s technology provides. The type of microphones and the output data format determine the overall output response and the digital signal processing requirements.
Although both ECMs and MEMS microphones operate as condenser microphones, MEMS microphones benefit from the enormous advances made in silicon technology over the past decades and present several advantages [24] that make them more suitable for many acoustic applications.
MEMS microphones have less sensitivity to temperature variations than ECMs.
MEMS microphones' footprint is around 10 times smaller than that of ECMs.
MEMS microphones have a lower sensitivity to vibrations or mechanical shocks than ECMs.
ECMs have a higher device-to-device variation in their frequency response than MEMS microphones.
ECMs need a specific soldering process and cannot undergo reflow soldering, while MEMS microphones can.
MEMS microphones have a better power supply rejection compared to ECMs, facilitating the reduction of the components’ count of the audio circuit design.
The advantages of the MEMS technology explain why MEMS microphones have slowly replaced ECMs as the default choice for microphone arrays since their introduction by Knowles Acoustics in 2006.
The output format is another relevant factor to be considered, since it directly affects the requirements of the digital signal processing system. Analog microphones demand certain considerations when selecting the codec due to the high impedance and the voltage offset at the microphone's output. Codecs, such as digital pre-amplifiers [26], convert the analog signals from analog microphones, in particular ECMs. These pre-amplifiers drive an oversampled sigma-delta ADC producing PDM output data. This type of integrated circuit facilitates the interfacing of analog microphones with digital processing systems by providing a compatible digital data format like PDM or the I2S audio bus [27]. The use of digital MEMS microphones, however, reduces the complexity of the hardware since they do not require external amplifiers. This makes digital MEMS microphones immune to Radio Frequency (RF) noise and less sensitive to electromagnetic interference compared to their analog counterparts [28].
At the digital signal processing system side, the PDM data produced at a high sample rate needs to be demodulated to an analog form before being heard as audio, or converted to PCM format if it needs to be digitally analyzed [29]. The operations required to demodulate the oversampled PDM signals consist of a multi-filter stage performing the PDM demodulation and PCM conversion [30]. The integration of the PDM demodulation in the silicon reduces the flexibility of I2S MEMS microphones, since they present a fixed demodulator architecture [31]. The PDM demodulation circuitry integrated on the chip is a fixed decimator by a factor of 64 followed by a low-pass filter to remove the remaining high-frequency components in the signals. The microphone operates as an I2S slave, transferring the PCM data with a word length of 24 bits in 2's complement, as depicted in Figure 2. Due to the fixed decimation factor, the digital signal processing system must wait several clock cycles before receiving the PCM signal from each microphone. This solution might satisfy the requirements of some acoustic applications, but it certainly reduces the opportunities for exploring alternative demodulation architectures based on the target application's demands. For instance, different design strategies related to the architecture of the PDM demodulation are proposed in [32] to accelerate a particular type of acoustic application.
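As a minimal sketch of the PDM-to-PCM conversion described above (assuming NumPy and SciPy; the sigma-delta test generator and the filter settings are illustrative choices, not the fixed on-chip demodulator of [31]):

```python
import numpy as np
from scipy import signal

def pdm_to_pcm(pdm_bits, decimation=64):
    """Demodulate a 1-bit PDM stream into baseband PCM samples.

    pdm_bits: sequence of 0/1 values at the PDM clock rate.
    decimation: fixed decimation factor (64, as in the I2S microphones above).
    """
    # Map the 1-bit stream to a bipolar signal in [-1, 1].
    x = 2.0 * np.asarray(pdm_bits, dtype=np.float64) - 1.0
    # Low-pass filter away the shaped quantization noise, then keep
    # one sample out of every `decimation` samples.
    return signal.decimate(x, decimation, ftype="fir")

# Crude first-order sigma-delta modulator generating a test PDM stream:
# 0.1 s of a 1 kHz tone sampled at a 3.072 MHz PDM clock.
fs_pdm = 3_072_000
t = np.arange(fs_pdm // 10) / fs_pdm
target = 0.5 * np.sin(2 * np.pi * 1000 * t)
acc, bits = 0.0, np.empty_like(t)
for i, s in enumerate(target):
    acc += s - (1.0 if acc > 0.0 else -1.0)
    bits[i] = 1.0 if acc > 0.0 else 0.0

pcm = pdm_to_pcm(bits)  # 4800 samples: the 1 kHz tone at 48 kHz
```

On an FPGA this multi-filter stage is commonly realized as a cascade of CIC and FIR decimation filters; redesigning that cascade per application is the kind of strategy explored in [32].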
2.2. Microphone Array Processing
Microphone arrays exploit the joint processing of signals captured by multiple spatially separated microphones. The distance between microphones results in a difference in the path length between the sound sources and the microphones. When these path-length differences are compensated, the signals interfere constructively, amplifying the signal by a factor equal to the number of microphones. The difference depends on the angle of incidence of the acoustic wave and the distance between the microphones. Therefore, microphone arrays are able to reinforce the sound coming from a particular direction while attenuating the sound arriving from other directions. The microphone arrays' frequency response depends on [33]:
the number of microphones
the spacing between microphones
the sound source spectral frequency
the angle of incidence
A high number of microphones improves the frequency response by increasing the Signal-to-Noise Ratio (SNR) [34] and by spatially filtering the sound field more precisely. Regarding the microphone spacing, a large distance between microphones improves the array's response for lower frequencies, while a short spacing prevents spatial aliasing [35]. The array geometry, referring to the position of the microphones in the array, is a wide research field [36], because the geometry aims to enhance acoustic signals and separate them from noise based on the acoustic application [37]. Figure 3 depicts some examples of array geometries.
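These two rules of thumb can be made concrete (a worked example under standard far-field assumptions, with values chosen for illustration). To avoid spatial aliasing, the spacing $d$ must satisfy

$$ d \le \frac{\lambda_{\min}}{2} = \frac{c}{2 f_{\max}}, $$

so for speech content up to $f_{\max} = 8\,\mathrm{kHz}$ and $c = 343\,\mathrm{m/s}$, the spacing must be $d \le 21.4\,\mathrm{mm}$. Likewise, for spatially uncorrelated noise, an array of $N$ microphones improves the SNR by approximately $10 \log_{10} N$, i.e., about $12\,\mathrm{dB}$ for $N = 16$.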
The angle of incidence can be modified, performing a spatial filtering of the sound field, by adapting the path lengths of the microphones' input data. The concept of steering the microphone array's response in a desired direction is called beamforming [38]. Beamforming methods can be applied in the time domain or in the frequency domain. Time-domain beamformers apply different time delays to each microphone to compensate for the path-length differences from the sound source to the microphone array. The basic time-domain beamformer is the well-known Delay-and-Sum. The time delays can also be integrated in Finite Impulse Response (FIR) filters, one per microphone, resulting in the Filter-and-Sum beamformer. Both beamformers can also be applied in the frequency domain. In that case, the signal received from each microphone is separated into narrow-band frequency bins through a discrete Fourier transform, before applying phase-shift corrections to compensate for the difference in path lengths. The beamforming operations present a high level of parallelism and demand a very low latency when targeting real-time applications. Both are well-known strengths of today's FPGAs.
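As a minimal sketch of a time-domain Delay-and-Sum beamformer (assuming a uniform linear array, far-field plane waves and integer-sample delays; function and parameter names are illustrative):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(frames, spacing, fs, angle_deg):
    """Steer a uniform linear array towards `angle_deg` (0 = broadside).

    frames:  (num_mics, num_samples) array of synchronized recordings.
    spacing: distance between adjacent microphones in meters.
    fs:      sample rate in Hz.
    """
    num_mics, num_samples = frames.shape
    # Far-field plane wave: the arrival delay grows linearly along the array.
    delays = np.arange(num_mics) * spacing * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    shifts = np.round(delays * fs).astype(int)
    shifts -= shifts.min()  # make all shifts non-negative
    length = num_samples - shifts.max()
    # Align each channel by its delay, then average: signals from the
    # steered direction add coherently, others are attenuated.
    out = np.zeros(length)
    for m in range(num_mics):
        out += frames[m, shifts[m]:shifts[m] + length]
    return out / num_mics
```

On an FPGA, the per-microphone delay lines map naturally onto block RAM and the summation onto an adder tree, which is one reason this operation parallelizes so well; a Filter-and-Sum variant would replace each pure delay with a short FIR filter per channel.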
3. FPGA Technology
FPGAs are semiconductor devices composed of logic blocks interconnected via programmable connections. The fundamental units of an FPGA are Configurable Logic Blocks (CLBs) consisting of Look-Up Tables (LUTs), built on simple memories (SRAM or Flash) that store Boolean functions. Each LUT has a fixed number of inputs and is coupled with a multiplexer and a Flip-Flop (FF) register in order to support sequential circuits. Likewise, several CLBs can be combined to implement complex functions by configuring the connection switches in the programmable routing fabric. The flexibility of FPGAs enables the embedding of application-specific architectures, which can be tuned to target performance, power efficiency or low latency.
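To make the LUT concept concrete, a k-input LUT is simply a $2^k$-entry truth table addressed by its inputs; a small illustrative sketch (hypothetical, not tied to any vendor's bitstream format):

```python
# A 2-input LUT configured as XOR: programming an FPGA amounts to
# filling such tables (and configuring the routing between them).
LUT_XOR = [0, 1, 1, 0]  # outputs for inputs (a, b) = 00, 01, 10, 11

def lut_eval(table, *inputs):
    """Evaluate a LUT: the input bits form the address into the table."""
    addr = 0
    for bit in inputs:
        addr = (addr << 1) | (bit & 1)
    return table[addr]

assert lut_eval(LUT_XOR, 1, 0) == 1
assert lut_eval(LUT_XOR, 1, 1) == 0
```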
Figure 4 depicts the FPGA design flow. FPGAs are programmed with Hardware Description Languages (HDLs), which describe the desired functionality to be mapped onto the reconfigurable hardware. The hardware description elaborated by the designer is used by the vendor's synthesizer to find an optimized arrangement of the FPGA's resources implementing the described functionality. During synthesis, the design is translated into Register Transfer Level (RTL) netlists. This reconfigurability distinguishes FPGAs from Application-Specific Integrated Circuits (ASICs), which are custom manufactured for specific design tasks. The application's functionality can also be described through digital logic operators represented as schematic diagrams. The netlists generated at the synthesis stage are used during the implementation stage to perform several steps: translation, mapping, and place and route. The translation merges the incoming netlists and constraints into the vendor's technology for the target FPGA. The mapping fits the design into the available resources, such as CLBs and I/Os. The place-and-route step places the design and routes the components to satisfy the timing constraints. Finally, a bitstream, which is used to program the FPGA with the design, is generated and downloaded to the device.
The FPGA design flow demands design verification at the implementation stage, which is done through logic or timing simulations. Moreover, the synthesis and implementation stages require many compute-intensive operations, demanding minutes to hours to complete depending on the amount of FPGA resources used. As a result, the overall design flow becomes a highly time- and effort-demanding task. In the recent past, several High-Level Synthesis (HLS) tools have been developed to alleviate the hardware description effort by using high-level languages, such as C/C++ [40] or OpenCL [42]. This high-level approach increases the reusability of the hardware description code and facilitates the debugging and verification process.
FPGA resources have increased in recent years following the improvements in silicon technology. This increase in the available resources enables the embedding of general-purpose soft-core processors using the reconfigurable logic. Most of these customizable processors are 32-bit processors with a Reduced Instruction Set Computer (RISC) architecture. Existing open-source soft-core processors, such as the OpenRISC [43], and especially the recent RISC-V [44] architecture, have been proposed in recent years as alternatives to the Xilinx Micro/PicoBlaze [45] or the Intel/Altera Nios-II [46] soft-core processors. These soft-core processors are widely used in control-related applications or for the management of communication processes. The designer's effort to program such general-purpose processors is reduced thanks to the use of high-level languages. Although this type of embedded processor allows fine-grained customization at the instruction level and can be easily modified, their performance is not very high, as they operate in a range from 50 MHz to 250 MHz. In recent years, there has been a move towards System-on-Chip (SoC) devices, and FPGAs have been combined with hard-core processors, which are processors implemented with a fixed architecture in the silicon. Hard-core processors together with FPGA fabric provide a larger interconnection bandwidth between both technologies and achieve faster processing speeds, since they are not limited by the reconfigurable logic speed. Figure 5 depicts a Xilinx Zynq SoC FPGA series device [47], composed of a Processing System (PS), which is a dual-core ARM Cortex-A, and Programmable Logic (PL) based on Artix-7 or Kintex-7 FPGA fabric. Such SoC FPGAs demand, however, a hardware/software co-design to be fully exploited.
The high level of parallelism achievable on FPGAs makes them well suited not only for multiple customized data-path processes, such as audio signal demodulation (Table 1), but also for performing complex audio beamforming operations (Section 4.2.2, Section 4.2.3 and Section 4.3). The embedding of such computationally demanding signal processing operations on FPGAs has only become possible in the recent past due to several factors. Figure 10 depicts the categorization of the presented related work. The evolution of the number of microphones per array over the last years reflects some interesting facts about how FPGAs are used. Notice, however, that some designs like [68] are not included since their system is composed of multiple FPGAs. In the early 2000s, the first uses of FPGAs to process microphone array signals mainly embedded simple sound source localization applications applying TDOA [55]. Such applications require a minimum number of microphones, since the cost of the traditional GCC used for TDOA grows rapidly with the number of microphones, as one cross-correlation is needed per microphone pair (see the sketch after the list below). FPGAs started to be seriously considered in the following decade, being involved in a broader range of applications such as [72]. Several factors might justify the increasing adoption of FPGA technology:
Cheaper, smaller and fully integrated microphones, like digital MEMS microphones [102], facilitate the construction of larger arrays, increasing the computational demands beyond what microprocessors or DSPs can deliver.
FPGAs have also benefited from Moore's law [103]: thanks to a higher transistor integration in the same die, FPGAs offer larger amounts of reconfigurable resources.
Advances in the FPGA design tool chain, like the HLS tools [40], have reduced the overall effort to develop and to accelerate new and existing applications on FPGAs.
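Regarding the TDOA cost noted above, the following sketch shows a pairwise GCC-PHAT estimator (assuming NumPy; an illustrative implementation, not the one used in [55]); with $N$ microphones, $N(N-1)/2$ such evaluations are needed, which is why the cost grows quickly with the array size:

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Estimate the TDOA (in seconds) between two microphone signals."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-12   # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2
    # Re-center the circular cross-correlation around zero lag.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    # Positive delay means x arrives `delay` seconds after y.
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```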
Cheaper and smaller microphones facilitate the construction of larger arrays. The MEMS technology to build microphones has been available since the early 2000s [22], but was only introduced in commercial devices in 2006; Apple later included a 3-microphone array in its iPhone 6. The replacement of ECMs by MEMS microphones to build microphone arrays only started around 2010, when Knowles Acoustics lost its monopoly on the commercialization of MEMS microphones [105]. As a result, several acoustic applications, mostly related to sound source localization, have been implemented on FPGAs (Figure 10).
In the early 2000s, FPGAs were only considered for the audio signal demodulation of the incoming data streams from microphone arrays composed of tens to hundreds of microphones [3], or as sound source locators for 2-microphone arrays [55]. Over the last years, applications demanding a relatively large number of microphones have also used FPGAs to embed the most computationally demanding operations. The trend is to embed more complex applications on the FPGA: although some FPGA-based architectures still allocate the audio demodulation operations on the FPGA's resources, FPGAs are no longer used exclusively for audio demodulation. Current FPGAs provide a larger amount of resources, including DSPs and internal blocks of memory (BRAM), allowing the implementation of more complex architectures targeting real-time signal processing applications. For instance, FPGAs such as the Xilinx Virtex-II used in [55] have been replaced by larger SoC FPGAs such as the Xilinx Zynq SoC FPGA-based board used for acoustic imaging in [93].
New HLS tools such as Xilinx Vivado HLS [40], LabVIEW from National Instruments [94], Xilinx System Generator [104] or HDL Coder for Matlab/Simulink [106] significantly reduce the overall development cost. For instance, Xilinx System Generator [104] incorporates several libraries into the Matlab/Simulink tool which allow high-level prototyping of FPGA designs. HLS tools have been helpful to develop complex designs, as in [66]. The reduction of the overall effort to develop complex designs on FPGAs is one of the advantages of such tools. Moreover, the distribution of the computational tasks in heterogeneous systems, such as SoC FPGAs or systems including GPUs, as in [93], is simplified thanks to designing at a higher level of abstraction.
Although the spatial resolution increases with a larger number of microphones per array [107], the additional benefit of adding microphones decreases when weighed against the increase in computational demand, since the added value of each extra microphone diminishes beyond a certain amount. For instance, the number of microphones per FPGA not only did not increase over the last two years, but even decreased. Moreover, the integration of a large number of microphones in a planar array becomes extremely challenging without increasing the microphone spacing, leading to microphone arrays of several meters long [87].
The decrease in the number of microphones per array also occurs for applications related to acoustic imaging. Acoustic cameras are extremely performance demanding, especially when targeting real-time. In fact, the parallel computation of tens to hundreds of incoming microphone signals can simply exhaust the FPGA's available resources. Moreover, acoustic cameras need to steer towards hundreds of thousands of orientations in order to provide acceptable image resolutions [85]. Therefore, the trend of FPGA-based acoustic cameras is to converge to a balance between the FPGA's resource consumption, the target performance and the desired acoustic image resolution.
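To illustrate the order of magnitude (an illustrative calculation with assumed values, not taken from [85]): an acoustic image of $640 \times 480$ pixels already requires

$$ 640 \times 480 = 307{,}200 \text{ steered orientations per frame}, \qquad 307{,}200 \times 30\,\mathrm{fps} \approx 9.2 \times 10^{6} \text{ beamformed outputs per second,} $$

where each output is itself a sum over all microphone channels.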
A trend of FPGA-based architectures for microphone arrays is to embed more complex acoustic applications while reducing the number of microphones in the array. FPGAs are no longer considered only for audio signal demodulation, but also as a platform on which computationally demanding acoustic applications, such as acoustic imaging, can be embedded. Constraints such as a real-time response or power efficiency become more relevant when targeting new acoustic applications like acoustic imaging or WSN-based applications. Modern FPGAs not only provide a larger number of resources on which to embed complex applications, but also integrate CPUs and even GPUs in the same die [108], or become extremely power efficient in the case of Flash-based FPGAs [109].
FPGA technology, however, is far from being fully exploited. Nowadays FPGAs present interesting features which have not yet been fully explored, such as dynamic partial reconfiguration. Therefore, it is expected that upcoming FPGA-based acoustic applications will not only fully embed their operations on the FPGA but also exploit some of the unique features that this technology offers.
7. Challenges and Research Opportunities
The current state of the art of FPGA-based acoustic applications has been summarized in Table 1, Table 2, Table 3, Table 4 and Table 5. Although the characteristics of these FPGA-based designs have been discussed in the previous sections, important features such as the achievable performance or the power efficiency have not been analyzed. Their relevance for today's acoustic applications is, however, critical when choosing a technology.
FPGA technology offers unique features which could satisfy the most performance-demanding acoustic applications. The low latency usually required by acoustic applications such as speech enhancement is achievable on FPGAs when fully embedding the signal processing operations. Most of the current FPGA-based designs still perform the computation-hungry operations on general-purpose processors, demanding high-bandwidth I/O connections in order to satisfy the low latency required for real-time applications. Given such constraints, only a few designs fully embed all computations on the FPGA [34]. Furthermore, several architectures, such as those found in [62], have already considered SoC FPGAs to distribute the tasks between the different technologies. These heterogeneous platforms present new opportunities to combine acoustic applications with other types of applications. For instance, different types of sensors (e.g., infrared cameras) can be combined with acoustic microphone arrays for smart surveillance. Acoustic cameras already combine microphone arrays with traditional RGB cameras by overlapping their images. Many applications could exploit such combinations by using SoC FPGAs to process the sensing information from each device in real time.
Although real-life environments present dynamic acoustic responses, most FPGA-based architectures cannot adapt their response to different acoustic contexts. Although certain solutions like [66] consider adaptive beamforming techniques, the overall response of the system only varies within a short range. Many applications need to change their behavior based on the acoustic context [110], such as applications targeting specific sound sources [111], where a simple adaptation of the filters is not enough and a different feature extraction is needed. Adjustments to the number of active microphones, the acquisition time or the target sound source demand the implementation of complex context-switch controllers. FPGAs provide a unique feature which allows parts of the embedded functionality to be reconfigured at runtime. FPGA dynamic partial reconfiguration [112] provides a context-switch capability which is not present in other technologies. An example of the potential benefit of using partial reconfiguration is shown in [76], where the proposed architecture uses a low-level reconfiguration to dynamically adjust the angular resolution of its sound locator. Similarly, the authors in [114] present a SoC FPGA implementation of a Frost beamformer. Although the authors do not target microphone arrays, the architecture appears compatible and the principles of their approach are applicable to different types of sensor arrays. Their architecture presents two interesting approaches: firstly, the distribution of the computations between the ARM Cortex-A9 hard-core Processing System (PS) and the Programmable Logic (PL), and secondly, the use of partial reconfiguration to adjust the Frost beamformer.
The dynamism required by advanced acoustic applications also relates to power efficiency. FPGAs are well known for their power efficiency, offering more operations per watt than general-purpose processors [115] and other hardware accelerators, such as GPUs [116]. WSN-related applications are very sensitive to power efficiency since the network is often built on battery-powered nodes. The use of microphone arrays as sensing nodes of WSN applications demands power-efficient solutions. FPGA-based solutions have already shown how the power efficiency of microphone arrays can be increased. For instance, the power consumption per microphone has decreased from 400 mW using DSPs in [2] to only 77 mW per microphone in [55], and more recently, to only 27.14 mW for the overall system in [72].
One can conclude that the characteristics of FPGAs have not been fully exploited and many acoustic-related applications can benefit from this technology. Recent heterogeneous SoC FPGAs provide enough resources not only to embed acoustic applications but also to extend their functionality by combining them with different types of applications while satisfying the performance and power efficiency demands. Acoustic applications involving machine learning, such as acoustic scene recognition [118] or learning situations in home environments [120], can directly benefit from the FPGA's features. Beyond SoC FPGAs, standalone FPGAs offer unique features, such as partial reconfiguration, which can certainly provide the flexibility that many multimodal applications, such as human-robot interaction [121] or multimodal acoustic imaging [122], demand. FPGAs still have much to offer to acoustic applications using microphone arrays.