Electronics

17 July 2020

Design of a Low-Cost Configurable Acoustic Sensor for the Rapid Development of Sound Recognition Applications

GTM—Grup de recerca en Tecnologies Mèdia, La Salle—Universitat Ramon Llull, c/Quatre Camins, 30, 08022 Barcelona, Spain
* Author to whom correspondence should be addressed.
This article belongs to the Section Circuit and Signal Processing

Abstract

Concerned about noise pollution in urban environments, the European Commission (EC) created the Environmental Noise Directive 2002/49/EC (END), which requires Member States to publish noise maps and noise management plans every five years for cities with a high density of inhabitants, major roads, railways and airports. The END also requires the noise pressure levels for these sources to be presented independently. Currently, data measurements and the representation of noise pressure levels in such maps are performed semi-manually by experts. This process is time- and cost-consuming, and it is limited to presenting only a static picture of the noise levels. To overcome these issues, we propose the deployment of Wireless Acoustic Sensor Networks with several nodes in urban environments that can enable the generation of real-time noise level maps and detect the source of the sound thanks to machine learning algorithms. In this paper, we briefly review the state of the art of the hardware used in wireless acoustic applications and propose a low-cost sensor based on an ARM Cortex-A microprocessor. This node is able to run machine learning algorithms for sound source detection in situ, allowing the deployment of highly scalable sound identification systems.

1. Introduction

The number of people living in urban areas has been greater than in rural areas since 2010, with around 50.5% of the world's population residing in towns or cities []. Moreover, according to the United Nations (UN), this tendency is expected to increase over the next four decades [,]. In order to cope with this growth, the authorities of major cities envision an evolution towards smart cities or smart regions [,], with a focus on improving the quality of life of urban inhabitants. Doing so requires significant changes in governance, decision-making and the development of action plans [].
The smart city concept encompasses several quality of life and health indicators; among them, the availability of digital services and acoustic pollution []. The latter is justified by large-scale studies in Europe that revealed severe adverse effects on the health and life expectancy of the inhabitants of acoustically polluted environments [,]. In order to address these issues, the European Commission has created the Environmental Noise Directive 2002/49/EC (END) [], in force since December 2018, and the Common Noise Assessment Methods in Europe (CNOSSOS-EU) []. They define, respectively, the obligations of Member States regarding the management of urban noise and the common methods that they are expected to follow. In particular, the END requires Member States to publish noise maps and noise management plans every five years for agglomerations of more than 100,000 inhabitants and for major roads, railways and airports. The directive, supported by the latest reports from the World Health Organization (WHO) [], also requires that separate strategic maps be created for road-traffic [,,], rail-traffic [,], aircraft and industrial noise [,], showing the acoustic pressure levels of each of these noise sources alone. Currently, such maps are created using standard semi-manual, expert-based procedures, which are not only time- and cost-consuming, but also provide primarily a static picture of noise levels.
The emergence of Wireless Acoustic Sensor Networks (WASNs) represents a paradigm shift in the way the END regulatory requirements can be addressed, by enabling the generation of real-time dynamic noise maps. The difficulty with automatic map updating is that most acoustic sensors in WASNs can only measure the A-weighted equivalent noise level L_Aeq, without determining the source of the sound. Therefore, the resulting map may represent a mix of the noise sources (traffic, airplanes, people, construction works, etc.) present in the acoustic environment, not fulfilling the requirements of the END [], which demands a distinction of the noise source. Moreover, as shown by several projects developed in Europe and in America over recent years, authorities are starting to ask for the distinction of noise sources so that they can design more effective policies []. For example, in the SENSEable PISA project [,], authorities are not only interested in the acoustic pressure levels of noise, but also in distinguishing their sources to determine the relationships between certain types of noise and their health effects. The DYNAMAP project [,] also requires the distinction between road traffic noise and any other kind of noise (named Anomalous Noise Events, ANE) in order to build real-time traffic noise maps. Finally, the Sounds of New York City (SONYC) project has also gathered huge amounts of acoustic data from the city of New York to classify the ten most frequent types of noise across the city [].
Nevertheless, most WASNs deployed in smart cities are being under-exploited. They are usually unable to extract all the information that urban sounds carry because their sensors cannot perform more computation than the L_Aeq value calculation over a variable time span. To address this limitation, we have developed SmartSound, an acoustic event detection algorithm that can be trained, tested and deployed rapidly on a low-cost acoustic sensor designed for that purpose. SmartSound can be embedded in acoustic sensors to create truly smart WASNs that can separate and accurately measure the different types of sounds in a city. These networks can be deployed permanently across the city or even temporarily in places of interest, creating unprecedented value for city administrators and businesses. The former can take advantage of dynamic noise maps to manage the city and to provide citizens with precise, up-to-date information on environmental noise. The latter can create innovative services and products. For example, a high-end real estate agency may want to offer, as an added value to its customers, a characterization of the sound landscape around the properties in its catalogue. The first deployed prototype of the SmartSound technology was successfully applied to the two pilot areas of the DYNAMAP project [], the urban area of Milan [] and the suburban area of Rome []. In that case, the SmartSound technology was adapted to apply a binary classification between the sounds belonging to Road Traffic Noise (RTN) and all the other sound sources, called Anomalous Noise Events (ANE) [], but SmartSound can also support multiclass classifiers [].
This paper describes the design of a configurable low-cost acoustic sensor to enable the creation of smart WASNs and the rapid development of sound identification solutions that take advantage of these networks. Most of the previously described initiatives mainly evaluate the L_Aeq, and some of them also include the detection of certain types of events. Our goal is to widen this scope and improve the design stage of any WASN with any final noise source detection requirement. For this purpose, and in order to identify the sources of sound, the proposed sensor supports sound capture and real-time processing of acoustic data, Wi-Fi and 3G connectivity, as well as integration with Sentilo (an Internet of Things (IoT) platform) and SmartSense (an internal R&D platform for the rapid prototyping of the SmartSound technology). The latter allows on-the-fly configuration of the sensor's sampling frequency and data precision, of the processing algorithms, and of whether to receive audio samples of variable length.
This paper is structured as follows: Section 2 briefly describes the context of this development and the state of the art of acoustic urban sensing projects. Section 3 explains the proposed architecture for a Smart WASN and lists the system requirements. Section 4 describes the elements that make up the sensor. Finally, Section 5 discusses the implications of our design and concludes the paper.

3. System Description

In this section, we define the architecture for a smart WASN; that is, a WASN capable of identifying the source of noise events and, consequently, providing a more accurate measurement of the different noise types. In order to make the WASN scalable, we designed it as a distributed intelligent system in which smart acoustic sensors (i) capture the audio, (ii) process the audio frames to obtain the label of the noise event using machine learning algorithms, and (iii) send this information to a remote server. The transmitted data are the acoustic pressure level (L_eq), the label of the acoustic event, and its timestamp. Finally, the output of this processing is plotted with Sentilo, an IoT platform that we use to represent the audio information at the different locations.
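As a minimal illustration of this payload (the exact message format is not disclosed in the paper, and the field names below are hypothetical), each node could publish one small record per analysed second:

```python
import json
import time

def build_observation(label: str, leq_db: float) -> str:
    """Assemble the per-second observation sent to the server (illustrative fields only)."""
    observation = {
        "timestamp": int(time.time()),  # time of the analysed second (UNIX epoch)
        "label": label,                 # noise-source label, e.g. "rtn" or "siren"
        "leq_db": round(leq_db, 1),     # equivalent level L_eq of that second, in dB
    }
    return json.dumps(observation)

print(build_observation("rtn", 68.3))
```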

3.1. Description of the WASN Architecture

The aforementioned system is made up of two elements: (i) the wireless acoustic sensors and (ii) the remote server, shown in Figure 1. The sensors run the SmartSound technology, which is a set of data-intensive algorithms for processing audio frames and classifying noise events (see Section 3.2), while the remote server stores the L_eq and the output of these processing algorithms, and allows the remote configuration of the sensors through the SmartSense platform, a research tool for the rapid development of sensor-based solutions (see Section 3.3).
Figure 1. Elements of the Wireless Acoustic Sensor Network (WASN) architecture to capture the audio, process it and send it to a remote server for monitoring the soundscape of an urban area.
Distributing the intelligence for noise source identification among the sensors allows us to deploy a WASN with any number of nodes. Moreover, it lowers the requirements on the wireless network and enhances privacy, because the system only sends the label and the L_eq of the recorded audio frame every second, instead of sending the raw acoustic data at 48 ksps. This architecture, therefore, requires the smart acoustic sensors to have sufficient computing capability to process all this information. In order to evaluate the trade-off between performance and cost when designing such a sensor, we carried out a comparison of the computing platforms used in embedded systems. This comparison is detailed in Section 4.
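A rough back-of-the-envelope comparison illustrates this reduction; the 50-byte message size is an assumption, since the exact frame format is not specified:

```python
# Raw audio stream: 48 ksps with 16-bit samples.
raw_bytes_per_second = 48_000 * 2            # 96,000 B/s

# On-node processing: one small label/L_eq message per second (assumed ~50 bytes).
message_bytes_per_second = 50

reduction = raw_bytes_per_second / message_bytes_per_second
print(f"Data sent over the wireless link is reduced by ~{reduction:.0f}x")  # ~1920x
```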

3.2. The SmartSound Technology

The SmartSound technology is a sound recognition system that listens to an audio stream and uses signal processing and machine learning algorithms to identify, for each acoustic event, its type and absolute measurements. Figure 2 represents how SmartSound can be embedded in sensors at strategic points within the cities and send information to servers for various purposes.
Figure 2. SmartSound running on a smart WASN.
The technology is based on supervised learning and consists of two main processes, as shown in Figure 3: (i) signal feature extraction and (ii) noise event identification. The signal feature extraction process obtains a feature set representing the acoustic characteristics of the noise signal. A feature vector is computed for each frame, thus obtaining a compact representation of the signal. Then, a supervised learning system is trained with multiple samples of noise events recorded in the real environment. As a result of this training process, the system is capable of distinguishing between different types of Anomalous Noise Events (ANE), thus being able to label new incoming noise samples as belonging to different noise sources. Examples of ANEs are the sounds of people, cars, motorcycles, bells, etc.
Figure 3. SmartSound on a WASN node.
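The paper does not disclose the specific features or classifier used by SmartSound; the sketch below only illustrates the two-stage pipeline of Figure 3 with MFCC features and a support vector machine, assuming the librosa and scikit-learn packages:

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def extract_features(frame: np.ndarray, sr: int = 48_000) -> np.ndarray:
    """Compact per-frame feature vector (MFCC means), standing in for the SmartSound features."""
    mfcc = librosa.feature.mfcc(y=frame, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def train_classifier(frames, labels):
    """Supervised training on annotated noise events recorded in the real environment."""
    X = np.stack([extract_features(f) for f in frames])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf

# Labelling a new incoming frame:
# label = clf.predict(extract_features(new_frame).reshape(1, -1))[0]
```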

3.3. The SmartSense Platform

The SmartSense platform is a research and development tool used by the Research Group on Media Technology (GTM) of La Salle, Universitat Ramon Llull. The platform allows the rapid development of proofs of concept of the SmartSound technology. It enables the GTM researchers to develop the core of distinct applications by combining pre-programmed signal processing and machine learning algorithms and to deploy them on remote sensors. While these applications are intended to carry out the processing entirely within the sensor (for privacy and bandwidth reasons), during their development researchers may choose to receive samples of audio data of varied sizes when certain ANEs are detected, in order to double-check the classification of the noise events. Finally, the researchers can use the platform to remotely configure several options on the sensor, as explained in Section 3.4.

3.4. Sensor Requirements

To handle all features described above, the smart acoustic sensor must fulfill the following requirements:
  • Price: Accurate acoustic sensors based on sound level meters, commonly found in WASNs, have long cost thousands of euros (see class/type 1 and 2 microphones in Section 4.2). Such high costs limit the number of nodes and, consequently, the coverage of the network that city administrators can deploy. We envision smart WASNs deployed permanently across the entire city, as well as temporarily at points of interest, by both the public administration and private businesses. Therefore, low cost is an essential requirement of our sensor.
  • Processing capacity: The sensor should be able to run the SmartSound technology to identify the source of noise events by processing an audio stream in real-time.
  • Storage capacity: The storage requirement is minimal because, for privacy reasons, the audio is deleted as soon as it is processed and the label is obtained.
  • Microphone: The sensor should be able to analyse the raw audio signal and identify the sources of noise that are audible to the human ear. Therefore, it should ideally support an operating frequency range of 20 Hz to 20 kHz.
  • Power Supply: We assume that the services to be developed by our research group based on WASNs are not critical and that sensors will be located in strategic places with Alternating Current (AC) power supplied by the city, such as light posts and building facades. Therefore, this first version of the sensor does not require a battery-based backup power system.
  • Wireless communications: The sensor should be able to send the results of the processing to a server and to an IoT platform for visualization, as well as allow on-the-fly configuration of parameters and the replacement of processing algorithms. To this end, both Wi-Fi and 3G should be supported.
  • Outdoors exposure: The device has to operate outdoors for long periods of time, during which it will be exposed to wind and rain. Therefore, it requires effective protection against adverse atmospheric conditions.
  • Re-configurable: The researchers can configure the sampling frequency (up to 48 kHz) and the data precision of the sensor (16 or 32 bits per sample), replace processing algorithms on-the-fly and request audio samples of varied sizes when ANEs of interest are detected. These configurations will be adjusted to develop and test the proof-of-concept applications; a minimal configuration sketch is shown after this list.
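As an illustration of the options listed above (the actual Config.dat syntax used by SmartSense is not shown in the paper, so the keys below are hypothetical), a remote configuration could look like this:

```python
# Hypothetical remote configuration; keys and values are illustrative only.
sensor_config = {
    "sampling_rate_hz": 48_000,   # up to 48 kHz
    "bits_per_sample": 16,        # 16 or 32
    "frame_length_s": 1.0,        # length of each analysed audio frame
    "network": "wifi",            # "wifi" or "3g"
    "send_audio_on_ane": True,    # return audio clips when an ANE of interest is detected
    "audio_clip_length_s": 5.0,   # size of the returned clips
}
```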

4. Smart Acoustic Sensor Design

This section describes the elements that make up the smart acoustic sensor. We report the hardware components used and briefly describe the main characteristics of the developed software.

4.1. Computing Platform

Nowadays, an embedded system’s core can be based on microcontrollers, microprocessors, Field Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs) and Application-Specific Integrated Circuits (ASICs) [] or even High Performance Computing (HPC) devices, such as GPUs.
Embedded systems are mostly based on ARM architectures, which come in three different profiles: (i) Cortex-A, application processor cores for performance-intensive systems; (ii) Cortex-R, high-performance cores for real-time applications; and (iii) Cortex-M, microcontroller cores for a wide range of embedded applications.
Table 1 shows some computing platforms used in the literature, some of which are presented in Section 2. We can observe that several different architectures have been used, such as proprietary 8- or 16-bit Reduced Instruction Set Computer (RISC) cores, ARM Cortex-M, ARM Cortex-A and 32-bit x86, depending on the final application or system requirements.
Table 1. Available platforms in the literature.
An Operating System (OS) such as Linux greatly simplifies the implementation of the aforementioned requirements. For this reason, we have chosen a low-cost embedded platform that allows us to run a Linux distribution with all the drivers required to control the hardware mentioned above.
A Raspberry Pi 3, equipped with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 CPU, has been chosen because it is a commonly used platform for low-cost applications, with a cost of approximately 30€. It has been increasingly used in recent years by researchers, universities and amateur engineers, and this community creates and documents device drivers, tutorials, application examples, etc. Moreover, the Raspberry Pi 3 matches our requirements because the platform ships with Bluetooth Low Energy (BLE) and Wi-Fi modules for wireless communications.

4.2. Microphone

The acoustic data acquisition subsystem is composed of the microphone and the acquisition circuit. Usually, the selection criteria for microphones are a wide frequency range, a flat frequency response and a high sensitivity. In fact, IEC 61672 defines two classes of sound level meters according to the tightness of the error tolerances, where class 1 microphones have a wider frequency range with tighter error tolerances than class 2 (see Table 2). This applies to the measurement instruments and also to the calibrator.
Table 2. Frequency-dependent error tolerances (in dB) for classes 1 and 2.
However, we also have to take into consideration the trade-off between performance and cost, since keeping the cost low is one of our main requirements.
Table 3 shows some acoustic acquisition systems found in the literature. The table lists, in order, low-cost microphone capsules, high-performance microphone capsules, embedded microphones with pre-amplifiers, and embedded microphones with pre-amplifiers and an outdoor kit.
Table 3. A list of some commercial microphones found in the literature. The list comprises capsules, whole microphones, microphones with an outdoor kit, and microphones with an outdoor kit and outdoor case.
We have developed the acoustic acquisition system based on the Adafruit I2S MEMS Microphone Breakout (SPH0645LM4H) measuring microphone. This device exhibits a good trade-off between performance and price, presenting a flat frequency response, as can be seen in Figure 4. The microphone costs 6.95€ and has a bandwidth between 50 Hz and 16 kHz. Although this bandwidth is slightly narrower than our initial ambition (20 Hz to 20 kHz), the sensor is very affordable, and the range up to 16 kHz includes most of the regular audible sounds that we are interested in. During the study, we relaxed this requirement because, on the one hand, it made the sensor cheaper and, on the other hand, sound pressure level is usually measured in octave bands covering the range between 31.5 Hz and 16,000 Hz, whose tails are highly attenuated by the A-weighting filter. It should be noted that the sensor does not meet the class 1 and class 2 tolerances for frequencies below 50 Hz, because its frequency response there falls below −5 dB instead of −3 dB. Nevertheless, the response provided by the manufacturer (Figure 4) also shows that the class 1 and class 2 requirements are fulfilled in the frequency range between 60 Hz and 16,000 Hz (see Table 2 and Figure 4). On the other hand, the microphone selected for the sensor (Adafruit I2S MEMS Microphone Breakout) exhibits a lower sensitivity (−42 dB) than class 1 or class 2 microphones, which range between −26 dB and −28 dB, as can be observed in Table 3. This means that sounds with a low sound pressure level may not be detected. However, the aim of this platform is to measure sounds with relevant sound pressure levels and to detect the source of those undesirable noises.
Figure 4. Typical free field response normalized to 1 kHz (data collected from []).
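The attenuation of the band edges mentioned above can be checked against the standard IEC 61672 A-weighting curve. The sketch below evaluates that curve at the octave-band centre frequencies; it is a plain NumPy implementation of the published formula, not code taken from the sensor:

```python
import numpy as np

def a_weighting_db(f):
    """IEC 61672 A-weighting in dB, normalised to 0 dB at 1 kHz."""
    f = np.asarray(f, dtype=float)
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * np.log10(ra) + 2.0  # +2.0 dB shifts the curve to 0 dB at 1 kHz

for fc in [31.5, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]:
    print(f"{fc:7.1f} Hz -> {a_weighting_db(fc):6.1f} dB")  # 31.5 Hz is attenuated by about 39 dB
```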
Finally, the signal is captured, processed and transmitted remotely by the Raspberry Pi 3 platform. We are currently working on the adaptation of the microphone to outdoor environments, because the commercial solutions based on sound level meters have a prohibitive price for our application. To illustrate this, complete outdoor microphones are listed in Table 3 and outdoor kits (microphones not included) in Table 4.
Table 4. A list of commercial outdoor kits to adapt existing microphones (microphones are not included).

4.3. Power Supply

The device will be powered by connecting it to the AC mains of the city, a common practice when deploying urban devices []. In order to power all the electronic components inside it, an Alternating Current to Direct Current (AC-DC) converter is used: the TT Electronics IoT Solutions SGS-15-5. This Power Supply Unit (PSU) offers one fixed-voltage output of 3 A at 5 V, with a total power of 15 W. This rating fits our application, since the two components that require the most power are the Microprocessor Unit (MPU), with 2.5 A at 5 V (12.5 W), and the 3G module, which can be powered at 5 V and has a maximum power consumption during transmission of 2 W; together they draw roughly 14.5 W. These power requirements are shown in Table 5. Moreover, this PSU has a small footprint (2.30 × 3.34 inches), which makes it suitable for integration in the box and leaves enough room for the other components.
Table 5. Summary of the power requirements of the system.

4.4. Wireless Communications

An important feature of this device is its wireless communication capability, used to connect to the IoT platform, to send the results of the processing and data samples to the SmartSense server, and to allow the remote configuration of the sensor (which also reduces human effort and cost).
The device supports both Wi-Fi (module included in the Raspberry Pi 3) for urban locations where Barcelona WiFi is available [] and 3G (an external module from Adafruit) for the remaining cases.
As the Raspberry Pi 3 already contains a Wi-Fi module, only the 3G functionality had to be added, using an Adafruit board. The FONA 3G Cellular Breakout was chosen, as this board offers quad-band operation: 850 MHz Global System for Mobile Communications (GSM), 900 MHz Extended GSM (EGSM), 1800 MHz Digital Cellular System (DCS) and 1900 MHz Personal Communications Service (PCS), so it can connect to any global GSM network with any 2G Subscriber Identity Module (SIM). AT commands over a UART interface can be used to configure and control it at 300, 600, 1200, 4800, 9600, 19,200, 38,400, 57,600, 115,200, 230 k, 461 k, 961 k, 3.2 M, 3.7 M and 4.0 Mbps. The board also contains a Global Positioning System (GPS) module, which may be useful to confirm the sensor position in the city and ensure that it matches the right identifier.
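As an illustration of driving the module from Python over the UART (assuming the pyserial package and the board attached at /dev/ttyS0; the device path and baud rate are assumptions), a basic handshake and signal-quality query could look like this:

```python
import serial  # pyserial

# Device path and baud rate are assumptions for this sketch.
with serial.Serial("/dev/ttyS0", baudrate=115200, timeout=2) as modem:
    for cmd in (b"AT\r\n", b"AT+CSQ\r\n"):  # AT: handshake; AT+CSQ: signal quality (standard 3GPP command)
        modem.write(cmd)
        print(modem.read(64).decode(errors="replace"))
```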

4.5. Boxing

As this sensor will be placed outdoors, it should be protected by a box with an International Protection (IP) rating [] of 65. The first digit indicates protection against the ingress of solid foreign objects, with a rating of 6 meaning "No ingress of dust; complete protection against contact". The second digit represents the protection of the equipment inside the enclosure against harmful ingress of water, with a rating of 5 meaning "Water projected by a nozzle (6.3 mm) against the enclosure from any direction shall have no harmful effects". An enclosure such as the BUD Industries PNR-2601-C can be used, which has a price of 10.31€; however, further analysis has to be conducted to select the most suitable option depending on the characteristics of the project, such as thermal and humidity isolation, size, price, etc.

4.6. Sensor Proposal

In summary, the proposed MPU is the Raspberry Pi 3, which includes BLE, Wi-Fi and a Linux-based OS. Its power supply, an SGS-15-5, is able to provide up to 15 W at low cost. The microphone, an SPH0645LM4H, has a bandwidth between 50 Hz and 15 kHz at a very low cost. Finally, the 3G module can be removed to reduce cost when a Wi-Fi network is available.
All these components are summarized in Table 6, along with their prices. The total cost of the sensor is therefore around 139€.
Table 6. Final components that comprise the device presented in this work.
The simplified block diagram in Figure 5 depicts the main hardware components and how they relate to each other. Figure 6 shows a photo of the sensor.
Figure 5. Block Diagram of the platform including its main hardware components.
Figure 6. Picture of the smart sensor.

4.7. Software Implementation

The software of this smart wireless acoustic sensor has been implemented in Python due to the extensive documentation and libraries available online. The algorithm comprises the three main tasks shown in Figure 7: (i) acquisition of audio frames, (ii) classification of the source of sound and, finally, (iii) the transfer of the results to Sentilo and the SmartSense platform. The algorithm consumes between 55% and 75% of the CPU and 8.5% of the RAM.
Figure 7. Data path and software dependencies of the proposed system.
Python_Stream.py captures the raw audio provided by the microphone through an I2S interface. The data acquisition can be configured remotely by modifying the Config.dat file through the Socket.py module. The configuration parameters are (i) the sampling frequency, (ii) the number of bits per sample, (iii) the data frame size of the audio and (iv) the wireless communication interface. These parameters, as well as the machine learning algorithms themselves, can be configured remotely from the SmartSense platform. Classifier.py uses the previously trained model to label the audio; the inputs of this task are the raw audio data and the configuration parameters. This task also computes the L_eq and obtains the timestamp. Finally, the algorithm gathers the label, the L_eq and the timestamp, prepares a data frame and sends it wirelessly to the remote server through Wi-Fi or 3G.
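The source code of these scripts is not included in the paper; the sketch below only illustrates how the three tasks of Figure 7 could be chained together, with capture_frame, classify and send standing in for Python_Stream.py, Classifier.py and the Wi-Fi/3G transfer (their interfaces are assumptions):

```python
import json
import time

def main_loop(config, capture_frame, classify, send):
    """Acquisition -> classification -> transmission, one audio frame per iteration."""
    while True:
        frame = capture_frame(config["sampling_rate_hz"], config["bits_per_sample"])
        label, leq_db = classify(frame, config)  # noise-source label plus L_eq of the frame
        payload = {"timestamp": int(time.time()), "label": label, "leq_db": leq_db}
        send(json.dumps(payload))                # to the Sentilo / SmartSense server
        # The raw audio is discarded here: only the label and the L_eq leave the sensor.
```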

4.8. Preliminary Results of Data Acquisition

We have performed preliminary measurements in the street in the Netherlands at 32 and 48 ksps to evaluate the data acquisition subsystem. Figure 8a depicts the spectrogram and the A-weighted equivalent noise levels of 5 min of sound acquired at 48 ksps in the Netherlands. Independent sound sources have been identified and highlighted to show how they exhibit differences both in time and frequency. Spectrograms of these sounds captured with another sensor can be found in [,], where a class 2 microphone was used for the acquisition. The analysis of characteristic sounds such as a siren and a motorbike shows that they have the same spectral and temporal distribution, despite having been collected in different cities. Therefore, we have validated the ability to capture these sounds with the SPH0645LM4H low-cost microphone.
Figure 8. Spectrograms (top) and A-weighted equivalent noise levels (bottom) for different noise sources: (a) 5 min of audio captured at 48 ksps in the Netherlands, (b) slice 1 (motorbike), (c) slice 2 (motorbike and bus), (d) slice 3 (a siren, police or ambulance).
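A minimal sketch of the analysis behind Figure 8 (a spectrogram plus per-second levels), assuming the recording has been saved as a mono 16-bit WAV file and using scipy/numpy rather than the sensor's own code; absolute L_Aeq values would additionally require A-weighting and the microphone calibration:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, x = wavfile.read("street_recording.wav")        # hypothetical file name, mono 16-bit PCM assumed
x = x.astype(np.float64) / np.iinfo(np.int16).max   # normalise to [-1, 1]

# Time-frequency view used to tell the noise sources apart.
f, t, Sxx = spectrogram(x, fs=sr, nperseg=2048)

# Per-second equivalent level relative to full scale (dBFS).
levels = [
    10.0 * np.log10(np.mean(x[i:i + sr] ** 2) + 1e-12)
    for i in range(0, len(x) - sr, sr)
]
```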
Finally, this simple experiment shows that the system fulfills most of our original requirements:
  • Processing capacity: The sensor was capable of sampling, storing and transmitting data in this test. However, more intensive tests have to be conducted to analyse the performance of the whole recognition system, and more specifically of the sound recognition algorithms over a long period of time, in order to evaluate the results in real operation.
  • Microphone: The audio has been sampled at 48 ksps with 18 bits. Moreover, the sensor provides a flat frequency response in the bands commonly used with A-weighted filtering, between 31.5 Hz and 16,000 Hz, which includes most of the regular audible sounds that interest us. Finally, the sensor has a high SNR of 65 dB(A) [] and a quantization dynamic range of 108 dB.
  • Wireless communications: The data have been sent to the remote server through the Wi-Fi connection.
  • Re-configurable: The tests have been conducted at 32 and 48 ksps with a precision of 16 bits per sample.

5. Conclusions

In this work, we have reviewed several hardware architectures and devices used in WASN applications. These platforms have been analyzed to study their suitability for our application, considering their strengths and weaknesses.
The main goal of this work is to describe a low-cost configurable acoustic sensor that can be deployed rapidly and easily in any city to create smart WASNs. The sensor includes a quad-core ARM Cortex-A53 CPU, Wi-Fi and 3G connectivity, an IP65-rated box and an acoustic acquisition system. The latter supports a narrower frequency range (50 Hz to 15 kHz) than we initially aimed for, but it allows us to measure the L_eq and to run SmartSound algorithms [,] on most urban sounds we are interested in. The total cost of the sensor is only around 139€, and it is extremely easy to assemble and deploy. As such, it can be used to create truly smart WASNs, capable of identifying and more precisely measuring the different sources of noise in a city. With that, we hope to enable the development of a number of smart city services that exploit the rich information that can be extracted from sound.
Additionally, the sensor has been specially designed to be used with the SmartSense platform. As such, it allows a number of configurations that facilitate the development and testing of proof-of-concept applications of the SmartSound technology, enabling fast data collection and on-site testing in any environment. On-the-fly configurations include the sampling frequency, the data precision, the processing algorithms, and the choice of sending audio samples along with the classification of anomalous noise events.
This sensor does not provide a large storage capacity to save raw data, mainly because storage capacity was not a requirement, and we avoided adding a large one to keep the price as low as possible. Moreover, the system has not been provided with an external rechargeable battery, so it needs to be connected to the power grid uninterruptedly, which makes it suitable for urban and suburban scenarios, but not so much for remote sensing. Finally, the system has been conceived to transfer small quantities of data, e.g., the L_eq values and the labels of the acoustic events instead of raw data. As a result, it does not overload the WASN when deployed with many sensors and it better protects citizens' privacy in urban environments.
We plan to test the sensor in the near future in two different environments and with two different purposes:
  • Taking our university campus as a SmartCampus living lab, we first plan to install a small WASN just outside the student residence, which is next to the university restaurant, a basketball court and a football field. The aim is to test the sensors and their integration with the SmartSense platform in a nearby, yet real, environment by classifying the sources of noise around the student residence.
  • At a second stage, we plan to install the sensor in the center of Barcelona, in a neighborhood that has both high traffic and restaurants/bars. The goal is to analyse the noise at different times of the day and over a longer period of time, to discriminate how much of it is caused by traffic and how much by leisure activities. According to our contacts in Barcelona city council, complaints from neighbours in such areas are common, with noise originating from people and music in bars, people accessing leisure zones and venues, and traffic. However, it can be difficult for the council to create effective plans if they do not understand the distribution of these noise sources.
We expect that these tests will allow us to perform an initial evaluation of the suitability of the sensor for our purposes (i.e., the creation of smart WASNs and the rapid development of sound recognition applications). So far, we have evaluated the spectrogram of a 5 min recording collected at 48 ksps in the Netherlands, as depicted in Figure 8 and Figure 9, where the spectrograms of different noise sources differ both in time and frequency.
Figure 9. Spectrograms (top) and A-weighted equivalent noise levels (bottom) for different noise sources: (a) slice 4 (motorbike), (b) slice 5 (motorbike), (c) slice 6 (bus) and (d) slice 7 (motorbike).

Author Contributions

All authors have contributed significantly to this work. R.M.A.-P. was involved in the project conceptualization, technical support and writing; M.H. participated in the comparison with the state of the art, technical support and writing; L.D. was responsible for the project conceptualization, project coordination and writing. Finally, J.C. performed the sensor development and testing. All authors have read and agreed to the published version of the manuscript.

Funding

The research leading to these results has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 712949 (TECNIOspring PLUS) and from the Agency for Business Competitiveness of the Government of Catalonia.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AC: Alternating Current
AC-DC: Alternating Current to Direct Current
ANE: Anomalous Noise Event
ASIC: Application-Specific Integrated Circuit
BLE: Bluetooth Low Energy
CNOSSOS-EU: Common Noise Assessment Methods in Europe
DCS: Digital Cellular System
DSP: Digital Signal Processor
EC: European Commission
END: Environmental Noise Directive
EU: European Union
EGSM: Extended GSM
FPGA: Field Programmable Gate Array
GPRS: General Packet Radio Service
GSM: Global System for Mobile Communications
GPS: Global Positioning System
GTM: Research Group on Media Technology
HDD: Hard Disk Drive
HPC: High Performance Computing
IEC: International Electrotechnical Commission
IDEA: Intelligent Distributed Environmental Assessment
IoT: Internet of Things
ISO: International Organization for Standardization
IP: International Protection
MEMS: Microelectromechanical Systems
MPU: Microprocessor Unit
μC: Microcontroller
OS: Operating System
PCB: Printed Circuit Board
PCS: Personal Communications Service
PSU: Power Supply Unit
RISC: Reduced Instruction Set Computer
RTN: Road Traffic Noise
SIM: Subscriber Identity Module
SONYC: Sounds of New York City
UN: United Nations
WASN: Wireless Acoustic Sensor Network
WHO: World Health Organization

References

  1. Mundi, I. World Demographics Profile. Available online: http://www.indexmundi.com/world/demographics_profile.html (accessed on 20 June 2017).
  2. United Nations, Department of Economic and Social Affairs, Population Division. World Population 2015. Available online: https://esa.un.org/unpd/wpp/Publications/Files/World_Population_2015_Wallchart.pdf (accessed on 20 February 2016).
  3. Morandi, C.; Rolando, A.; Di Vita, S. From Smart City to Smart Region: Digital Services for an Internet of Places; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  4. Fira de Barcelona. Smart City Expo World Congress, Report 2015. Available online: http://media.firabcn.es/content/S078016/docs/Report_SCWC2015.pdf (accessed on 20 February 2016).
  5. Bouskela, M.; Casseb, M.; Bassi, S.; De Luca, C.; Facchina, M. La ruta hacia las Smart Cities: Migrando de una gestión tradicional a la ciudad inteligente. In Monografía del BID (Sector de Cambio Climático y Desarrollo Sostenible. División de Viviendas y Desarrollo Urbano); IDB-MG-454; Banco Interamericano de Desarrollo: Washington, DC, USA, 2016. [Google Scholar]
  6. Ripoll, A.; Bäckman, A. State of the Art of Noise Mapping in Europe; Internal Report; European Topic Centre: Barcelona, Spain, 2005. [Google Scholar]
  7. European Environment Agency. The European Environment, State and Outlook 2010 (Sythesis). 2010. Available online: http://www.eea.europa.eu/soer/synthesis/synthesis (accessed on 20 February 2016).
  8. Cik, M.; Lienhart, M.; Lercher, P. Analysis of Psychoacoustic and Vibration-Related Parameters to Track the Reasons for Health Complaints after the Introduction of New Tramways. Appl. Sci. 2016, 6, 398. [Google Scholar] [CrossRef]
  9. Cox, P.; Palou, J. Directive 2002/49/EC of the European Parliament and of the Council of 25 June 2002 Relating to the Assessment and Management of Environmental Noise-Declaration by the Commission in the Conciliation Committee on the Directive Relating to the Assessment and Management of Environmental Noise (END). Off. J. Eur. Communities 2002, 189, 2002. [Google Scholar]
  10. European Commission, Joint Research Centre—Institute for Health and Consumer Protection. Common Noise Assessment Methods in Europe (CNOSSOS-EU) for Strategic Noise Mapping Following Environmental Noise Directive 2002/49/EC; European Commission: Brussels, Belgium, 2012. [Google Scholar]
  11. World Health Organization. Environmental Noise Guidelines for the European Region; Technical Report; World Health Organization: Geneva, Switzerland, 2018. [Google Scholar]
  12. Cueto, J.L.; Petrovici, A.M.; Hernández, R.; Fernández, F. Analysis of the impact of bus signal priority on urban noise. Acta Acust. United Acust. 2017, 103, 561–573. [Google Scholar] [CrossRef]
  13. Morley, D.; De Hoogh, K.; Fecht, D.; Fabbri, F.; Bell, M.; Goodman, P.; Elliott, P.; Hodgson, S.; Hansell, A.; Gulliver, J. International scale implementation of the CNOSSOS-EU road traffic noise prediction model for epidemiological studies. Environ. Pollut. 2015, 206, 332–341. [Google Scholar] [CrossRef]
  14. Ruiz-Padillo, A.; Ruiz, D.P.; Torija, A.J.; Ramos-Ridao, Á. Selection of suitable alternatives to reduce the environmental impact of road traffic noise using a fuzzy multi-criteria decision model. Environ. Impact Assess. Rev. 2016, 61, 8–18. [Google Scholar] [CrossRef]
  15. Licitra, G.; Fredianelli, L.; Petri, D.; Vigotti, M.A. Annoyance evaluation due to overall railway noise and vibration in Pisa urban areas. Sci. Total Environ. 2016, 568, 1315–1325. [Google Scholar] [CrossRef]
  16. Bunn, F.; Zannin, P.H.T. Assessment of railway noise in an urban setting. Appl. Acoust. 2016, 104, 16–23. [Google Scholar] [CrossRef]
  17. Iglesias-Merchan, C.; Diaz-Balteiro, L.; Soliño, M. Transportation planning and quiet natural areas preservation: Aircraft overflights noise assessment in a National Park. Transp. Res. Part D Transp. Environ. 2015, 41, 1–12. [Google Scholar] [CrossRef]
  18. Gagliardi, P.; Fredianelli, L.; Simonetti, D.; Licitra, G. ADS-B system as a useful tool for testing and redrawing noise management strategies at Pisa Airport. Acta Acust. United Acust. 2017, 103, 543–551. [Google Scholar] [CrossRef]
  19. Progetto SENSEable PISA. Sensing The City. Description of the Project. Available online: http://senseable.it/ (accessed on 20 February 2016).
  20. Nencini, L.; De Rosa, P.; Ascari, E.; Vinci, B.; Alexeeva, N. SENSEable Pisa: A wireless sensor network for real-time noise mapping. In Proceedings of the EURONOISE, Prague, Czech Republic, 10–13 June 2012; pp. 10–13. [Google Scholar]
  21. Sevillano, X.; Socoró, J.C.; Alías, F.; Bellucci, P.; Peruzzi, L.; Radaelli, S.; Coppi, P.; Nencini, L.; Cerniglia, A.; Bisceglie, A.; et al. DYNAMAP—Development of low cost sensors networks for real time noise mapping. Noise Mapp. 2016, 3, 172–189. [Google Scholar] [CrossRef]
  22. Alías, F.; Alsina-Pagès, R.M.; Orga, F.; Socoró, J.C. Detection of Anomalous Noise Events for Real-Time Road-Traffic Noise Mapping: The Dynamap’s project case study. Noise Mapp. 2018, 5, 71–85. [Google Scholar] [CrossRef]
  23. Bello, J.P.; Silva, C.; Nov, O.; Dubois, R.L.; Arora, A.; Salamon, J.; Mydlarz, C.; Doraiswamy, H. SONYC: A System for Monitoring, Analyzing, and Mitigating Urban Noise Pollution. Commun. ACM 2019, 62, 68–77. [Google Scholar] [CrossRef]
  24. Zambon, G.; Benocci, R.; Bisceglie, A.; Roman, H.E.; Bellucci, P. The LIFE DYNAMAP project: Towards a procedure for dynamic noise mapping in urban areas. Appl. Acoust. 2016, 124, 52–60. [Google Scholar] [CrossRef]
  25. Bellucci, P.; Peruzzi, L.; Zambon, G. LIFE DYNAMAP project: The case study of Rome. Appl. Acoust. 2017, 117, 193–206. [Google Scholar] [CrossRef]
  26. Socoró, J.C.; Alías, F.; Alsina-Pagès, R.M. An Anomalous Noise Events Detector for Dynamic Road Traffic Noise Mapping in Real-Life Urban and Suburban Environments. Sensors 2017, 17, 2323. [Google Scholar] [CrossRef] [PubMed]
  27. Alsina-Pagès, R.M.; Navarro, J.; Alías, F.; Hervás, M. homesound: Real-time audio event detection based on high performance computing for behaviour and surveillance remote monitoring. Sensors 2017, 17, 854. [Google Scholar] [CrossRef] [PubMed]
  28. Basten, T.; Wessels, P. An overview of sensor networks for environmental noise monitoring. In Proceedings of the 21st International Congress on Sound and Vibration, ICSV 21, Beijing, China, 13–17 July 2014. [Google Scholar]
  29. EU. Directive 2002/49/EC of the European Parliament and the Council of 25 June 2002 relating to the assessment and management of environmental noise. Off. J. Eur. Communities 2002, L189, 12–25. [Google Scholar]
  30. Wang, C.; Chen, G.; Dong, R.; Wang, H. Traffic noise monitoring and simulation research in Xiamen City based on the Environmental Internet of Things. Int. J. Sustain. Dev. World Ecol. 2013, 20, 248–253. [Google Scholar] [CrossRef]
  31. Dekoninck, L.; Botteldooren, D.; Int Panis, L. Sound sensor network based assessment of traffic, noise, and air pollution. In Proceedings of the 10th European Congress and Exposition on Noise Control Engineering (Euronoise 2015), Maastricht, The Netherlands, 31 May–3 June 2015; pp. 2321–2326. [Google Scholar]
  32. Dekoninck, L.; Botteldooren, D.; Panis, L.I.; Hankey, S.; Jain, G.; Karthik, S.; Marshall, J. Applicability of a noise-based model to estimate in-traffic exposure to black carbon and particle number concentrations in different cultures. Environ. Int. 2015, 74, 89–98. [Google Scholar] [CrossRef]
  33. Fišer, M.; Pokorny, F.B.; Graf, F. Acoustic Geo-sensing Using Sequential Monte Carlo Filtering. In Proceedings of the 6th Congress of the Alps Adria Acoustics Association, Graz, Austria, 16–17 October 2014. [Google Scholar]
  34. Filipponi, L.; Santini, S.; Vitaletti, A. Data collection in wireless sensor networks for noise pollution monitoring. In Proceedings of the International Conference on Distributed Computing in Sensor Systems, Santorini Island, Greece, 11–14 June 2008; pp. 492–497. [Google Scholar]
  35. Moteiv Corporation. tmote Sky: Low Power Wireless Sensor Module. Available online: http://www.crew-project.eu/sites/default/files/tmote-sky-datasheet.pdf (accessed on 19 May 2020).
  36. Botteldooren, D.; De Coensel, B.; Oldoni, D.; Van Renterghem, T.; Dauwe, S. Sound monitoring networks new style. In Proceedings of the Acoustics 2011: Breaking New Ground: Annual Conference of the Australian Acoustical Society, Gold Coast, Australia, 2–4 November 2011; pp. 1–5. [Google Scholar]
  37. Domínguez, F.; Cuong, N.T.; Reinoso, F.; Touhafi, A.; Steenhaut, K. Active self-testing noise measurement sensors for large-scale environmental sensor networks. Sensors 2013, 13, 17241–17264. [Google Scholar] [CrossRef]
  38. Bell, M.C.; Galatioto, F. Novel wireless pervasive sensor network to improve the understanding of noise in street canyons. Appl. Acoust. 2013, 74, 169–180. [Google Scholar] [CrossRef]
  39. Paulo, J.; Fazenda, P.; Oliveira, T.; Carvalho, C.; Felix, M. Framework to monitor sound events in the city supported by the FIWARE platform. In Proceedings of the 46o Congreso Español de Acústica, Valencia, Spain, 21–23 October 2015; pp. 21–23. [Google Scholar]
  40. Intel® Edison Compute Module: Hardware Guide. Available online: https://www.intel.com/content/dam/support/us/en/documents/edison/sb/edison-module_HG_331189.pdf (accessed on 19 May 2020).
  41. Paulo, J.; Fazenda, P.; Oliveira, T.; Casaleiro, J. Continuos sound analysis in urban environments supported by FIWARE platform. In Proceedings of the EuroRegio2016/TecniAcústica, Porto, Portugal, 13–15 June 2016; Volume 16, pp. 1–10. [Google Scholar]
  42. Project, S. SONYC: Sounds of New York City. Available online: https://wp.nyu.edu/sonyc/ (accessed on 19 May 2020).
  43. Mydlarz, C.; Salamon, J.; Bello, J.P. The implementation of low-cost urban acoustic monitoring devices. Appl. Acoust. 2017, 117, 207–218. [Google Scholar] [CrossRef]
  44. Salamon, J.; Jacoby, C.; Bello, J.P. A Dataset and Taxonomy for Urban Sound Research. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; ACM: New York, NY, USA, 2014; pp. 1041–1044. [Google Scholar] [CrossRef]
  45. Nencini, L. DYNAMAP monitoring network hardware development. In Proceedings of the 22nd International Congress on Sound and Vibration, Florence, Italy, 12–16 July 2015; pp. 12–16. [Google Scholar]
  46. Malinowski, A.; Yu, H. Comparison of embedded system design for industrial applications. IEEE Trans. Ind. Inform. 2011, 7, 244–254. [Google Scholar] [CrossRef]
  47. Simon, G.; Maróti, M.; Lédeczi, Á.; Balogh, G.; Kusy, B.; Nádas, A.; Pap, G.; Sallai, J.; Frampton, K. Sensor network-based countersniper system. In Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, Baltimore, MD, USA, 3–5 November 2004; pp. 1–12. [Google Scholar]
  48. Segura-Garcia, J.; Felici-Castell, S.; Perez-Solano, J.J.; Cobos, M.; Navarro, J.M. Low-cost alternatives for urban noise nuisance monitoring using wireless sensor networks. IEEE Sens. J. 2015, 15, 836–844. [Google Scholar] [CrossRef]
  49. Hughes, J.; Yan, J.; Soga, K. Development of wireless sensor network using bluetooth low energy (BLE) for construction noise monitoring. Int. J. Smart Sens. Intell. Syst. 2015, 8, 1379–1405. [Google Scholar] [CrossRef]
  50. Can, A.; Dekoninck, L.; Botteldooren, D. Measurement network for urban noise assessment: Comparison of mobile measurements and spatial interpolation approaches. Appl. Acoust. 2014, 83, 32–39. [Google Scholar] [CrossRef]
  51. knowles.com. SPH0645LM4H-B (Datasheet). Available online: https://www.knowles.com/docs/default-source/model-downloads/sph0645lm4h-b-datasheet-rev-c.pdf?Status=Master&sfvrsn=c1bf77b1_4 (accessed on 3 June 2020).
  52. Mietlicki, F.; Gaudibert, P.; Vincent, B. HARMONICA project (HARMOnised Noise Information for Citizens and Authorities). In Proceedings of the INTER-NOISE and NOISE-CON Congress and Conference Proceedings, New York, NY, USA, 19–22 August 2012; Volume 2012, pp. 7238–7249. [Google Scholar]
  53. Ajuntament.Barcelona.Cat. Wifi Barcelona. Available online: https://ajuntament.barcelona.cat/barcelonawifi/es/welcome.html (accessed on 15 June 2020).
  54. ANSI. Degrees of Protection Provided by Enclosures (IP Code) (Identical National Adoption); ANSI: New York, NY, USA, 2004. [Google Scholar]
  55. Alsina-Pagès, R.M.; Alías, F.; Socoró, J.C.; Orga, F.; Benocci, R.; Zambon, G. Anomalous events removal for automated traffic noise maps generation. Appl. Acoust. 2019, 151, 183–192. [Google Scholar] [CrossRef]
  56. Alías, F.; Socoró, J.C. Description of anomalous noise events for reliable dynamic traffic noise mapping in real-life urban and suburban soundscapes. Appl. Sci. 2017, 7, 146. [Google Scholar] [CrossRef]
