Article

The Sonic Explorer: Assessing Angular Structure and Spatial Organization in Sonotopes

Department of Basic and Applied Sciences, Urbino University, 60129 Urbino, Italy
Appl. Sci. 2026, 16(8), 3619; https://doi.org/10.3390/app16083619
Submission received: 9 February 2026 / Revised: 20 March 2026 / Accepted: 2 April 2026 / Published: 8 April 2026

Abstract

Understanding the spatial organization of environmental sounds is essential for linking acoustic patterns with landscape structure and ecological processes. While ecoacoustics has made substantial progress in the temporal and spectral analysis of soundscapes, their directional and spatial components remain comparatively underexplored, particularly through low-cost and scalable approaches. Here we introduce the Sonic Explorer, a lightweight rotational sonic device designed to explore the angular structure and the spatial dynamics of sonotopes, defined as homogeneous spatial sonic units within a soundscape. The system is based on two opposed supercardioid microphones mounted on a rotating platform, coupled with a custom signal-processing framework that analyzes directional variations in sound intensity across frequency classes. Rather than aiming at sound pressure level measurements or full-sphere sound field reconstruction, the Sonic Explorer focuses on detecting spatial contrasts, dominant sound directions, and angular sound patterns relevant to ecological interpretation. Field tests conducted in a human-modified environment demonstrate the ability of the device to identify coherent directional acoustic structures associated with landscape configuration and dominant sound sources. The proposed approach provides a new practical and exploratory tool for landscape and soundscape research, enabling spatially explicit interpretations of sonic environments while maintaining low cost, portability, and adaptability.

1. Introduction

Beyond its long-standing relevance in biological research, sound directionality is increasingly recognized as a key dimension for understanding the structure, functioning, and equity of sonic environments in both natural and human-dominated landscapes. The direction from which an acoustic signal originates represents one of the most fundamental cues in ecoacoustics and in various fields of ecological surveying, such as in the study of bird communities, where it has traditionally been used to delineate breeding territories and describe the spatial behavior of vocal species [1,2,3,4,5,6].
The knowledge of sound direction within a landscape is crucial because sound is not merely a physical phenomenon but a spatial, ecological, and perceptual process that reflects both how environments function and how they are experienced by human and non-human organisms. Sound directionality enables the interpretation of ecological landscape structure by supporting the localization of vocal individuals and populations, the reconstruction of territories, the investigation of social interactions, and the assessment of reproductive dynamics [7]. In this sense, sound acts as a spatial indicator of biological distribution and ecological relationships. The knowledge of sound source direction further supports the distinction between biological and anthropogenic signals, localized versus diffuse noise sources, and local pressures versus distant disturbances. Directionality thus emerges as a key indicator for assessing environmental integrity, naturalness, and ecological health, providing essential information for conservation planning and landscape management actions [8]. Sound propagation and perception are strongly influenced by terrain morphology, vegetation structure, and the presence of natural and built elements [9,10,11,12].
For humans—and for many animal species—sound direction constitutes a primary component of experience, influencing spatial orientation, perceptions of safety, alertness, and well-being. From a human-centered perspective, the analysis of sound directionality integrates a psychoacoustic dimension, moving beyond sound pressure levels to consider how landscapes are perceived, interpreted, and inhabited [13].
Recent advances in ecoacoustics, particularly through the widespread adoption of autonomous recording units [14,15,16,17], have expanded the study of soundscapes by enabling the estimation of acoustically detectable individual density [18], the analysis of spatiotemporal dynamics [19,20,21,22], and detailed behavioral investigations [23]. However, while the temporal and spectral dimensions of soundscapes have been extensively explored, their spatial and directional components remain comparatively underdeveloped in relation to environmental monitoring, sustainable soundscape design, and biodiversity conservation.
The integration of sound source localization into ecoacoustic survey protocols therefore represents a methodological advancement of particular relevance for landscape ecology, biodiversity monitoring, and regenerative acoustic design [24]. Accurate spatial positioning of vocalizations not only improves detection capabilities but also enables the extraction of richer ecological information, including territorial organization, social interactions, reproductive strategies, and responses to environmental and trophic variability [25]. Such spatially explicit information is essential for linking acoustic patterns to landscape configuration and underlying ecological processes. At the same time, incorporating sound directionality fosters closer integration between biological data and the structural, morphological, and anthropogenic features of landscapes [26,27]. This integration supports a more comprehensive understanding of how environmental mosaics influence the distribution of soniferous species and facilitates the development of spatially explicit ecological models for biodiversity prediction and management, particularly in heterogeneous and rapidly changing environments [28,29,30,31].
To fully exploit the ecological and perceptual information embedded in sound directionality, directional data must be interpreted within spatial frameworks capable of organizing acoustic information at the landscape scale. While sound source localization provides point-based and vectorial information on the origin of acoustic signals, its full ecological significance emerges when such directional cues are integrated to describe how acoustic space is structured, connected, and differentiated [13].
Within this spatially explicit interpretation of soundscapes, directionality becomes a foundational element for identifying coherent acoustic areas shaped jointly by sound sources and landscape configuration. This conceptual shift—from individual sonic events to spatially organized acoustic units—establishes the theoretical conditions for the operational use of the sonotope concept.
The introduction of the sonotope concept [32]—defined as an acoustically homogeneous spatial unit within a broader soundscape—has provided a powerful lens for interpreting the spatial organization of environmental sounds. Empirical studies have shown that assemblages of biotic, abiotic, and anthropogenic signals form complex sonic mosaics [33], whose spatial configuration reflects ecological community structure, habitat connectivity, and landscape heterogeneity [34,35,36]. Sonotopes may vary across spatial scales (e.g., adjacent habitats) and temporal scales (diurnal, nocturnal, and seasonal), offering insights into acoustic diversity, ecosystem health, anthropogenic pressure, and actual or potential ecological connectivity [32,37].
In turn, the sonotope is closely related to the notion of the sonic field, defined as the ensemble of acoustic signals emitted by stationary or mobile sources, whose perception depends on both the distance and angle of incidence relative to the receiver. Sonic fields are inherently dynamic and are shaped not only by source composition but also by environmental morphology, atmospheric conditions, and the material properties of landscapes.
Characterizing a sonic field therefore extends beyond simple sound pressure measurements and requires the analysis of spatial, directional, and perceptual components of sound, enabling the reconstruction of the sonic architecture of environments with direct ecological, behavioral, and human-centered implications. Despite the availability of advanced technologies for detecting and analyzing sonic fields, their systematic application in ecoacoustics remains limited. For example, ambisonic and omnidirectional microphone arrays enable researchers to analyze sound directionality, source localization, and spatial dynamics within complex environments. However, these technologies are still primarily used in experimental or artistic contexts and are only marginally integrated into large-scale ecological monitoring and analytical frameworks. Methodological complexity, high spatiotemporal variability of signals, and the need for interdisciplinary integration—spanning acoustics, bioacoustics, landscape ecology, computational modeling, and psychoacoustics—have hindered the development of standardized and scalable protocols. Nevertheless, sonic field analysis represents a promising frontier for advancing ecological theory and practice, providing empirical support for the acoustic habitat hypothesis [38] and for more inclusive, resilient, and equitable approaches to environmental sound assessment [39].
Sound source localization is therefore essential for linking landscape configuration with the spatial distribution of soniferous species and for enabling spatially explicit analytical and predictive models that integrate geographic positioning with inter-source spatial relationships [38]. In this context, sound mapping emerges as a key methodological innovation for ecoacoustics, urban soundscape research, sustainable acoustic planning, and a broader range of applied geosciences.
As previously introduced, two main methodologies are currently used in sonic field investigations: omnidirectional microphone arrays [17,18,19,40,41] and ambisonic microphones [42,43,44]. While microphone arrays can achieve high spatial resolution [45,46,47], they require substantial logistical effort, large numbers of synchronized devices, and precise deployment strategies, limiting their applicability in complex natural environments and heterogeneous urban contexts [48]. Ambisonic microphones enable three-dimensional sound field reconstruction but remain costly, sensitive to environmental conditions, computationally demanding, and constrained by the lack of standardized ecoacoustic protocols [49,50].
In this study, we explore an alternative exploratory approach to sonic field investigation based on directional supercardioid microphones, offering a practical and scalable solution for long-term and adaptive sonotope monitoring. Supercardioid microphones provide strong directionality with reduced structural complexity, lower sensitivity to adverse weather conditions, and limited amplification of background turbulence [51,52,53], making them particularly suitable for low-impact and regenerative monitoring strategies.
Although it is generally assumed that animal species in landscapes characterized by varying degrees of spatial heterogeneity tend to concentrate in areas with higher resource availability [54], the relationship between species-produced acoustic signals and landscape spatial structure remains insufficiently explored. This gap constrains the interpretation of sonic patterns as indicators of the ecological processes organizing landscapes. Addressing this limitation is therefore essential to move from descriptive acoustic surveys toward a landscape ecology-oriented approach capable of supporting spatially explicit management of resources, habitats, and biological communities [55,56].
Building on these considerations, we developed and tested the Sonic Explorer, a sonic device composed of two opposed supercardioid microphones mounted on a rotating platform and integrated with a custom software framework for quantitative and qualitative sound analysis. The system is designed to investigate the spatial dimension of a soundscape through directional scanning, with particular emphasis on the detection and characterization of sonotopes and their relationship with landscape configuration (Figure 1). After frequency- and intensity-based filtering, the spatial arrangement of detected signals enables the reconstruction of original sonic fields and supports integrated acoustic, spatial, and visual analyses. The Sonic Explorer is thus proposed as a versatile experimental tool for ecoacoustics research and future integration with computational, AI-based, and human-centered acoustic design frameworks.

2. Materials and Methods

2.1. Sonic Explorer Prototype

The Sonic Explorer prototype was assembled using commercially available components configured to acquire directional acoustic information through controlled rotation.
Rotating Platform: The system is mounted on an electric rotating plate (YWNYT™, diameter 20 cm, maximum load 8 kg), capable of completing a full 360° clockwise rotation in approximately 14 s, corresponding to a rotational speed of 0.448 rad·s⁻¹ (≈0.071 rev·s⁻¹). The rotation speed was constant during all measurements.
Microphones: Two supercardioid condenser microphones (Movo™ X3-II, San Francisco, CA, and Edina, MN, USA) were mounted coaxially on the rotating platform, with their acoustic axes oriented in opposite directions (180° apart). The distance between the acoustic foci of the two microphones was 68 cm, corresponding to the minimum separation allowed by the physical dimensions of the microphones while maintaining collinear alignment. Supercardioid microphones were selected for their narrower frontal pickup angle and stronger lateral attenuation compared to conventional cardioid microphones. Structurally, a supercardioid microphone consists of a pressure-sensitive diaphragm coupled with an interference tube featuring lateral apertures (“slots”) (Figure 2).
These apertures allow sound waves arriving from different angles to reach the diaphragm with frequency-dependent phase delays, resulting in constructive or destructive interference. This mechanism shapes the characteristic supercardioid polar pattern, with strong frontal sensitivity, attenuated lateral response, and a residual rear lobe. Manufacturer specifications for the Movo™ X3-II microphones are reported in Figure 3 and Table 1.
Microphone Sensitivity Correction: The relative sensitivity of the two microphones was assessed over 15 h of recording in a low-noise indoor environment, with the microphones mounted on the rotating platform. A Root Mean Square (RMS) sensitivity difference of 5.05 ± 0.30 dB was detected between the two channels. To compensate for this mismatch, a channel-specific gain correction was applied to all recordings using the routine Microphone_correction, described in detail in Supplementary Material.
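The Supplementary routine itself is not reproduced here; the following sketch illustrates the underlying idea of a channel-specific gain correction. The function names (`rms_db_offset`, `correct_channel_gain`) and the stereo array layout (samples × 2 channels) are illustrative assumptions, not the actual Microphone_correction code.

```python
import numpy as np

def rms_db_offset(left, right, eps=1e-12):
    """Return the RMS level difference (dB) of the left channel
    relative to the right channel."""
    rms_l = np.sqrt(np.mean(left ** 2))
    rms_r = np.sqrt(np.mean(right ** 2))
    return 20.0 * np.log10((rms_l + eps) / (rms_r + eps))

def correct_channel_gain(stereo, offset_db):
    """Attenuate the hotter channel so both channels match
    (offset_db = left minus right level, in dB)."""
    out = stereo.astype(np.float64).copy()
    gain = 10.0 ** (-abs(offset_db) / 20.0)
    if offset_db > 0:   # left channel is hotter: attenuate left
        out[:, 0] *= gain
    else:               # right channel is hotter: attenuate right
        out[:, 1] *= gain
    return out
```

With the 5.05 dB mismatch reported above, the hotter channel would be scaled by 10^(−5.05/20) ≈ 0.56 before any further processing.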

2.2. Recorder

Audio recordings were acquired using a Zoom F3 field recorder (Zoom Corporation®, Chiyoda-ku, Tokyo, Japan, 2022) operating in stereo mode and equipped with 32-bit floating-point recording technology. This format enables the capture of signals over an extremely wide dynamic range without the need for manual gain adjustment, minimizing both clipping and loss of low-amplitude signals.
The Zoom F3 employs dual analog-to-digital conversion paths per channel (optimized for low- and high-amplitude signals), which are internally combined into a single 32-bit floating-point file. This architecture ensures stable recordings under highly variable acoustic conditions, such as those encountered during unattended or long-term field measurements.
Channel sensitivity parity was evaluated using sinusoidal test tones at 250, 500, 1000, 2000, 4000, and 8000 Hz. Each tone was played for 10 s from a MacBook Air and fed into the recorder using a balanced 3.5 mm TRS to dual XLR stereo cable (Tisino™, 2 m, Dexinuo, Hangzhou, China). RMS and peak levels were compared across channels, revealing no significant differences.
Remote operation was enabled via a Zoom BTA-1 Bluetooth adapter, allowing the control of the recorder from a smartphone at distances up to approximately 20 m.

2.3. Acoustic Marker (Buzzer)

An electronic buzzer (Colexy DC 3–24 V, nominal level 95 dB SPL) was used as an acoustic marker to indicate the start and end of each full rotation. The buzzer was powered by four rechargeable AA batteries (1.5 V each) and activated via a micro-switch (Daokai 20PCCS, 1 A nominal current) triggered when microphone #1 reached the 0° position.
The buzzer emitted a short acoustic signal (≈0.344 s) with a dominant frequency band between 3287 and 3327 Hz, which was clearly distinguishable from environmental sounds and used for the temporal segmentation of the recordings. The sound produced by the buzzer was assumed to be a negligible disturbance to soniferous species, limited to a few meters around the device.
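A minimal sketch of how such a narrow-band acoustic marker can be located in a recording (the actual beep detector belongs to the Supplementary routines; `detect_beeps` and its parameters are hypothetical). The default band is slightly wider than the nominal 3287–3327 Hz range so that at least one FFT bin always falls inside it, regardless of the chosen frame length.

```python
import numpy as np

def detect_beeps(signal, sr, f_lo=3250.0, f_hi=3350.0,
                 frame_len=1024, hop=512, rel_threshold=0.5):
    """Return start-sample indices of frames whose energy in the buzzer
    band exceeds rel_threshold times the maximum band energy."""
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
    band = (freqs >= f_lo) & (freqs <= f_hi)   # bins inside the buzzer band
    starts, energies = [], []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * window
        spec = np.abs(np.fft.rfft(frame)) ** 2
        starts.append(start)
        energies.append(spec[band].sum())
    energies = np.asarray(energies)
    mask = energies > rel_threshold * energies.max()
    return [s for s, m in zip(starts, mask) if m]
```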

2.4. Power Supply and Support Structure

The rotating platform and recorder were powered by two USB-C PD power banks (INIU™ (Shenzhen Simple Life Technology Co., Ltd., Shenzhen, China), 10,000 mAh), while the buzzer was independently powered by rechargeable AA batteries (EBL™, 2300 mAh).
The system was mounted on a wooden tripod originally designed to support a theodolite. The tripod was carefully leveled prior to each deployment to ensure mechanical stability and accurate rotation. Wood was preferentially used for structural elements to minimize vibration transmission.
Before each recording session, the scanner was aligned along the south–north axis, with 0° corresponding to south and 180° to north. The prototype configuration is shown in Figure 4.

2.5. Principles of Operation of the Sonic Explorer

A single supercardioid microphone provides high directivity but exhibits intrinsic ambiguity due to its residual rear lobe and frequency-dependent side lobes. As a result, signal intensity alone cannot reliably discriminate whether a sound originates from the frontal or rear sensitivity region (Figure 5).
The Sonic Explorer addresses this limitation by employing two supercardioid microphones oriented 180° apart. When a sound source aligns with the frontal lobe of one microphone, it simultaneously aligns with the rear lobe of the other, producing a marked intensity contrast between the two channels. This complementary configuration reduces directional ambiguity and improves robustness in reverberant or acoustically complex environments.
Additionally, the differential comparison between the two channels enhances the contrast between localized directional signals and diffuse background noise. Because each angular position is sampled simultaneously by two opposing polar patterns, the effective acquisition time for a full angular profile is halved compared to a single rotating microphone.
The 68 cm separation is imposed by the physical size of the microphones: it is the smallest distance at which they can remain perfectly collinear. The configuration therefore represents a compromise between keeping the microphones aligned but spaced apart and placing them closer together at the cost of alignment. With this configuration, the combined polar response covers the entire horizontal plane, producing an “octave-petal-like” sensitivity pattern.
As the platform rotates from 0° to 360°, the relative intensity difference between the two microphones varies as a function of source direction. Let θ be the azimuth of the sound source relative to a fixed reference and ϕ the instantaneous rotation angle of the platform. The relative angles perceived by the microphones are θ − ϕ for microphone #1 and θ − ϕ + 180° for microphone #2. The total sensitivity, S, is expressed as:
S(θ, ϕ) = M1(θ − ϕ) − M2(θ − ϕ + 180°)
where M1 and M2 denote the responses of microphone #1 and microphone #2, respectively. Peaks in S correspond to the alignment between the sound source and the frontal lobe of one microphone.
In summary, the combination of controlled rotation, complementary directional patterns, and differential signal analysis enables the reconstruction of sound source direction within the horizontal plane.
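To illustrate, S(θ, ϕ) can be sketched with a first-order supercardioid model. The pattern coefficient a ≈ 0.37 and the function names are assumptions for this sketch; the actual Movo X3-II response (interference-tube design, frequency-dependent lobes) is more complex than a first-order pattern.

```python
import numpy as np

SUPERCARDIOID_A = 0.37  # first-order supercardioid coefficient (assumed)

def polar_response(alpha_deg, a=SUPERCARDIOID_A):
    """First-order supercardioid magnitude response at angle alpha (degrees):
    |a + (1 - a) cos(alpha)|, i.e. full frontal sensitivity, attenuated
    sides, and a small residual rear lobe."""
    alpha = np.deg2rad(alpha_deg)
    return np.abs(a + (1.0 - a) * np.cos(alpha))

def total_sensitivity(theta_deg, phi_deg):
    """S(theta, phi) = M1(theta - phi) - M2(theta - phi + 180 deg):
    positive when the source faces microphone #1, negative when it
    faces microphone #2."""
    rel = theta_deg - phi_deg
    return polar_response(rel) - polar_response(rel + 180.0)
```

In this model the contrast peaks (at ±0.74 for a = 0.37) whenever the source aligns with the frontal lobe of either microphone, mirroring the intensity contrast exploited by the rotating pair.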

2.6. Dedicated Software

Five distinct routines were specifically developed for data processing. In the future, it is desirable that these routines be integrated into a single package to enhance computational automation. Currently, they are presented separately with minimal graphical design. Developed in an open-source environment, they can be independently verified and implemented by interested users. The routines, written in Python 3.11, are fully described in Supplementary Material and can be directly copied and executed in a Python environment.
These routines process the data through the following steps (Figure 6):
  • Microphone correction—routine name: Microphone_correction (used when the two microphones exhibit different sensitivities to the same signal).
  • Splitting the stereo recordings into 30 min files—routine name: File_division.
  • Extracting sound segments to be analyzed (each approximately 12.75 s long)—routine name: Signal_segmentation.
  • Identifying the direction of the signals—routine name: Sound_direction.
  • Combining the results from the various segments and presenting them in polar diagrams—routine name: Polar_diagram.
In practice, the WAV files, recorded in stereo format on the Zoom F3, with a standard duration of 93 min each, were first subjected to microphone correction (Microphone_correction) and then divided into 30 min segments using the File_division routine to facilitate processing. From each 30 min file, the Signal_segmentation routine extracts 130 segments (here referred to as “segment files”), each approximately 14 s long and delimited by the buzzer signal. The routine can also process files shorter than 30 min, generating a number of segments proportional to the file duration. Subsequently, the Sound_direction routine removes all buzzer signals from both stereo channels, yielding a usable signal length of approximately 12.75 s.
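A minimal sketch of the inter-beep segmentation logic, assuming beep start/end times have already been detected; the function name, the (start, end) tuple representation, and the margin handling are illustrative, not the actual Signal_segmentation code.

```python
import numpy as np

def segment_between_beeps(signal, sr, beep_times, margin=0.20):
    """Cut the recording into segments spanning the gaps between consecutive
    beeps. beep_times is a list of (start_s, end_s) tuples in seconds; a
    safety margin (seconds) is trimmed from both ends of each segment."""
    segments = []
    for (s0, e0), (s1, e1) in zip(beep_times[:-1], beep_times[1:]):
        start = int((e0 + margin) * sr)   # just after the first beep
        stop = int((s1 - margin) * sr)    # just before the next beep
        if stop - start > 0:
            segments.append(signal[start:stop])
    return segments
```

Segments whose beeps are missing, overlapping with loud environmental sounds, or too close together would simply be rejected upstream, as described in the text.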
The segmentation of the files was performed using the beep recorded by microphone #1. The buzzer is triggered when microphone #1 passes the 0° (South) position. For each interval of approximately 14 s between two consecutive beeps, the selected segment extends from the end of the first beep to the beginning of the next one, with an additional safety margin applied to ensure precise delimitation. The resulting 130 audio segments (for 30 min files), with the beeps removed, are then exported in WAV format. Segments are discarded if there are too few beeps, if they are excessively short, or if the beeps overlap with environmental sounds, as these conditions may lead to inconsistent segment durations. In particular, the overlap between the beep signal and biological or environmental sounds may prevent the algorithm from correctly identifying the beep pattern. In such (rare) cases, the affected segment is excluded from the analysis. In more detail, the Sound_direction routine processes the signals of each segment through the following steps:
  • Edge cleaning: a 0.20 s safety margin is applied to remove edge artifacts.
  • Frame segmentation: stereo audio is divided into temporal frames. Frame durations were tested at 0.1 s and 0.3 s.
  • Energy analysis: the difference in energy between the two stereo channels is used to determine the direction of the signal.
  • Frequency estimation: the dominant frequency band is identified using an STFT procedure.
  • Circular statistics: directional data are analyzed to compute the mean direction, mean resultant length, circular variance, and circular standard deviation.
The mean direction was calculated by transforming the angles into unit vectors in the complex plane, where C is the mean cosine component, and S is the mean sine component:
C = (1/n) Σ cos(θᵢ),   S = (1/n) Σ sin(θᵢ)   (sum over i = 1, …, n)
The mean direction is the angle of the mean resultant vector:
θ̄ = atan2(S, C)
The mean resultant length, R, represents how strongly the data are concentrated around the mean direction, and it ranges from 0 to 1.
R = √(C² + S²)
The circular variance, V, measures the dispersion of the angles around the mean direction. It is defined as:
V = 1 − R
The circular standard deviation, σ, is defined as
σ = √(−2 ln R)
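The four circular statistics can be collected into a single routine. This sketch follows the equations directly (numpy assumed) and is not the actual Sound_direction implementation; the mean direction is returned in degrees in [0, 360).

```python
import numpy as np

def circular_stats(angles_deg):
    """Mean direction (degrees), mean resultant length R, circular
    variance V = 1 - R, and circular standard deviation sigma (radians)
    of a sample of angles given in degrees."""
    theta = np.deg2rad(np.asarray(angles_deg, dtype=float))
    C = np.mean(np.cos(theta))           # mean cosine component
    S = np.mean(np.sin(theta))           # mean sine component
    mean_dir = np.rad2deg(np.arctan2(S, C)) % 360.0
    R = np.hypot(C, S)                   # sqrt(C^2 + S^2)
    V = 1.0 - R
    sigma = np.sqrt(-2.0 * np.log(R)) if R > 0 else np.inf
    return mean_dir, R, V, sigma
```

Note that these statistics correctly handle the wrap-around at 0°/360°, which an arithmetic mean of angles would not.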
The results were saved in an Excel file, containing:
  • The relative acquisition time (within the interval 0–12.75 s, after the portion of the signal occupied by the buzzer has been removed);
  • The direction of the signal of maximum intensity, estimated by comparing the two microphones;
  • The directions associated with microphone #1 and microphone #2;
  • The maximum energy value detected from the comparison of the two microphones;
  • The energy corresponding to microphone #1 and microphone #2;
  • The indication of the microphone with the highest energy;
  • The dominant frequency bands, divided into classes of 1500 Hz each.
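As an illustration of the last item, the dominant 1500 Hz class of a single frame can be estimated from its spectrum; `dominant_band` is a hypothetical helper written for this sketch, not the routine's actual code.

```python
import numpy as np

def dominant_band(frame, sr, band_width=1500.0):
    """Return (class_index, (f_lo, f_hi)) of the 1500 Hz-wide band holding
    the most spectral energy in this frame."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    n_bands = int(np.ceil(freqs[-1] / band_width))
    energy = np.zeros(n_bands)
    for f, p in zip(freqs, spec):
        # accumulate each bin's power into its 1500 Hz class
        energy[min(int(f // band_width), n_bands - 1)] += p
    idx = int(np.argmax(energy))
    return idx, (idx * band_width, (idx + 1) * band_width)
```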
For each original 30 min WAV file, the results from the 130 segments are consolidated into a summary and exported as an Excel workbook or as a CSV (Comma-Separated Values) file.
Polar diagrams (energy/direction) were generated for each 12.75 s segment file, corresponding to a 360° rotation. The energy, appropriately normalized, is represented with the maximum value near the center of the graph, decreasing progressively toward the periphery. For each segment, the prevailing directions are shown, distinguished by symbols (a dot or a cross) corresponding to each microphone. A summary spatial statistic appears in the second Excel table, and the third table reports the energy of all frequency bands. Finally, a total summary Excel file, “Summary-Total”, is created, which compiles all parameters of each segment into a single file. This summary file is used by the Polar_diagram routine, described below.
The Polar_diagram routine processes the data contained in the “Summary-Total” file, which includes all data processed from each segment. This allows the generation of a series of distinct polar diagrams for 30 min intervals according to the selected signal intensity level.
Intensity filtering can thus be applied to distinguish low-intensity signals (originating from more distant sources) from high-intensity signals (attributable to closer sources). It should be emphasized that no standardized procedure exists for applying such filtering; the approach proposed here should therefore be considered an experimental methodology, provided as guidance for data interpretation.
For multiple 30 min samples, all data from a day or multiple days can be normalized by identifying the maximum and minimum values. To exclude signals too weak for reliable interpretation, a minimum energy threshold is defined, and signals below this threshold are disregarded. Five intensity classes were arbitrarily chosen for the case studies presented below, although this number can be adjusted. For example, using 10 intensity classes would allow for a finer observation of variations.
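The normalization, thresholding, and binning steps described above can be sketched as follows; the function name, the −1 sentinel for discarded signals, and the 5% relative threshold are illustrative choices for this sketch, not part of the published routines.

```python
import numpy as np

def intensity_classes(energies, n_classes=5, min_rel=0.05):
    """Normalize energies to [0, 1] over the whole data set, drop values
    below the min_rel threshold, and bin the rest into n_classes
    equal-width intensity classes (0 = weakest kept, n_classes - 1 =
    strongest). Dropped values are marked with -1."""
    e = np.asarray(energies, dtype=float)
    lo, hi = e.min(), e.max()
    norm = (e - lo) / (hi - lo) if hi > lo else np.zeros_like(e)
    classes = np.full(e.shape, -1, dtype=int)
    keep = norm >= min_rel
    classes[keep] = np.minimum((norm[keep] * n_classes).astype(int),
                               n_classes - 1)
    return classes
```

Switching from 5 to 10 classes, as suggested in the text, only requires changing `n_classes`.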
The filtered data are displayed in a summary polar plot, representing the output of the Polar_diagram routine and encompassing all data from all segments. In this plot, angles indicate signal directions, dots represent the intensity of individual signals, and their colors correspond to the dominant frequency band. The diagram also indicates the cardinal points (S, W, N, and E). Unlike the polar diagrams for individual segments, signals in this summary polar plot are not distinguished by microphone.
The Polar_diagram routine calculates circular statistical parameters:
Weighted mean direction, circular variance (how dispersed the angles are), circular standard deviation, mean energy, and its corresponding radial position are displayed alongside the graph, and the mean direction is highlighted with a green arrow. Each graph is automatically saved as a PNG file in a dedicated folder.
During execution of the Polar_diagram routine, the user can choose between two operating modes: (i) batch mode, in which all graphs are generated and saved sequentially, and (ii) manual mode, in which the user manually decides whether to proceed to the next graph.
The collected data may contain numerous outliers, both low and high, which can complicate normalization. To address this, a Winsorizing procedure—a statistical transformation that limits extreme values to reduce the impact of spurious outliers—was applied. The technique is named after the engineer and biostatistician Charles P. Winsor (1895–1951). It does not remove extreme values entirely; instead, it resets them to specified percentiles of the data. For example, a 90% Winsorization sets all data below the 5th percentile to the value of the 5th percentile and all data above the 95th percentile to the value of the 95th percentile. Winsorized estimators are generally more robust to outliers than standard estimators, although alternative approaches, such as trimming, also exist.
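A 90% Winsorization as described can be written in a few lines (numpy assumed; this mirrors the idea rather than the exact routine used in the study):

```python
import numpy as np

def winsorize(values, lower_pct=5.0, upper_pct=95.0):
    """Clamp values below/above the given percentiles to those percentile
    values; the defaults implement a 90% Winsorization."""
    v = np.asarray(values, dtype=float)
    lo = np.percentile(v, lower_pct)
    hi = np.percentile(v, upper_pct)
    return np.clip(v, lo, hi)
```

Unlike trimming, no observation is discarded: the sample size, and hence the number of directional readings per segment, is preserved.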

2.7. Study Area and Field Deployment

The Sonic Explorer was experimentally deployed at the Ortolano Rural Sanctuary [55], an area characterized by minimal anthropogenic disturbance, which provides suitable habitat for a rich animal community. This site has previously been used for ecoacoustic and behavioral studies, making it an ideal setting for testing the scanner [56].

2.8. Materials

The functionality of the hardware (rotating platform, buzzer, and Zoom F3) was tested continuously for 10 days at station no. 3 (Figure 7) in August 2025 without revealing any malfunctions.
The reliability of the Sonic Explorer’s directionality was evaluated according to the experimental protocol described in detail in File S1 of the Supplementary Materials. The results show that in 98.33% of cases, the detected direction aligns with the angle between the loudspeaker and the Sonic Explorer within ±1 step (8.37°). This angle is obtained by dividing 360° by the 43 assigned angular intervals. The number of intervals could be increased to achieve a finer resolution, but this would generate a larger amount of data for processing and ultimately depends on the objectives of the research.
In addition, the recording sessions were conducted under the most favorable environmental conditions. Rainy days were carefully excluded, as the prototype is not yet waterproof; the exposure to rain could cause hardware malfunctions and compromise recording quality. To mitigate these risks, the installation of a small, lightweight garden gazebo is recommended to protect the scanner from rain, nighttime dew, and direct sunlight, which—especially during the summer time—could induce thermal stress on the device.
The directionality of signals emitted by nocturnal insects producing persistent sounds was tested using a 90 min recording (captured on 26 August 2025, at 10:06 pm, near a small vineyard; Figure 6, site no. 2), where numerous tree crickets (Oecanthus pellucens, Insecta, Orthoptera) were present. From this recording, the second 30 min segment—particularly rich in biological signals—was extracted and designated as night #2.
The variation in sound directionality between nighttime and daytime across different intensity classes was analyzed over a 24 h period—from 06:00 on August 30 to 06:00 on 31 August 2025—using recordings from files part_001 to part_024 collected at site 3 (Figure 7).

3. Results

3.1. Case Study No. 1: Segment 23, Part#2

From the segment shown in Figure 8, the presence of songs and calls of the red-billed leiothrix (Leiothrix lutea, Aves, Passeriformes) is evident, with microphone #2 dominating over microphone #1 across six 0.3 s frames at 270°.
This sequence is followed by a predominant call of the red-billed leiothrix, captured by microphone #2 at 360°, and subsequently by a song primarily detected by microphone #1 between 270° and 360°, lasting approximately eight 0.3 s frames. All other signals were very weak and could be attributed to either microphone.
In Figure 9, the polar diagram constructed using the Sound_direction routine is shown. Signal directions are indicated according to the dominant microphone. The arrow represents the average direction and intensity of all signals, and the symbol colors indicate the dominant frequency band.
In Figure S1, the footprint of cumulative energy for each frequency bin is shown over a 12.75 s segment.
In Table S3, the data are presented in a numerical form for each 0.3 s frame and include the following parameters: time progression of each frame, orientation of the maximum energy (alternately attributed to one or the other microphone), energy values captured by each microphone, direction of the signals for both the dominant and individual microphones, and the corresponding dominant frequency band. Table 2 reports the mean direction, mean resultant length, circular variance, and circular standard deviation.
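The circular statistics reported in Table 2 (mean direction, mean resultant length, circular variance, and circular standard deviation) follow standard definitions from directional statistics. A minimal Python sketch, with an illustrative function name not taken from the paper's routines, is:

```python
import math

def circular_stats(angles_deg):
    """Mean direction, mean resultant length, circular variance and
    circular standard deviation for a set of bearings in degrees."""
    rad = [math.radians(a) for a in angles_deg]
    c = sum(math.cos(a) for a in rad) / len(rad)
    s = sum(math.sin(a) for a in rad) / len(rad)
    r = math.hypot(c, s)                       # mean resultant length, 0..1
    mean_dir = math.degrees(math.atan2(s, c)) % 360.0
    circ_var = 1.0 - r                         # circular variance
    circ_sd = (math.degrees(math.sqrt(-2.0 * math.log(r)))
               if r > 0 else float("inf"))     # circular standard deviation
    return mean_dir, r, circ_var, circ_sd

mean_dir, r, var, sd = circular_stats([80.0, 90.0, 100.0])
print(round(mean_dir, 1), round(r, 3))  # 90.0 0.99
```

A mean resultant length close to 1 (low circular variance) indicates signals concentrated around one bearing, as in the case studies above.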

3.2. Case Study No. 2: Segment 25, Part#2

A call of the carrion crow (Corvus corone cornix) is represented at 180° for microphone #1, and the song of the red-billed leiothrix is represented at 270° for microphone #1 (Figure 10 and Figure 11). All remaining signals are low in intensity and contain little information; nevertheless, they were assigned to one of the two microphones. Table S2 reports the parameters related to the orientation of the signals and their intensity. Table 3 reports the mean direction, mean resultant length, circular variance, and circular standard deviation.

3.3. Case Study No. 3: Segment 58, Part#2

An alarm call of a blackcap (Sylvia atricapilla, Aves, Passeriformes) is shown in Figure 12. The signal is almost equally assigned to both microphones due to its persistence, remaining active throughout the full rotation of the two microphones, which therefore alternate in recording the maximum intensity. Table S3 reports the parameters related to signal orientation and intensity. For each segment, the energy value of each 3.3 Hz-wide frequency bin is also reported, as shown in Figure 13 and Table 4.

3.4. Case Study No. 4: File Part #2

The overall polar diagram of all 130 segments contained in file “part #2” was generated using the Polar_diagram routine. The signals shown in this polar diagram can be filtered by energy level into different intensity classes; in our case, five classes were adopted (Figure 14).
In addition, the Polar_diagram routine, which reads the combined data from all segments, can query the archive and easily verify the position of each point, tracing it back to the segment from which that particular signal originates. The data are then normalized and Winsorized to reduce the influence of outliers. The discrete positioning of points in the polar diagram, aligned along equally spaced rays, results from the temporal discretization introduced by the windowed analysis, with each window lasting 0.3 s. If a higher angular resolution (i.e., denser and more evenly distributed points) is desired, two options are available:
  • Reducing the window duration within the routine (e.g., 0.1 s instead of 0.3 s), which generates more frames and smaller angular steps. A comparison between the two resolutions is presented later.
  • Overlapping windows, in which each frame starts before the previous one ends (typical of STFT-based audio analyses).
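A minimal sketch of how window duration and hop size determine the number of frames per rotation is given below. The function name is illustrative, and unlike the paper's routines it counts only complete windows (the 43 intervals reported in the text presumably include a final truncated frame):

```python
def frame_starts(duration_s, window_s, hop_s=None):
    """Start times of analysis windows over a recording.
    hop_s < window_s gives overlapping frames (STFT-style);
    hop_s == window_s (default) gives non-overlapping frames."""
    hop_s = window_s if hop_s is None else hop_s
    starts = []
    t = 0.0
    while t + window_s <= duration_s + 1e-9:  # keep only complete windows
        starts.append(round(t, 6))
        t += hop_s
    return starts

# Non-overlapping 0.3 s windows over one 12.75 s rotation:
print(len(frame_starts(12.75, 0.3)))         # 42
# 50% overlap (0.15 s hop) roughly doubles the angular sampling density:
print(len(frame_starts(12.75, 0.3, 0.15)))   # 84
```

Either option densifies the angular sampling, but overlapping windows do so without shortening the integration time of each frame.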
By moving the mouse over the polar diagram, it is possible to access information about individual signals. For example, we selected intensity class 4 and identified the highest-intensity signal within this class (Figure 15), which corresponds to the sound of an agricultural tractor operating in close proximity to the Sonic Explorer (Figure 16).

3.5. Case Study No. 5: Night#2

The distribution of nocturnal insect signals recorded during the first half of the night is presented. The dominant 1500 Hz signal is produced by the tree cricket (Oecanthus pellucens, Insecta, Orthoptera), while the 5000 Hz signal originates from another, unidentified orthopteran. The tree cricket’s signal is continuous but exhibits peak stridulations, which are captured by the analysis system. The polar representation of all signals with an energy threshold equal to or greater than 0.00176 is shown in Figure 17A. Filtering out lower-intensity signals (e.g., a 0.021 threshold, as in Figure 17B) allows the identification of a few distinct sources, likely corresponding to individual singing crickets (Figure 18).
In this case, a temporal resolution of 0.3 s was used, dividing the segment into 43 frames. Increasing the temporal resolution to 0.1 s divides the same 12.75 s segment into 132 frames. This improvement is illustrated in Figure 19, where the same file from the previous example (Figure 17A) is analyzed at 0.1 s resolution.
The difference in temporal resolution is readily apparent in the polar diagrams of individual segments. For example, comparing segment 46 from file night #2 (Figure 20A) at 0.3 s and 0.1 s resolution (Figure 20B) shows that 132 positions are detected at 0.1 s resolution compared to 43 positions at 0.3 s.

3.6. Case Study No. 6: 31 August 2025

Due to the pause in bird vocal activity following the breeding season, the recordings utilized for these examples contained relatively few daytime sounds, with nighttime sounds—primarily produced by various cricket species—predominating. We utilized eight WAV files, each representing a 3 h recording, which were divided into 16 sub-files of 30 min each using the File_division routine. These sub-files were subsequently processed with the Signal_segmentation routine. The directions obtained from the Sound_direction routine applied to the 24 h recordings were compiled into a single Excel file, after which the Polar_diagram routine was applied.
The resulting data were classified according to 11 intensity classes. The first diagram, shown in Figure 21, displays all recorded signals, while the subsequent diagrams show the signals corresponding to each of the 10 remaining intensity classes. Values in each diagram are normalized. Each intensity class was further divided into two time periods—night and day—to highlight temporal differences in the distribution of signal directions. More refined temporal subdivisions could be applied; however, in this case, they would likely provide limited additional information due to the silent periods of many singing species. The figure shows a clear segregation between day and night. In particular, the nighttime distribution is more concentrated, whereas the daytime distribution spans 0–135° and 280–360° (Figure 21). This approach allows the verification of daily dynamics and can also be used to identify key signals, as illustrated in the previous figures.
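The intensity-class and day/night partitioning described above can be sketched as follows; the function, the equal-width binning after min-max normalization, and the 06:00–20:00 day window are illustrative assumptions, not the paper's actual implementation:

```python
from datetime import datetime

def classify(signals, n_classes=11):
    """Bin signals into equal-width intensity classes after min-max
    normalization, and split each class into day and night periods.
    `signals` is a list of (timestamp, azimuth_deg, energy) tuples.
    The 06:00-20:00 day window is an illustrative assumption."""
    energies = [e for _, _, e in signals]
    lo, hi = min(energies), max(energies)
    span = (hi - lo) or 1.0
    out = {}
    for ts, az, e in signals:
        # Clamp the top energy value into the last class.
        cls = min(int((e - lo) / span * n_classes), n_classes - 1)
        period = "day" if 6 <= ts.hour < 20 else "night"
        out.setdefault((cls, period), []).append(az)
    return out

sig = [(datetime(2025, 8, 30, 23, 10), 300.0, 0.9),
       (datetime(2025, 8, 31, 10, 5), 45.0, 0.2)]
groups = classify(sig)
print(sorted(groups))  # [(0, 'day'), (10, 'night')]
```

Each (class, period) group can then be plotted as a separate polar diagram, mirroring the day/night panels of Figure 21.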

4. Discussion

The proposed system was not designed to provide absolute sound level measurements (dB or dBA) comparable to those obtained with a calibrated sound level meter but rather to detect the directionality and spatial distribution of sound sources. The primary objective is therefore qualitative and spatial rather than quantitative in terms of sound pressure level (SPL). Absolute microphone calibration is desirable but not critical for directional analysis under the sonotope model.
The tests were conducted under variable environmental conditions in order to assess the robustness of the Sonic Explorer in real-world scenarios. The effects of wind and precipitation were mitigated through microphone orientation and physical protection of the devices; however, the methodology does not preclude the possibility of additional measurements under controlled conditions. The aim is to evaluate the practicality of the system in natural and urban environments, where wind and atmospheric phenomena are unavoidable. Sound source typology is determined through spectral analysis and qualitative comparison with reference recordings rather than through absolute sound pressure levels.
The Sonic Explorer is not intended to replace systems such as omnidirectional arrays or ambisonic microphones but to provide a practical, scalable, and low-cost method for the directional estimation and spatial characterization of sonotopes. Validation against professional systems may be considered as a future step, but it is not essential for demonstrating the functionality and applicability of the prototype for qualitative soundscape studies. The initial tests conducted on the system’s ability to detect the direction of a sound source yielded highly positive results. Naturally, the accuracy of directional estimation largely depends on environmental conditions, as well as on the type of sound sources and their distances.
We evaluated the functionality of the hardware, and, although this Sonic Explorer is a prototype, it proved to be very reliable and structurally robust. No interruptions or malfunctions were ever recorded despite the simplicity of the hardware configuration. The Zoom F3 recorder also demonstrated consistent reliability over time.
Thanks to the simplicity of the system, no setup is required other than aligning the microphones with the buzzer switch. Since the system consists of a rotating plate and a separate microphone assembly, it can be mounted on a tripod and aligned using a compass and a bubble level. The orientation of the system is defined both by the direction of the rotating plate—which was always fixed toward the north—and by the presence of a buzzer that emits a 0.1 s beep each time microphone #1 passes by, corresponding to stereo channel 1.
From the first beep to the second, the microphones complete a full 360° rotation, allowing the angular direction of each recorded signal to be determined at any given moment. During the 12.75 s of active rotation, two synchronized microphones operate simultaneously, meaning that every point on the horizon is scanned twice within this interval.
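Assuming a uniform rotation starting from north at the buzzer beep, the beep-to-angle mapping described above can be sketched as follows (the function name and argument defaults are illustrative):

```python
def frame_azimuth(t_s, t_beep_s=0.0, period_s=12.75, north_deg=0.0):
    """Azimuth of microphone #1 at time t_s (seconds into the file),
    given the time of the buzzer beep (mic #1 passing the switch) and
    the rotation period. Microphone #2 points 180 degrees away."""
    frac = ((t_s - t_beep_s) % period_s) / period_s  # fraction of a rotation
    mic1 = (north_deg + frac * 360.0) % 360.0
    mic2 = (mic1 + 180.0) % 360.0
    return mic1, mic2

m1, m2 = frame_azimuth(3.1875)  # one quarter of a rotation after the beep
print(m1, m2)                   # 90.0 270.0
```

Because the two microphones are opposed, every bearing is swept by one capsule or the other twice per rotation, as noted above.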
We selected a rotation period of 12.75 s, corresponding to a linear speed of 0.14 m/s, as a compromise between higher speeds—which could introduce physical disturbances due to air friction on the microphone capsules (typically noticeable above 1–3 m/s)—and lower speeds, which would reduce scanning precision by capturing different sound events from nearly identical microphone positions. Further optimization of the scanning speed remains possible when operating in different types of landscapes.
Dividing the 12.75 s recording into 0.3 s frames proved to be a reasonable setting, although shorter frames of 0.1 s were also tested, providing more detail. A shorter frame increases data resolution but introduces the risk of oversampling when the biological signal lasts longer than 0.1 s, and it also slows down computation. This limitation must therefore be carefully considered. There is, however, no universally optimal frame size when environmental sounds vary greatly in duration—as in the case of long songs, such as those of the blackcap or the red-billed leiothrix, compared to very short alarm calls like those of the European robin (Erithacus rubecula). Naturally, this system has limitations, as sound signals are not point-like in an idealized sense.
The sensitivity of this prototype to adverse weather conditions requires the use of a protective cover, such as a gazebo, which I have subsequently tested successfully, although it is not presented in this initial work. This solution effectively eliminates problems related to the impact of rain and dew on the microphone capsules and the recorder, but it introduces additional noise during rainfall events. Regardless of the waterproofing attempts applied to the apparatus, the friction of raindrops on the hardware—as occurs with all field recorders currently used in ecoacoustic research—can be mitigated but not completely eliminated.
Currently, sound recordings are stored on micro SD cards (32 or 64 GB). In future developments, recordings could be transmitted via cellular networks directly to a remote server, integrating with existing automated systems—such as the one operating through the GSM protocol at http://ecosoundscape.it (accessed on 29 November 2025)—which has been successfully running for several years in the same environmental context.
In this initial evaluation of the complete procedure, we focused on assessing the system’s ability to identify the dominant direction of each signal. We found it effective to divide the collected data into intensity thresholds or classes, allowing the analysis to focus on higher-energy signals. These, in turn, provide calibration for distinguishing between “near-field” and “far-field” spatial information regarding the location of sound sources, whether geophonic, biological, or technophonic.
The Sonic Explorer eliminates the need for complex microphone arrays and operates effectively even with incoherent or noisy sounds. Rotation enhances angular resolution as the differential response continuously varies with direction. Unlike two fixed microphones—which may yield ambiguous results for lateral or rear sound sources, where inter-microphone differences do not clearly indicate direction—the rotating system ensures that each source passes through successive angles of maximum sensitivity, thereby reducing directional uncertainty.
Although each microphone has a highly directional supercardioid pattern, rotating the system enables scanning of the entire surrounding environment, eliminating blind spots and ensuring that all directions are detectable. By recording across multiple angular positions, the resulting signals can be averaged to better distinguish sound sources.
The use of separate routines has simplified the testing phase of the Sonic Explorer; however, a single integrated routine would be more functional and user-friendly for regular operation. In the near future, the Microphone_correction, File_division, Signal_segmentation, Sound_direction, and Polar_diagram routines could be merged into a unified software package, enhanced with graphical features to facilitate easier navigation and interaction with the processed data.
However, the procedure implemented by the Sonic Explorer represents a promising advancement in the field of ecoacoustics, introducing dedicated hardware and software to investigate the spatial dimension of environmental sound, an important aspect of this field [21].
This advancement will enable a deeper understanding of the daily and seasonal dynamics of sonotopes—comprising their three components: geophony, biophony, and anthropophony—and of the influence of the surrounding landscape on the occurrence of these sound types. It will also make it possible to investigate the various spatial configurations of environmental mosaics and their influence on the distribution of soniferous species. In particular, moving the Sonic Explorer across a landscape will enable a more accurate characterization of sonotopes and an assessment of how the spatial arrangement of the landscape shapes soniferous species activity.
Furthermore, the routines developed within this framework facilitate efficient navigation of large acoustic databases, enabling researchers to identify specific acoustic events or anomalies that would otherwise be extremely time-consuming to detect manually. The ability to isolate individual intensity classes and visualize their spatial distribution opens a new avenue of inquiry, complementing traditional ecoacoustic approaches that rely on the interpretation of acoustic indices.
The Sonic Explorer can be used to monitor landscape structure and dynamics because the spatial dimension of the sounds it detects can be correlated with landscape characteristics, combining its advanced capabilities for investigating sound directionality with the predictive power of ecoacoustic indices [57,58]. The Sonic Explorer represents a tool capable of enhancing our understanding of the ecological role of sounds, as it allows the spatial distribution of sounds to be assessed across a wide range of landscape configurations, from intact natural areas to urban environments. Consequently, the spatial characteristics of landscapes can be effectively evaluated by overlaying spatial sound maps with landscape maps, potentially addressing the challenge of determining the appropriate scale at which sound should be correlated with the landscape [59].
The characterization of sound sources in the prototype was designed to be multimodal, combining both spectral analysis and direct listening of the signals in their dominant direction.
  • Spectral analysis, as the primary method, allows the identification and discrimination of sound sources based on frequency content, duration, and temporal modulation, providing an objective and repeatable method for signal classification. This approach is particularly effective in complex environmental contexts, where multiple sources coexist.
  • Selective listening for validation and further interpretation: direct listening to the signals in their dominant direction serves as a qualitative control, useful for confirming source identity or interpreting complex signals that may be difficult to distinguish using spectrograms alone (e.g., similar vocalizations among species or anthropogenic sounds with harmonic components).
  • Integration of the two approaches: the combination of quantitative spectral analysis and qualitative listening enables a more complete and reliable characterization of sound sources, balancing objective precision with human interpretative capability without introducing significant bias.
The procedure employed with the Sonic Explorer differs from those typically used in ecoacoustics, where autonomous recording stations collect large amounts of data independently of the operator. Conversely, deploying a single Sonic Explorer across different landscape configurations requires greater operator involvement in the field. This aspect may, on the one hand, represent a limitation in the use of the Sonic Explorer, but on the other hand, it can lead to a more informed and conscious application of this technique, enabling a more efficient coupling between the acoustic and landscape dimensions. Furthermore, the ability to operate within predefined intensity classes will make it possible to observe variations between distant and nearby sonic fields and, for sounds with a known source intensity, to estimate not only their direction but also their distance, thereby increasing the amount of useful information collected in the field.
It is evident that specific spatial indices for describing the patterns detected by the Sonic Explorer have not yet been established. Consequently, our analyses are currently limited to the descriptive phase of spatial sonic patterns.
However, we are confident that, in the near future, it will be possible to process spatial data in the form of more advanced indices capable of interacting with some of the most widely used metrics in ecoacoustics. For instance, the spatial dimension of sound quantified with the Sonic Explorer could be associated with sonic indices such as the Sonic Heterogeneity Index (SHItf), the Spectral Variability, or the Effective Number of Frequency Bins (ENFB) [42].

5. Conclusions

The proposed system is not designed to provide absolute sound level measurements comparable to those obtained with a calibrated sound level meter but rather to detect the directionality and spatial distribution of sound sources. Direction is determined by the difference in intensity between two microphones: for the same signal, the microphone recording the higher intensity indicates the source direction within a 360° range. Sound source typology is assessed through spectral analysis and qualitative comparison with reference recordings, consistent with the ecoacoustic objectives of the Sonic Explorer.
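A minimal sketch of this dominant-microphone rule, with illustrative names and data layout not drawn from the paper's code, is:

```python
def dominant_direction(frames):
    """For each analysis frame, pick the microphone with the higher
    energy and assign the signal to that microphone's current azimuth.
    `frames` is a list of (az1_deg, e1, az2_deg, e2) tuples, where az1/az2
    are the instantaneous bearings of microphones #1 and #2."""
    return [(az1, e1) if e1 >= e2 else (az2, e2)
            for az1, e1, az2, e2 in frames]

# Mic #2 (at 270 degrees) records the stronger energy, so the signal
# is assigned to its bearing:
print(dominant_direction([(90.0, 0.1, 270.0, 0.8)]))  # [(270.0, 0.8)]
```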
The Sonic Explorer is not intended to replace omnidirectional arrays or ambisonic microphones but to provide a practical, scalable, and low-cost method for directional estimation and spatial characterization of sonotopes. It can be easily built without sophisticated engineering, making it accessible to everyone. Horizontal rotation allows the effective sampling of the dominant component of the sound field, which is often concentrated on the terrestrial plane. Continuous scanning enables rapid detection of the main source directions without covering the entire sphere, which is only necessary for highly specialized applications such as fully immersive acoustic studies or complete ambisonic recordings.
The “two opposing microphones” configuration maximizes the discrimination between front and rear lobes, improving directional accuracy in the horizontal plane. While real supercardioid microphones differ slightly from the theoretical models, the system operates effectively in real-world environments with complex ecological and urban soundscapes.
This system is suitable for landscapes where tree heights do not exceed 25–30 m, but it is not recommended for tropical or boreal old-growth forests, where taller trees and soniferous species active at considerable heights may limit performance.
Overall, the Sonic Explorer represents a methodological advancement in sonotope characterization, offering a practical, cost-effective, and easily replicable approach for the spatial detection of environmental sounds. It provides valuable directional data for sonotopes, ecological mapping, and soundscape studies that would otherwise be difficult to obtain without substantial investment.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app16083619/s1.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this paper are available on demand at the following address: almo.farina@uniurb.it.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Blondel, J.; Ferry, C.; Frochot, B. Point counts with unlimited distance. Stud. Avian Biol. 1981, 6, 414–420. [Google Scholar]
  2. Ralph, C.J.; Sauer, J.R.; Droege, S. (Eds.) Monitoring Bird Populations by Point Counts; General Technical Report PSW-GTR-149; U.S. Department of Agriculture, Forest Service, Pacific Southwest Research Station: Albany, CA, USA, 1995.
  3. Bibby, C.J.; Burgess, N.D.; Hill, D.A.; Mustoe, S.H. Bird Census Techniques, 2nd ed.; Academic Press: Cambridge, MA, USA, 2000. [Google Scholar]
  4. McCallum, D.A. A conceptual guide to detection probability for point counts and other count-based survey methods. In Proceedings of the Bird Conservation Implementation and Integration in the Americas, Proceedings of the Third International Partners in Flight Conference; Ralph, C.J., Rich, T.D., Eds.; U.S. Department of Agriculture, Forest Service, Pacific Southwest Research Station: Albany, CA, USA, 2005; Volume 2, pp. 737–744. [Google Scholar]
  5. Sutherland, W.J. Ecological Census Techniques: A Handbook, 2nd ed.; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  6. Furnas, B.J.; Callas, R.L. Using automated recorders and occupancy modeling to monitor common forest birds across a large geographic region. J. Wildl. Manag. 2015, 79, 325–337. [Google Scholar] [CrossRef]
  7. Sangermano, F. Acoustic diversity of forested landscapes: Relationships to habitat structure and anthropogenic pressure. Landsc. Urban Plan. 2022, 226, 104508. [Google Scholar] [CrossRef]
  8. Schirpke, U.; Ebner, M. Mapping spatio-temporal patterns of soundscapes in a mountain landscape. Appl. Geogr. 2025, 183, 103722. [Google Scholar] [CrossRef]
  9. Morton, E.S. Ecological sources of selection on avian sounds. Am. Nat. 1975, 109, 17–34. [Google Scholar] [CrossRef]
  10. Richards, D.G.; Wiley, R.H. Reverberations and amplitude fluctuations in the propagation of sound in a forest: Implications for animal communication. Am. Nat. 1980, 115, 381–399. [Google Scholar] [CrossRef]
  11. Bullen, R.; Fricke, F. Sound propagation through vegetation. J. Sound Vib. 1982, 80, 11–23. [Google Scholar] [CrossRef]
  12. Attenborough, K. Review of ground effects on outdoor sound propagation from continuous broadband sources. Appl. Acoust. 1988, 24, 289–319. [Google Scholar] [CrossRef]
  13. Aletta, F.; Kang, J.; Axelsson, Ö. Soundscape descriptors and a conceptual framework for developing predictive soundscape models. Landsc. Urban Plan. 2016, 149, 65–74. [Google Scholar] [CrossRef]
  14. Sueur, J.; Farina, A. Ecoacoustics: The ecological investigation and interpretation of environmental sound. Biosemiotics 2015, 8, 493–502. [Google Scholar] [CrossRef]
  15. Farina, A.; Gage, S.H. (Eds.) Ecoacoustics: The Ecological Role of Sounds; John Wiley & Sons: Hoboken, NJ, USA, 2017. [Google Scholar]
  16. Shonfield, J.; Bayne, E.M. Autonomous recording units in avian ecological research: Current use and future applications. Avian Conserv. Ecol. 2017, 12, 14. [Google Scholar] [CrossRef]
  17. Farina, A.; Li, P. Methods in Ecoacoustics; Springer International Publishing: Cham, Switzerland, 2021. [Google Scholar]
  18. Dawson, D.K.; Efford, M.G.; Sutherland, D.R. Bird population density estimated from acoustic signals. J. Appl. Ecol. 2009, 46, 1201–1209. [Google Scholar] [CrossRef]
  19. Mennill, D.J.; Burt, J.M.; Fristrup, K.M.; Vehrencamp, S.L. Accuracy of an acoustic location system for monitoring the position of duetting songbirds in tropical forest. J. Acoust. Soc. Am. 2006, 119, 2832–2839. [Google Scholar] [CrossRef]
  20. Mennill, D.J.; Vehrencamp, S.L. Context-dependent functions of avian duets revealed by microphone-array recordings and multispeaker playback. Curr. Biol. 2008, 18, 1314–1319. [Google Scholar] [CrossRef] [PubMed]
  21. Blumstein, D.T.; Mennill, D.J.; Clemins, P.; Girod, L.; Yao, K.; Patricelli, G.; Deppe, J.L.; Krakauer, A.H.; Clark, C.; Cortopassi, K.A.; et al. Acoustic monitoring in terrestrial environments using microphone arrays: Applications, technological considerations and prospectus. J. Appl. Ecol. 2011, 48, 758–767. [Google Scholar] [CrossRef]
  22. Taylor, P.D.; Crewe, T.L.; Mackenzie, S.A.; Lepage, D.; Aubry, Y.; Crysler, Z.; Finney, G.; Francis, C.M.; Guglielmo, C.G.; Hamilton, D.J.; et al. The Motus Wildlife Tracking System: A collaborative research network to enhance the understanding of wildlife movement. Avian Conserv. Ecol. 2017, 12, 8. [Google Scholar] [CrossRef]
  23. Wang, Y.; Ye, J.; Li, X.; Borchers, D.L. Towards automated animal density estimation with acoustic spatial capture-recapture. Biometrics 2024, 80, ujae081. [Google Scholar] [CrossRef]
  24. Teixeira, D.; Maron, M.; van Rensburg, B.J. Bioacoustic monitoring of animal vocal behavior for conservation. Conserv. Sci. Pract. 2019, 1, e72. [Google Scholar] [CrossRef]
  25. Buxton, R.T.; McKenna, M.F.; Clapp, M.; Meyer, E.; Stabenau, E.; Angeloni, L.M.; Crooks, K.; Wittemyer, G. Efficacy of extracting indices from large-scale acoustic recordings to monitor biodiversity. Conserv. Biol. 2018, 32, 1174–1184. [Google Scholar] [CrossRef]
  26. Crunchant, A.S.; Isaacs, J.T.; Piel, A.K. Localizing wild chimpanzees with passive acoustics. Ecol. Evol. 2022, 12, e8902. [Google Scholar] [CrossRef]
  27. Ortiz-Rodríguez, D.O.; Guisan, A.; Holderegger, R.; van Strien, M.J. Predicting species occurrences with habitat network models. Ecol. Evol. 2019, 9, 10457–10471. [Google Scholar] [CrossRef]
  28. Gallé, R.; Tölgyesi, C.; Császár, P.; Bátori, Z.; Gallé-Szpisjak, N.; Kaur, H.; Maák, I.; Torma, A.; Batáry, P. Landscape structure is a major driver of plant and arthropod diversity in natural European forest fragments. Ecosphere 2022, 13, e3905. [Google Scholar] [CrossRef]
  29. Wallis, D.; Elmeros, M. Tracking European bat species with passive acoustic directional monitoring. arXiv 2020. [Google Scholar] [CrossRef]
  30. Sánchez-Giraldo, C.; Ayram, C.C.; Daza, J.M. Environmental sound as a mirror of landscape ecological integrity in monitoring programs. Perspect. Ecol. Conserv. 2021, 19, 319–328. [Google Scholar] [CrossRef]
  31. Bourquin, A.; Pretzsch, H.; Seidel, D. Forest structural heterogeneity positively affects bird richness and acoustic diversity. Front. Ecol. Evol. 2024, 12, 1387879. [Google Scholar] [CrossRef]
  32. Guagliumi, G.; Canedoli, C.; Potenza, A.; Zaffaroni-Caorsi, V.; Benocci, R.; Padoa-Schioppa, E.; Zambon, G. Unraveling Soundscape Dynamics: The Interaction Between Vegetation Structure and Acoustic Patterns. Sustainability 2025, 17, 4204. [Google Scholar] [CrossRef]
  33. Farina, A. Soundscape Ecology; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  34. Farina, A.; Mullet, T.C. Sonotope patterns within a mountain beech forest of Northern Italy: A methodological and empirical approach. Front. Ecol. Evol. 2024, 12, 1341760. [Google Scholar] [CrossRef]
  35. Hedfors, P. Site Soundscapes—Landscape Architecture in the Light of Sound. Ph.D. Dissertation, Department of Landscape Planning Ultuna Uppsala, Alnarp, Uppsala, 2003. ISSN 1401-6249, ISBN 91-576-6425-0. [Google Scholar]
  36. Farina, A.; Gage, S.H.; Salutari, P. Perspectives on the ecological role of geophysical sounds. Front. Ecol. Evol. 2021, 9, 748398. [Google Scholar] [CrossRef]
  37. Farina, A.; Mullet, T.C.; Bazarbayeva, T.A.; Tazhibayeva, T.; Polyakova, S.; Li, P. Sonotopes reveal dynamic spatio-temporal patterns in a rural landscape of Northern Italy. Front. Ecol. Evol 2023, 11, 1205272. [Google Scholar] [CrossRef]
  38. Mullet, T.C.; Farina, A.; Gage, S.H. The acoustic habitat hypothesis: An ecoacoustics perspective on species habitat selection. Biosemiotics 2017, 10, 319–336. [Google Scholar] [CrossRef]
  39. Hedfors, P.; Berg, P.G. The Sounds of Two Landscape Settings: Auditory concepts for physical planning and design. Landsc. Res. 2003, 28, 245–263. [Google Scholar] [CrossRef]
  40. DeAngelis, D.L.; Yurek, S. Spatially explicit modeling in ecology: A review. Ecosystems 2017, 20, 284–300. [Google Scholar] [CrossRef]
  41. Hedley, R.W.; Huang, Y.; Yao, K. Direction-of-arrival estimation of animal vocalizations for monitoring animal behavior and improving estimates of abundance. Avian Conserv. Ecol. 2017, 12. [Google Scholar] [CrossRef]
  42. Farina, A.; Mullet, T.C. The Sonoscape of a Rural Town in the Mediterranean Region: A Case Study of Fivizzano. Acoustics 2025, 7, 23. [Google Scholar] [CrossRef]
  43. Middlicott, C.J.; Wiggins, B.J.; Wiggins, B. Development of ambisonic microphone design tools—Part 1. In Proceedings of the Audio Engineering Society Conference, New York City, NY, USA, 17–20 October 2018. [Google Scholar]
  44. Ahrens, J. Ambisonic encoding of signals from spherical microphone arrays. arXiv 2022. [Google Scholar] [CrossRef]
  45. Crutchfield, J.P.; Dunn, D.D.; Jurgens, A. Whales in Space: Experiencing aquatic animals in their natural place with the Hydroambiphone. arXiv 2023. [Google Scholar] [CrossRef]
  46. Mennill, D.J.; Battiston, M.; Wilson, D.R.; Foote, J.R.; Doucet, S.M. Field test of an affordable, portable, wireless microphone array for spatial monitoring of animal ecology and behaviour. Methods Ecol. Evol. 2012, 3, 704–712. [Google Scholar] [CrossRef]
  47. Olivieri, M.; Bastine, A.; Pezzoli, M.; Antonacci, F.; Abhayapala, T.; Sarti, A. Acoustic imaging with circular microphone array: A new approach for sound field analysis. IEEE/ACM Trans. Audio Speech Lang. Process. 2024, 32, 1750–1761. [Google Scholar] [CrossRef]
  48. Zhao, S.; Ma, F. A circular microphone array with virtual microphones based on acoustics-informed neural networks. J. Acoust. Soc. Am. 2024, 156, 405–415. [Google Scholar] [CrossRef]
  49. Farina, A.; Mullet, T.C. Composition and dynamics of the sonosphere along a soil-surface ecotone at an agricultural site in Northern Italy: A Preliminary Approach. Geosciences 2025, 15, 34. [Google Scholar] [CrossRef]
  50. Murillo, D.; Fazi, F.; Shin, M. Evaluation of Ambisonics Decoding Methods with Experimental Measurements; European Acoustics Association: Madrid, Spain, 2014. [Google Scholar]
  51. Zotter, F.; Frank, M. Ambisonics: A Practical 3D Audio Theory for Recording, Studio Production, Sound Reinforcement, and Virtual Reality; Springer Nature: Berlin/Heidelberg, Germany, 2019; p. 210. [Google Scholar]
  52. Backman, J. Parabolic reflector microphones. In Proceedings of the AES 112th Convention; AES: New York, NY, USA, 2002; p. 5499. [Google Scholar]
  53. Maher, R.C.; Chen, Z. Parabolic Dish Microphone System; Montana State University: Bozeman, MT, USA, 2005. [Google Scholar]
  54. Young, R.W. Reflector microphones for field recording of natural sounds. J. Acoust. Soc. Am. 2000, 108, 258. [Google Scholar] [CrossRef]
  55. Farina, A. Rural sanctuary: An ecosemiotic agency to preserve human cultural heritage and biodiversity. Biosemiotics 2018, 11, 139–158. [Google Scholar] [CrossRef]
  56. Farina, A.; James, P. The landscape of fear as a safety eco-field: Experimental evidence. Biosemiotics 2023, 16, 61–84. [Google Scholar] [CrossRef] [PubMed]
  57. Rappaport, D.I.; Royle, J.A.; Morton, D.C. Acoustic space occupancy: Combining ecoacoustics and lidar to model biodiversity variation and detection bias across heterogeneous landscapes. Ecol. Indic. 2020, 113, 106172. [Google Scholar] [CrossRef]
  58. Fuller, S.; Axel, A.C.; Tucker, D.; Gage, S.H. Connecting soundscape to landscape: Which acoustic index best describes landscape configuration? Ecol. Indic. 2015, 58, 207–215. [Google Scholar] [CrossRef]
  59. Gitau, C.; Kettel, E.; Abrahams, C.; Webala, P.W.; Uzal, A. The role of ecoacoustics in monitoring ecosystem degradation and restoration. Restor. Ecol. 2025, 33, e70168. [Google Scholar] [CrossRef]
Figure 1. Representation of the Sonic Explorer operating within a rural landscape, identifying distinct sound sources (e.g., birds, amphibians, and insects) through ad hoc routines and projecting them onto a polar diagram. Different landscapes are expected to produce distinct spatial arrangements, reflecting the availability of the resources necessary for habitat establishment, persistence, and the spatial organization of populations and communities of soniferous species. Red dots represent the original sonic signals, while blue dots indicate their projection onto the polar diagram.
Figure 2. A supercardioid microphone consists of a front capsule and side openings that allow sound from the sides and rear to reach the diaphragm with a slight delay. This delay causes certain sound waves to either cancel out (destructive interference) or reinforce each other (constructive interference), depending on the direction from which they arrive.
Figure 3. Frequency response and polar diagram of the X3-II supercardioid condenser microphone. Sound arriving from the sides is partially canceled, while sound from the rear reaches the diaphragm with a phase shift, resulting in reduced but not completely eliminated sensitivity. Consequently, supercardioid microphones exhibit low rear sensitivity (MOVO, X3-II Supercardioid Condenser Microphone Datasheet, 2023).
Figure 4. Prototype of the Sonic Explorer: the components include a rotating plate, a buzzer, a Zoom F3 recorder, and two MOVO supercardioid microphones (San Francisco, CA, and Edina, MN, USA). Wood was extensively used for support bases to minimize vibrations. The prototype lacks protection against rain and nighttime dew and therefore can only be used in dry weather. During data collection, setting up a lightweight garden gazebo proved to be an effective temporary solution for protecting the prototype while conducting the experiments described in this study.
Figure 5. An example of signal coverage using a single microphone versus two opposing microphones.
Figure 6. Flowchart depicting the stages of audio data processing, from microphone correction to polar diagram visualization.
Figure 7. Zenithal view of the study area with the distribution of the recording stations indicated by numbered red dots. The Rosaro River borders the western side of the study area in Fivizzano municipality (Northern Italy), (44°14′14.84″ N, 10°07′08.94″ E, 250 m a.s.l.). This image has been adapted from [49], Figure 1.
Figure 8. Spectrogram obtained from segment 23 of the file "part#2", recorded with the Sonic Explorer. Signals correspond to the song and call of the red-billed leiothrix (Leiothrix lutea, Aves, Passeriformes). The colored bar shows the dominant microphone over the 12.75 s interval (0.3 s frame resolution). Spatial positions are given by each microphone's angular reference; microphone #2 is oriented 180° opposite microphone #1.
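The dominant-microphone bar described in the Figure 8 caption can be derived by comparing the energy of the two channels frame by frame. The following is a minimal sketch, assuming a simple RMS comparison over 0.3 s frames; the paper's actual routine is not reproduced here, and `dominant_microphone` and its arguments are illustrative names:

```python
import numpy as np

def dominant_microphone(ch1, ch2, sr, frame_s=0.3):
    """Return the dominant microphone (1 or 2) for each analysis frame.

    ch1, ch2 : mono signals from the two opposed supercardioid microphones.
    sr       : sampling rate in Hz.
    frame_s  : frame length in seconds (0.3 s, as in Figure 8).
    """
    n = int(sr * frame_s)                      # samples per frame
    n_frames = min(len(ch1), len(ch2)) // n
    dom = np.empty(n_frames, dtype=int)
    for i in range(n_frames):
        e1 = np.sqrt(np.mean(ch1[i * n:(i + 1) * n] ** 2))  # RMS, mic #1
        e2 = np.sqrt(np.mean(ch2[i * n:(i + 1) * n] ** 2))  # RMS, mic #2
        dom[i] = 1 if e1 >= e2 else 2
    return dom
```

For example, two consecutive frames in which the signal moves from microphone #1 to microphone #2 yield the sequence [1, 2].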
Figure 9. Polar diagram of segment 23. Dots indicate signals recorded by microphone #1, while crosses represent those captured by microphone #2. The color of each symbol corresponds to the dominant frequency band, as shown in the legend. The distance from the center represents the normalized energy level of the signals, inverted so that the maximum value lies at the center and the minimum at the periphery of the diagram. The arrow indicates the mean direction (in degrees) of the spatial distribution.
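The inverted radial scale used in the polar diagrams (strongest signals at the center, weakest at the periphery) can be sketched as a min-max normalization followed by inversion. The exact scaling applied by the paper's tool is an assumption here:

```python
import numpy as np

def inverted_radius(energies):
    """Map signal energies to polar radii following the Figure 9 convention:
    maximum energy plots at the center (r = 0), minimum at the periphery
    (r = 1). Illustrative min-max scaling, not the tool's exact formula."""
    e = np.asarray(energies, dtype=float)
    e_norm = (e - e.min()) / (e.max() - e.min())  # normalize to [0, 1]
    return 1.0 - e_norm                           # invert the scale
```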
Figure 10. Spectrogram of segment 25 from file part#2, recorded with the Sonic Explorer. The signals correspond to the call of a carrion crow (Corvus corone cornix, Aves, Passeriformes) at 180° and the song of a red-billed leiothrix (Leiothrix lutea, Aves, Passeriformes) at 270°. The colored bar indicates the dominant microphone over the 12.765 s interval (0.3 s frame resolution). Spatial positions are referenced to each microphone's angular orientation; microphone #2 is positioned 180° opposite microphone #1.
Figure 11. Polar diagram of segment 25 from file part #2. Dots represent signals recorded by microphone #1, while crosses indicate those captured by microphone #2. The color of each symbol corresponds to the dominant frequency band, as shown in the legend. The distance from the center represents the normalized energy level of the signals, inverted so that the maximum value lies at the center and the minimum at the periphery of the diagram. The arrow indicates the mean direction and its length the average signal intensity.
Figure 12. Spectrogram from segment 58 of file part#2, recorded with the Sonic Explorer. The dominant signal is attributed to microphone #1 for approximately half of the 0.3 s frames, while microphone #2 predominates during the second half. The direction of the song was determined through intensity filtering: only the maximum intensity recorded by each microphone was used to identify the position of a singing blackcap (Sylvia atricapilla), under the assumption that the bird vocalized from a fixed location.
Figure 13. Polar diagram of segment 58 from file part#2. Dominant signals are shown as dots (microphone #1) and crosses (microphone #2). Colors denote the dominant frequency band, as specified in the legend. The radial distance represents the normalized energy level, inverted such that maximum values are at the center and minimum values at the periphery. The arrow indicates the mean direction, and its length the average signal intensity.
Figure 14. Example of a summary polar diagram for 30 min of recording (file part #2) with no intensity class selected (A), and the same diagram after selecting intensity class 2 from the five previously defined classes (B).
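The intensity-class selection illustrated in Figure 14 can be sketched as binning the recorded intensities into five classes. Equal-width bins over the observed range are an assumption here; the tool's actual class boundaries may be defined differently:

```python
import numpy as np

def intensity_classes(intensities, n_classes=5):
    """Assign each intensity to one of n_classes equal-width classes.

    Returns (class indices 0..n_classes-1, class edges). Equal-width
    binning is an illustrative assumption, not the paper's definition.
    """
    x = np.asarray(intensities, dtype=float)
    edges = np.linspace(x.min(), x.max(), n_classes + 1)
    # use interior edges only, so the maximum falls in the last class
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_classes - 1)
    return idx, edges
```

Filtering a diagram to a single class then reduces to keeping only the signals whose index matches the selected class.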
Figure 15. Polar diagram of file part #2, obtained by selecting intensity class 4, which spans a range from 0.01086 to 0.01488. The information window shown in the diagram appears when the mouse hovers over a selected colored dot. The strongest signal is located in segment #108 at frame (event time) 29.
Figure 16. Spectrogram obtained from segment 108 (see Figure 15). The signal was produced by a Carraro 3700 Tigrone™ agricultural tractor passing a few meters from the Sonic Explorer.
Figure 17. (A) Polar diagram obtained from a 30 min recording (Night #2). The arrow indicating the mean direction points toward the small vineyard, the source of the singing tree cricket (Oecanthus pellucens, Insecta, Orthoptera). (B) Applying a threshold of 0.021 filters out most weak signals, highlighting the stronger ones, which, according to the tooltip, correspond to segment 120.
Figure 18. Spectrogram obtained from the segment 120 (see Figure 17). The continuous signal around 1500 Hz is attributed to the tree cricket (Oecanthus pellucens, Insecta, Orthoptera), while the intermittent signal at 5000 Hz corresponds to another, unidentified orthopteran species.
Figure 19. Distribution of signals captured by the Sonic Explorer during a 30 min recording (night #2) in the first part of the night, with data sampled every 0.1 s. The alignment of the mean direction arrow with the vineyard—from which most tree cricket (Oecanthus pellucens, Insecta, Orthoptera) signals originated—is clearly evident, as confirmed through direct observation.
Figure 20. Comparison of segment 46 from night #2, scanned at 0.3 s (A) and 0.1 s (B) temporal resolution. The greater number of signal points in polar diagram B, as well as the slightly different mean direction, are clearly noticeable compared with polar diagram A.
Figure 21. Polar diagrams of signal intensity levels obtained from 24 h of continuous recording. The vectors indicate the mean direction and signal intensity, while colors denote light conditions: blue for the dark period and red for the light period.
Table 1. Technical specifications of the Movo™ X3-II supercardioid condenser microphone.
Parameter | Value
Acoustic principle | Condenser
Polar pattern | Supercardioid
Frequency response | 60 Hz–20 kHz
Sensitivity | −36 dB ± 3 dB (0 dB = 1 V Pa⁻¹ @ 1 kHz)
Signal-to-noise ratio | 80 dB
Output impedance | 200 Ω
Output | 3-pin XLR
High-pass filter | 150 Hz
Power requirement | 24–48 V phantom power or 1.5 V AA battery
Dimensions | Ø22 × 280 mm
Weight | ≈129 g
Table 2. Mean direction, mean resultant length, circular variance, and circular standard deviation for segment 23 of file part#2.
Metric | Value
Mean direction (deg) | 9.93
Mean resultant length (R) | 0.39
Circular variance | 0.60
Circular standard deviation | 1.36
Table 3. Mean direction, mean resultant length, circular variance, and circular standard deviation for segment 25 of file part#2.
Metric | Value
Mean direction (deg) | 89.15
Mean resultant length (R) | 0.52
Circular variance | 0.47
Circular standard deviation | 1.14
Table 4. Mean direction, mean resultant length, circular variance, and circular standard deviation for segment 58 of file part#2.
Metric | Value
Mean direction (deg) | 96.82
Mean resultant length (R) | 0.59
Circular variance | 0.40
Circular standard deviation | 1.02
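The circular statistics reported in Tables 2–4 follow standard definitions: the mean resultant length R of the unit direction vectors, circular variance 1 − R, and circular standard deviation √(−2 ln R), the latter expressed in radians (for Table 2, 1 − 0.39 ≈ 0.61 and √(−2 ln 0.39) ≈ 1.37, consistent with the tabulated 0.60 and 1.36 given rounding of intermediate values). A minimal Python sketch:

```python
import numpy as np

def circular_stats(angles_deg):
    """Mean direction (deg), mean resultant length R, circular variance
    (1 - R), and circular standard deviation (rad) of a set of directions."""
    a = np.radians(np.asarray(angles_deg, dtype=float))
    C, S = np.mean(np.cos(a)), np.mean(np.sin(a))  # mean direction cosines
    R = np.hypot(C, S)                             # mean resultant length
    mean_dir = np.degrees(np.arctan2(S, C)) % 360.0
    return mean_dir, R, 1.0 - R, np.sqrt(-2.0 * np.log(R))
```

For instance, two signals arriving from 0° and 90° give a mean direction of 45° with R = 1/√2 ≈ 0.71.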
Share and Cite

MDPI and ACS Style

Farina, A. The Sonic Explorer: Assessing Angular Structure and Spatial Organization in Sonotopes. Appl. Sci. 2026, 16, 3619. https://doi.org/10.3390/app16083619
