Search Results (32)

Search Parameters:
Keywords = pitch discrimination

25 pages, 2054 KiB  
Article
Perception and Interpretation of Contrastive Pitch Accent During Spoken Language Processing in Autistic Children
by Pumpki Lei Su, Duane G. Watson, Stephen Camarata and James Bodfish
Languages 2025, 10(7), 161; https://doi.org/10.3390/languages10070161 - 28 Jun 2025
Viewed by 466
Abstract
Although prosodic differences in autistic individuals have been widely documented, little is known about their ability to perceive and interpret specific prosodic features, such as contrastive pitch accent—a prosodic signal that places emphasis and helps listeners distinguish between competing referents in discourse. This study addresses that gap by investigating (1) the extent to which autistic children can perceive contrastive pitch accent (i.e., discriminate contrastive pitch accent differences in speech); (2) the extent to which they can interpret contrastive pitch accent (i.e., use prosodic cues to guide real-time language comprehension); and (3) whether their ability to interpret contrastive pitch accent is associated with broader language and social communication skills, including receptive prosody, pragmatic language, social communication, and autism severity. Twenty-four autistic children and 24 neurotypical children aged 8 to 14 completed an AX same–different task and a visual-world paradigm task to assess their ability to perceive and interpret contrastive pitch accent. Autistic children demonstrated the ability to perceive and interpret contrastive pitch accent, as evidenced by discrimination ability comparable to neurotypical peers on the AX task and real-time revision of visual attention based on prosodic cues in the visual-world paradigm. However, autistic children showed significantly slower reaction times during the AX task, and a subgroup of autistic children with language impairment showed significantly slower processing of contrastive pitch accent during the visual-world paradigm task. Additionally, the speed of contrastive pitch accent processing was significantly associated with pragmatic language skills and autism symptom severity in autistic children. Overall, these findings suggest that while autistic children as a group are able to discriminate prosodic forms and interpret the pragmatic function of contrastive pitch accent during spoken language comprehension, differences in prosody processing in autistic children may be reflected not in accuracy but in processing speed and in specific subgroups defined by language ability. Full article
(This article belongs to the Special Issue Advances in the Acquisition of Prosody)

17 pages, 666 KiB  
Article
English-Learning Infants’ Developing Sensitivity to Intonation Contours
by Megha Sundara and Sónia Frota
Languages 2025, 10(7), 148; https://doi.org/10.3390/languages10070148 - 20 Jun 2025
Viewed by 349
Abstract
In four experiments, we investigated when and how English-learning infants perceive intonation contours that signal prosodic units. Using visual habituation, we probed infants’ ability to discriminate disyllabic sequences with a fall versus a rise in pitch on the final syllable, a salient cue used to distinguish statements from questions. First, we showed that at 8 months, English-learning infants can distinguish statement falls from question rises, as has been reported previously for their European Portuguese-learning peers, who have extensive experience with minimal pairs that differ just in pitch rises and falls. Next, we conducted three experiments involving 4-month-olds to determine the developmental roots of how English-learning infants begin to tune into these intonation contours. In Experiment 2, we showed that unlike 8-month-olds, monolingual English-learning 4-month-olds are unable to distinguish statement and question intonation when they are presented with segmentally varied disyllabic sequences. Monolingual English-learning 4-month-olds only partially succeeded even when tested without segmental variability and with a sensitive testing procedure (Experiment 3). When tested with stimuli that had also been resynthesized to remove correlated duration cues, 4-month-olds again demonstrated only partial success (Experiment 4). We discuss our results in the context of extant developmental research on how infants tune into linguistically relevant pitch cues in their first year of life. Full article
(This article belongs to the Special Issue Advances in the Acquisition of Prosody)

20 pages, 618 KiB  
Systematic Review
Music and Language in Williams Syndrome: An Integrative and Systematic Mini-Review
by Jérémy Villatte, Agnès Lacroix, Laure Ibernon, Christelle Declercq, Amandine Hippolyte, Guillaume Vivier and Nathalie Marec-Breton
Behav. Sci. 2025, 15(5), 595; https://doi.org/10.3390/bs15050595 - 29 Apr 2025
Viewed by 785
Abstract
Individuals with Williams syndrome (WS) are known for their interest in language and music. As producing and comprehending music and language usually involve a set of similar or comparable cognitive abilities, the music–language relationship might be of interest to better understand WS. We identified, analyzed, and synthesized research articles on music and language among individuals with WS. Three different databases were searched (SCOPUS, PubMed, PsycInfo). Eight research articles were identified after screening, based on title, abstract and full text. In this integrative–systematic review, we assess methodologies, report findings and examine the current understanding of several subdimensions of the relationship between music and language. The findings suggest that basic musical abilities such as tone, rhythm and pitch discrimination are correlated with several verbal skills, particularly the understanding of prosody. Musical practice seems to benefit individuals with WS, in particular for prosody understanding and verbal memory. A correlation was also observed between emotional responsiveness to music and verbal ability. Further studies are needed to better characterize the relationship between music and language in WS. The clinical use of musical practice could be of interest in improving prosodic skills and verbal memory, which deserves extended experimental investigation. Full article
(This article belongs to the Section Developmental Psychology)

25 pages, 9418 KiB  
Article
Angle-Controllable SAR Image Generation for Target Recognition with Few Samples
by Xilin Wang, Bingwei Hui, Wei Wang, Pengcheng Guo, Lei Ding and Huangxing Lin
Remote Sens. 2025, 17(7), 1206; https://doi.org/10.3390/rs17071206 - 28 Mar 2025
Viewed by 452
Abstract
The availability of high-quality and ample synthetic aperture radar (SAR) image datasets is crucial for understanding and recognizing target characteristics. However, in practical applications, the limited availability of SAR target images significantly impedes the advancement of SAR interpretation methodologies. In this study, we introduce a Generative Adversarial Network (GAN)-based approach designed to manipulate the target azimuth angle with few samples, thereby generating high-quality target images with adjustable angle ranges. The proposed method consists of three modules: a generative fusion local module conditioned on image features, a controllable angle generation module based on sparse representation, and an angle discrimination module based on scattering point extraction. Consequently, the generative modules fuse semantically aligned features from different images to produce diverse SAR samples, whereas the angle synthesis module constructs target images within a specified angle range. The discriminative module comprises a similarity discriminator to distinguish between authentic and synthetic images to ensure image quality, and an angle discriminator to verify that generated images conform to the specified range of the azimuth angle. Combining these modules, the proposed methodology is capable of generating azimuth-angle-controllable target images using only a limited number of support samples. The effectiveness of the proposed method is verified not only through various quality metrics, but also through the enhanced performance of target recognition methods. In our experiments, we achieved SAR image generation within a given angle range on two datasets. In terms of generated image quality, our method has significant advantages over other methods in metrics such as FID and SSIM: the FID was reduced by up to 0.37, and the SSIM was increased by up to 0.46. In the target recognition experiments, after augmenting the data, accuracy improved by 6.16% and 3.29% under two different pitch angles, respectively. This demonstrates the value of our method for the SAR image generation task. Full article

15 pages, 4108 KiB  
Article
Vocal Emotion Perception and Musicality—Insights from EEG Decoding
by Johannes M. Lehnen, Stefan R. Schweinberger and Christine Nussbaum
Sensors 2025, 25(6), 1669; https://doi.org/10.3390/s25061669 - 8 Mar 2025
Viewed by 1006
Abstract
Musicians have an advantage in recognizing vocal emotions compared to non-musicians, a performance advantage often attributed to enhanced early auditory sensitivity to pitch. Yet a previous ERP study only detected group differences from 500 ms onward, suggesting that conventional ERP analyses might not be sensitive enough to detect early neural effects. To address this, we re-analyzed EEG data from 38 musicians and 39 non-musicians engaged in a vocal emotion perception task. Stimuli were generated using parameter-specific voice morphing to preserve emotional cues in either the pitch contour (F0) or timbre. By employing a neural decoding framework with a Linear Discriminant Analysis classifier, we tracked the evolution of emotion representations over time in the EEG signal. Converging with the previous ERP study, our findings reveal that musicians—but not non-musicians—exhibited significant emotion decoding between 500 and 900 ms after stimulus onset, a pattern observed for F0-Morphs only. These results suggest that musicians’ superior vocal emotion recognition arises from more effective integration of pitch information during later processing stages rather than from enhanced early sensory encoding. Our study also demonstrates the potential of neural decoding approaches using EEG brain activity as a biological sensor for unraveling the temporal dynamics of voice perception. Full article
(This article belongs to the Special Issue Sensing Technologies in Neuroscience and Brain Research)
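The time-resolved decoding framework described in the abstract, training a classifier at each timepoint of the EEG epoch and tracking its cross-validated accuracy, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline; the trial counts, channel counts, and effect window are invented.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for preprocessed EEG epochs:
# 80 trials x 16 channels x 50 timepoints, with a binary emotion label.
n_trials, n_channels, n_times = 80, 16, 50
labels = rng.integers(0, 2, n_trials)
eeg = rng.normal(size=(n_trials, n_channels, n_times))
# Inject a class-dependent signal in a "late" window (timepoints 25-40),
# mimicking emotion information emerging late in the epoch.
eeg[labels == 1, :4, 25:40] += 0.8

# Time-resolved decoding: one cross-validated LDA classifier per timepoint.
accuracy = np.empty(n_times)
for t in range(n_times):
    X = eeg[:, :, t]                      # trials x channels at time t
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, X, labels, cv=5).mean()

early = accuracy[:20].mean()   # before the injected effect: ~chance
late = accuracy[25:40].mean()  # inside the injected effect: above chance
print(f"early window acc ~{early:.2f}, late window acc ~{late:.2f}")
```

Significance at each timepoint would then be assessed against chance (e.g., with permutation tests), which is how a late-only decoding window like the 500-900 ms effect would be identified.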

17 pages, 1898 KiB  
Article
Musical Pitch Perception and Categorization in Listeners with No Musical Training Experience: Insights from Mandarin-Speaking Non-Musicians
by Jie Liang, Fen Zhang, Wenshu Liu, Zilong Li, Keke Yu, Yi Ding and Ruiming Wang
Behav. Sci. 2025, 15(1), 30; https://doi.org/10.3390/bs15010030 - 31 Dec 2024
Cited by 1 | Viewed by 1208
Abstract
Pitch is a fundamental element in music. While most previous studies on musical pitch have focused on musicians, our understanding of musical pitch perception in non-musicians is still limited. This study aimed to explore how Mandarin-speaking listeners who had not received musical training perceive and categorize musical pitch. Two experiments were conducted. In Experiment 1, participants were asked to discriminate musical tone pairs with different intervals. The results showed that the closer together the tones were, the more difficult they were to distinguish. Among adjacent note pairs at a major 2nd pitch distance, the A4–B4 pair was perceived as the easiest to differentiate, while the C4–D4 pair was found to be the most difficult. In Experiment 2, participants completed tone discrimination and identification tasks with the C4–D4 and A4–B4 musical tone continua as stimuli. The results revealed that the C4–D4 tone continuum elicited stronger categorical perception than the A4–B4 continuum, even though the C4–D4 pair had been found to be more difficult to distinguish in Experiment 1, suggesting a complex interaction between pitch perception and categorization processing. Together, these two experiments reveal the cognitive mechanism underlying musical pitch perception in ordinary populations and provide insights for future musical pitch training strategies. Full article
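The strength of categorical perception in an identification task is commonly read off the slope of the identification curve along the tone continuum: a steeper boundary means a sharper category. A rough sketch with invented response proportions (not the study's data), estimating the logistic slope by a linear fit in logit space:

```python
import numpy as np

# Hypothetical identification data on two 7-step tone continua:
# proportion of "higher-note" responses at each continuum step.
steps = np.arange(1, 8, dtype=float)
p_c4d4 = np.array([0.02, 0.05, 0.10, 0.50, 0.90, 0.96, 0.99])  # steep boundary
p_a4b4 = np.array([0.10, 0.20, 0.35, 0.50, 0.68, 0.80, 0.90])  # shallow boundary

def logistic_slope(p):
    """Estimate the logistic slope via a linear fit in logit space.
    A steeper slope indicates a sharper category boundary."""
    logit = np.log(p / (1.0 - p))
    slope, _ = np.polyfit(steps, logit, 1)
    return slope

k_c4d4 = logistic_slope(p_c4d4)
k_a4b4 = logistic_slope(p_a4b4)
print(f"C4-D4 slope {k_c4d4:.2f} vs A4-B4 slope {k_a4b4:.2f}")
```

With these made-up numbers the C4–D4 continuum yields the steeper slope, i.e., the stronger categorical pattern, mirroring the direction of the reported result.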

34 pages, 890 KiB  
Review
Wind Turbine Static Errors Related to Yaw, Pitch or Anemometer Apparatus: Guidelines for the Diagnosis and Related Performance Assessment
by Davide Astolfi, Silvia Iuliano, Antony Vasile, Marco Pasetti, Salvatore Dello Iacono and Alfredo Vaccaro
Energies 2024, 17(24), 6381; https://doi.org/10.3390/en17246381 - 18 Dec 2024
Cited by 1 | Viewed by 1315
Abstract
The optimization of the efficiency of wind turbine systems is a fundamental task, from the perspective of a growing share of electricity produced from wind. Despite this, and given the complex multivariate dependence of the power of wind turbines on environmental conditions and working parameters, the literature lacks studies specifically devoted to a careful characterization of wind farm performance. In particular, the literature overlooks the fact that several types of faults have similar manifestations and can be defined as static errors. This kind of error manifests as a static bias occurring from a certain time onward, which can affect the anemometer, the absolute or relative pitch of the blades, or the yaw system. Static or systematic errors typically do not cause the functional failure of the wind turbine system, but they deserve attention because they cause power production losses throughout the operation time. Based on this, the first objective of the present study is a critical review of recent papers devoted to three types of wind turbine static errors: anemometer bias, static yaw error, and pitch misalignment. As a result, a comprehensive viewpoint, enhancing the state of the art in the literature, is developed in this study. Given that the use of data collected by Supervisory Control And Data Acquisition (SCADA) systems has up to now prevailed for the diagnosis of systematic errors, compared to the use of further specific sensors, particular attention is devoted to the discussion of the phenomena observable through SCADA data analysis. Based on this, a rigorous workflow is formulated for detecting static errors and discriminating among them through SCADA data analysis. Methods based on additional information sources (such as further sensors or meteorological data) are also discussed. An important aspect of this study is that, for each considered type of systematic error, some previously unpublished results based on real-world SCADA data are reported in order to corroborate the proposed framework. In summary, this is the first paper to consider and discuss several types of wind turbine static errors from a unified viewpoint, to correctly interpret apparently controversial results collected in the literature, and to provide guidelines for the diagnosis of this kind of error and for the quantification of the associated performance drop. Full article
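A static error of the kind described, a bias that appears at some time and then persists, is a mean shift in an otherwise zero-mean residual signal, so a one-sided CUSUM detector illustrates the core of such a diagnosis workflow. This is a generic sketch on synthetic data, not the paper's method; the bias size, drift, and threshold values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic anemometer residual (measured minus reference wind speed):
# zero-mean noise, with a +0.5 m/s static bias appearing at sample 600.
n, onset, bias = 1000, 600, 0.5
residual = rng.normal(0.0, 0.3, n)
residual[onset:] += bias

def cusum_onset(x, drift=0.1, threshold=5.0):
    """One-sided CUSUM: flag the first sample where the accumulated
    positive deviation (minus an allowed drift) exceeds a threshold."""
    s = 0.0
    for i, v in enumerate(x):
        s = max(0.0, s + v - drift)
        if s > threshold:
            return i
    return None

detected = cusum_onset(residual)
print(f"bias onset detected near sample {detected}")
```

Which channel the shift appears in (anemometer residual, yaw misalignment estimate, or pitch signal) is then what discriminates among the three error types.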

12 pages, 7286 KiB  
Article
Online Quality Control of Powder Bed Fusion with High-Resolution Eddy Current Testing Inductive Sensor Arrays
by Pedro Faria, Rodolfo L. Batalha, André Barrancos and Luís S. Rosado
Sensors 2024, 24(21), 6827; https://doi.org/10.3390/s24216827 - 24 Oct 2024
Cited by 4 | Viewed by 1333
Abstract
This paper presents the development of a novel eddy current array (ECA) system for real-time, layer-by-layer quality control in powder bed fusion (PBF) additive manufacturing. The system is integrated into the recoater of a PBF machine to provide spatially resolved electrical conductivity imaging of the manufactured part. The system features an array of 40 inductive sensors spaced at 1 mm pitch and is capable of performing a full array readout every 0.192 mm at 100 mm/s recoater speed. Array scalability was achieved through the careful selection of the electromagnetic configuration, miniaturized and seamlessly integrated sensor elements, and the use of advanced mixed signal processing techniques. Experimental validation was performed on stainless steel 316L parts, successfully detecting metallic structures and confirming system performance in both laboratory and real-time PBF environments. The prototype achieved a signal-to-noise ratio (SNR) of 26.5 dB, discriminating metal from air and thus demonstrating its potential for improving PBF part design, process optimization, and defect detection. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
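A metal-versus-air SNR figure like the one reported follows the usual definition: the mean signal shift relative to the noise standard deviation, expressed in decibels. A toy calculation with invented readings (the numbers are made up, not the prototype's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sensor readings (arbitrary units): the conductive part
# shifts the coil response well clear of the empty-powder baseline.
air = rng.normal(0.0, 1.0, 5000)      # noise-only baseline
metal = rng.normal(21.0, 1.0, 5000)   # response over stainless steel

# SNR in dB: mean signal shift over noise standard deviation.
snr_db = 20.0 * np.log10((metal.mean() - air.mean()) / air.std())
print(f"SNR = {snr_db:.1f} dB")
```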

13 pages, 1162 KiB  
Article
Unveiling the Acoustic Signature of Collichthys lucidus: Insights from X-ray Morphometry-Informed Acoustic Modeling and Empirical Analysis
by Shuo Lyu, Chuhan Qiu, Minghua Xue, Zhenhong Zhu, Yue Qiu and Jianfeng Tong
Fishes 2024, 9(8), 304; https://doi.org/10.3390/fishes9080304 - 2 Aug 2024
Cited by 1 | Viewed by 1183
Abstract
Collichthys lucidus is an important small-scale economic fish species in the Yangtze River Estuary. To improve the accuracy of acoustic stock assessments for C. lucidus, it is necessary to accurately measure its target strength (TS). This study obtained precise morphological parameters of C. lucidus through X-ray scanning and established a Kirchhoff ray mode (KRM) model to simulate the changes in TS of the fish body and swimbladder at different acoustic frequencies and pitch angles. At the same time, TS was measured using the tethered method to analyze and compare the broadband scattering characteristics obtained from both methods. An empirical formula relating TS to body length at two conventional frequencies was established for C. lucidus using the least squares method. The results show that TS changes in C. lucidus, with body lengths ranging from 10.91 to 16.61 cm, are significantly influenced by the pitch angle at 70 kHz and 200 kHz, and that the fluctuation of TS for both the fish body and the swimbladder increases with frequency. The broadband TS values estimated by the KRM model and measured by the tethered method fluctuate within the ranges of −45 dB to −55 dB and −40 dB to −55 dB, respectively. The TS of C. lucidus tends to increase with swimbladder length. When the probability density function of the pitch angle is N(−5°, 15°), the b20 values obtained by the KRM model and the tethered method at 70 kHz are −71.94 dB and −69.21 dB, respectively, while at 200 kHz they are −72.58 dB and −70.55 dB. This study provides a scientific basis for future acoustic target discrimination and stock assessment of C. lucidus in the Yangtze River Estuary. Full article
(This article belongs to the Special Issue Technology for Fish and Fishery Monitoring)
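The b20 values quoted above come from the standard one-parameter fisheries-acoustics model TS = 20·log10(L) + b20; with the slope fixed at 20, the least-squares estimate of b20 reduces to averaging the residuals. A sketch with made-up length and TS values (illustrative only, not the study's measurements):

```python
import numpy as np

# Hypothetical TS measurements (dB) for fish of known body length (cm).
length_cm = np.array([10.91, 12.0, 13.5, 14.8, 16.61])
ts_db = np.array([-51.2, -50.3, -49.5, -48.6, -47.8])

# One-parameter model TS = 20*log10(L) + b20: with the slope fixed at 20,
# the least-squares estimate of b20 is the mean residual.
b20 = np.mean(ts_db - 20.0 * np.log10(length_cm))
print(f"b20 = {b20:.2f} dB")

# Predict TS for a 14 cm fish under the fitted model.
ts_pred = 20.0 * np.log10(14.0) + b20
```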

13 pages, 1780 KiB  
Article
Benefits of Harmonicity for Hearing in Noise Are Limited to Detection and Pitch-Related Discrimination Tasks
by Neha Rajappa, Daniel R. Guest and Andrew J. Oxenham
Biology 2023, 12(12), 1522; https://doi.org/10.3390/biology12121522 - 13 Dec 2023
Cited by 2 | Viewed by 1687
Abstract
Harmonic complex tones are easier to detect in noise than inharmonic complex tones, providing a potential perceptual advantage in complex auditory environments. Here, we explored whether the harmonic advantage extends to other auditory tasks that are important for navigating a noisy auditory environment, such as amplitude- and frequency-modulation detection. Sixty young normal-hearing listeners were tested, divided into two equal groups with and without musical training. Consistent with earlier studies, harmonic tones were easier to detect in noise than inharmonic tones, with a signal-to-noise ratio (SNR) advantage of about 2.5 dB, and the pitch discrimination of the harmonic tones was more accurate than that of inharmonic tones, even after differences in audibility were accounted for. In contrast, neither amplitude- nor frequency-modulation detection was superior with harmonic tones once differences in audibility were accounted for. Musical training was associated with better performance only in pitch-discrimination and frequency-modulation-detection tasks. The results confirm a detection and pitch-perception advantage for harmonic tones but reveal that the harmonic benefits do not extend to suprathreshold tasks that do not rely on extracting the fundamental frequency. A general theory is proposed that may account for the effects of both noise and memory on pitch-discrimination differences between harmonic and inharmonic tones. Full article
(This article belongs to the Special Issue Neural Correlates of Perception in Noise in the Auditory System)
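Inharmonic comparison tones of the kind used in such studies are typically built by jittering the partial frequencies of a harmonic complex, which destroys the waveform periodicity that fundamental-frequency extraction exploits. A minimal sketch (the f0, partial count, and jitter range are assumptions, not the study's stimulus parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur, f0 = 16000, 0.5, 200.0
t = np.arange(int(fs * dur)) / fs

def complex_tone(partial_freqs):
    """Sum of equal-amplitude sine partials at the given frequencies."""
    return sum(np.sin(2 * np.pi * f * t) for f in partial_freqs)

harmonics = f0 * np.arange(1, 11)           # 200, 400, ..., 2000 Hz
jitter = rng.uniform(-0.3, 0.3, 10) * f0    # breaks the harmonic relation
harmonic_tone = complex_tone(harmonics)
inharmonic_tone = complex_tone(harmonics + jitter)

# Periodicity check: autocorrelation at a lag of one f0 period.
lag = int(fs / f0)  # 80 samples
def lag_corr(x):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

corr_harm = lag_corr(harmonic_tone)       # ~1: fully periodic
corr_inharm = lag_corr(inharmonic_tone)   # clearly lower: aperiodic
print(f"lag-80 autocorr: harmonic {corr_harm:.2f}, inharmonic {corr_inharm:.2f}")
```

The collapsed periodicity of the jittered tone is one way to see why tasks that rely on extracting the fundamental (detection, pitch discrimination) show a harmonic advantage while envelope-based tasks need not.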

10 pages, 1294 KiB  
Article
Advances in Clinical Voice Quality Analysis with VOXplot
by Ben Barsties v. Latoszek, Jörg Mayer, Christopher R. Watts and Bernhard Lehnert
J. Clin. Med. 2023, 12(14), 4644; https://doi.org/10.3390/jcm12144644 - 12 Jul 2023
Cited by 14 | Viewed by 3013
Abstract
Background: In standard clinical practice, voice quality is assessed perceptually, often complemented by acoustic analysis of digital voice recordings to validate and further interpret perceptual judgments. The goal of the present study was to determine the strongest acoustic voice quality parameters for perceived hoarseness and breathiness when analyzing the sustained vowel [a:] using a new clinical acoustic tool, the VOXplot software. Methods: A total of 218 voice samples of individuals with and without voice disorders were subjected to perceptual and acoustic analyses. Overall, 13 single acoustic parameters were included to determine validity aspects in relation to perceptions of hoarseness and breathiness. Results: Four single acoustic measures could be clearly associated with perceptions of hoarseness or breathiness. For hoarseness, the harmonics-to-noise ratio (HNR) and the pitch perturbation quotient with a smoothing factor of five periods (PPQ5), and, for breathiness, the smoothed cepstral peak prominence (CPPS) and the glottal-to-noise excitation ratio (GNE) were shown to be highly valid, with a significant difference being demonstrated for each of the other perceptual voice quality aspects. Conclusions: Two acoustic measures, the HNR and the PPQ5, were strongly associated with perceptions of hoarseness and were able to discriminate hoarseness from breathiness with good confidence. Two other acoustic measures, the CPPS and the GNE, were strongly associated with perceptions of breathiness and were able to discriminate breathiness from hoarseness with good confidence. Full article
(This article belongs to the Special Issue New Advances in the Management of Voice Disorders)
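PPQ5 quantifies cycle-to-cycle pitch perturbation: each glottal period is compared with the average of itself and its four neighbors, and the mean absolute deviation is normalized by the mean period. The sketch below implements one common form of this definition on synthetic period sequences; the period noise levels are invented, and real analyses extract periods from the recorded waveform first.

```python
import numpy as np

def ppq5(periods):
    """Five-point Period Perturbation Quotient (%): mean absolute
    deviation of each period from its centered 5-period average,
    normalized by the mean period (one common definition)."""
    p = np.asarray(periods, dtype=float)
    devs = [abs(p[i] - p[i - 2:i + 3].mean()) for i in range(2, len(p) - 2)]
    return 100.0 * np.mean(devs) / p.mean()

rng = np.random.default_rng(4)
base = 1.0 / 120.0                                  # ~120 Hz phonation
steady = base * (1 + rng.normal(0, 0.001, 200))     # near-periodic voice
hoarse = base * (1 + rng.normal(0, 0.02, 200))      # strongly perturbed voice
print(f"steady PPQ5 {ppq5(steady):.3f}%  vs  hoarse PPQ5 {ppq5(hoarse):.3f}%")
```

Higher PPQ5 reflects greater period irregularity, which is why it tracks perceived hoarseness.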

24 pages, 10865 KiB  
Article
LiDAR Odometry and Mapping Based on Neighborhood Information Constraints for Rugged Terrain
by Gang Wang, Xinyu Gao, Tongzhou Zhang, Qian Xu and Wei Zhou
Remote Sens. 2022, 14(20), 5229; https://doi.org/10.3390/rs14205229 - 19 Oct 2022
Cited by 3 | Viewed by 3151
Abstract
The simultaneous localization and mapping (SLAM) method estimates a vehicle’s pose and builds maps based on environmental information collected primarily through sensors such as LiDAR and cameras. Compared to camera-based SLAM, LiDAR-based SLAM is better suited to complicated environments and is not susceptible to weather and illumination, and it has increasingly become a hot topic in autonomous driving. However, there has been relatively little research on LiDAR-based SLAM algorithms in rugged scenes. The following two issues remain unsolved: on the one hand, the small overlap area of two adjacent point clouds means that insufficient valuable features can be extracted; on the other hand, the conventional feature matching method does not take point cloud pitching into account, which frequently results in matching failure. Hence, a LiDAR SLAM algorithm based on neighborhood information constraints (LoNiC) for rugged terrain is proposed in this study. Firstly, we obtain feature points with surface information using the distribution of normal vector angles in the neighborhood and extract discriminative features from the local surface information of the point cloud, to improve the descriptive ability of feature points in rugged scenes. Secondly, we provide a multi-scale constraint description based on point cloud curvature, normal vector angle, and Euclidean distance to enhance the algorithm’s discrimination of differences between feature points and prevent mis-registration. Subsequently, in order to lessen the impact of the initial pose estimate on the precision of point cloud registration, we introduce a dynamic iteration factor to the registration process and modify the correspondence of matching point pairs by adjusting the distance and angle thresholds. Finally, experiments on the KITTI and JLU campus datasets verify that the proposed algorithm significantly improves mapping accuracy. Specifically, in rugged scenes, the mean relative translation error is 0.0173% and the mean relative rotation error is 2.8744°/m, matching current state-of-the-art (SOTA) methods. Full article

21 pages, 2045 KiB  
Article
Chasing Flies: The Use of Wingbeat Frequency as a Communication Cue in Calyptrate Flies (Diptera: Calyptratae)
by Julie Pinto, Paola A. Magni, R. Christopher O’Brien and Ian R. Dadour
Insects 2022, 13(9), 822; https://doi.org/10.3390/insects13090822 - 9 Sep 2022
Cited by 3 | Viewed by 4523
Abstract
The incidental sound produced by the oscillation of insect wings during flight provides an opportunity for species identification. Calyptrate flies include some of the fastest and most agile flying insects, capable of rapid changes in direction and the fast pursuit of conspecifics. This flight pattern makes the continuous and close recording of their wingbeat frequency difficult and limited to confined specimens. Advances in sound editor and analysis software, however, have made it possible to isolate low amplitude sounds using noise reduction and pitch detection algorithms. To explore differences in wingbeat frequency between genera and sex, 40 specimens of three-day old Sarcophaga crassipalpis, Lucilia sericata, Calliphora dubia, and Musca vetustissima were individually recorded in free flight in a temperature-controlled room. Results showed significant differences in wingbeat frequency between the four species and intersexual differences for each species. Discriminant analysis classifying the three carrion flies resulted in 77.5% classified correctly overall, with the correct classification of 82.5% of S. crassipalpis, 60% of C. dubia, and 90% of L. sericata, when both mean wingbeat frequency and sex were included. Intersexual differences were further demonstrated by male flies showing significantly higher variability than females in three of the species. These observed intergeneric and intersexual differences in wingbeat frequency start the discussion on the use of the metric as a communication signal by this taxon. The success of the methodology demonstrated differences at the genus level and encourages the recording of additional species and the use of wingbeat frequency as an identification tool for these flies. Full article
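In the simplest case, estimating a wingbeat frequency from a recording reduces to finding the dominant spectral peak within a plausible frequency band. A minimal sketch on a synthetic recording; the 190 Hz fundamental and the 80-400 Hz search band are invented values, not the study's measurements or species ranges.

```python
import numpy as np

fs, dur = 8000, 1.0
t = np.arange(int(fs * dur)) / fs

# Synthetic wingbeat recording: a ~190 Hz fundamental plus a harmonic,
# buried in broadband noise (values are illustrative, not measured).
rng = np.random.default_rng(5)
signal = (np.sin(2 * np.pi * 190 * t)
          + 0.5 * np.sin(2 * np.pi * 380 * t)
          + 0.8 * rng.normal(size=t.size))

# Estimate the wingbeat frequency as the strongest spectral peak
# within an assumed plausible band for these flies.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
band = (freqs >= 80) & (freqs <= 400)
wingbeat_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated wingbeat frequency: {wingbeat_hz:.0f} Hz")
```

A one-second analysis window gives 1 Hz spectral resolution, which is why longer continuous recordings of free flight are valuable for this kind of identification.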
13 pages, 9710 KiB  
Article
Motor Influence in Developing Auditory Spatial Cognition in Hemiplegic Children with and without Visual Field Disorder
by Elena Aggius-Vella, Monica Gori, Claudio Campus, Stefania Petri and Francesca Tinelli
Children 2022, 9(7), 1055; https://doi.org/10.3390/children9071055 - 15 Jul 2022
Viewed by 2149
Abstract
Spatial representation is a crucial skill for everyday interaction with the environment. Different factors, such as body movements and vision, seem to influence spatial perception. However, it is still unknown whether motor impairment affects the building of simple spatial percepts. To investigate this point, we tested hemiplegic children with (HV) and without (H) visual field disorders in auditory and visual-spatial localization tasks and a pitch discrimination task. Fifteen hemiplegic children (nine H and six HV) and twenty children with typical development took part in the experiment. The tasks consisted of listening to a sound coming from one of a series of speakers positioned at the front or back of the subject. In one condition, subjects were asked to discriminate the pitch, while in the other, subjects had to localize the position of the sound. We also replicated the spatial task in the visual modality. Both groups of hemiplegic children performed worse than controls in the auditory spatial localization task, while no difference was found in the pitch discrimination task. In the visual-spatial localization task, only the HV children differed from the other two groups. These results suggest that movement is important for the development of auditory spatial representation. Full article
(This article belongs to the Section Pediatric Neurology & Neurodevelopmental Disorders)
12 pages, 825 KiB  
Article
Head Pitch Angular Velocity Discriminates (Sub-)Acute Neck Pain Patients and Controls Assessed with the DidRen Laser Test
by Renaud Hage, Fabien Buisseret, Martin Houry and Frédéric Dierick
Sensors 2022, 22(7), 2805; https://doi.org/10.3390/s22072805 - 6 Apr 2022
Cited by 5 | Viewed by 3801
Abstract
Understanding neck pain is an important societal issue. Kinematic data from sensors may help to gain insight into the pathophysiological mechanisms associated with neck pain through a quantitative sensorimotor assessment of individual patients. The objective of this study was to evaluate the potential usefulness of several machine learning (ML) algorithms in assessing neck sensorimotor performance. Angular velocity and acceleration, measured by an inertial sensor placed on the forehead during the DidRen laser test, were compared between thirty-eight acute and subacute non-specific neck pain (ANSP) patients and forty-two healthy control participants (HCP). Seven supervised ML algorithms were chosen for the predictions, and the most informative kinematic features were computed using Sequential Feature Selection methods. The best-performing algorithm was the linear Support Vector Machine, with an accuracy of 82% and an area under the curve of 84%. The kinematic feature that best discriminated ANSP patients from HCP was the first quartile of head pitch angular velocity. This study showed that supervised ML algorithms can classify ANSP patients and identify discriminatory kinematic features potentially useful to clinicians in assessing and monitoring neck sensorimotor performance in ANSP patients. Full article
(This article belongs to the Special Issue Wearable Sensors Applied in Movement Analysis)
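The pipeline this abstract describes, sequential feature selection feeding a linear SVM, can be sketched as follows. Everything here is a hedged illustration on synthetic data: the feature names, group means, and effect sizes are invented, with only the group sizes (38 patients, 42 controls) and the idea that the first quartile of head pitch angular velocity is the informative feature taken from the abstract.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# 1 = ANSP patient, 0 = healthy control (group sizes from the abstract).
y = np.array([1] * 38 + [0] * 42)

# Hypothetical kinematic features: only "pitch_vel_q1" is built to be
# informative, mimicking the reported discriminative feature; the five
# others are pure noise. Units and effect size are illustrative.
pitch_vel_q1 = rng.normal(40 - 10 * y, 5)        # deg/s, lower in patients
noise = rng.normal(0, 1, size=(len(y), 5))       # uninformative features
X = np.column_stack([pitch_vel_q1, noise])
feature_names = ["pitch_vel_q1", "f1", "f2", "f3", "f4", "f5"]

# Forward sequential feature selection wrapped around a linear SVM.
svm = make_pipeline(StandardScaler(), LinearSVC())
sfs = SequentialFeatureSelector(svm, n_features_to_select=2, direction="forward")
sfs.fit(X, y)

selected = [feature_names[i] for i in np.flatnonzero(sfs.get_support())]
acc = cross_val_score(svm, X[:, sfs.get_support()], y, cv=5).mean()
print(selected, f"accuracy={acc:.2f}")
```

On this toy data the selector recovers the informative feature first, which is the mechanism by which the study could single out head pitch angular velocity; real kinematic features are correlated and noisier, hence the more modest 82% reported.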