Search Results (102)

Search Parameters:
Keywords = statistical outlier removal

16 pages, 275 KiB  
Article
Distinguishing Dyslexia, Attention Deficit, and Learning Disorders: Insights from AI and Eye Movements
by Alae Eddine El Hmimdi and Zoï Kapoula
Bioengineering 2025, 12(7), 737; https://doi.org/10.3390/bioengineering12070737 - 5 Jul 2025
Viewed by 432
Abstract
This study investigates whether eye movement abnormalities can differentiate between distinct clinical annotations of dyslexia, attention deficit, or school learning difficulties in children. Utilizing a selection of saccade and vergence eye movement data from a large clinical dataset recorded across 20 European centers using the REMOBI and AIDEAL technologies, this study focuses on individuals annotated with only one of the three annotations. The selected dataset includes 355 individuals for saccade tests and 454 for vergence tasks. Eye movement analysis was performed with AIDEAL software. Key parameters, such as amplitude, latency, duration, and velocity, are extracted and processed to remove outliers and standardize values. Machine learning models, including logistic regression, random forest, support vector machines, and neural networks, are trained using a GroupKFold strategy to ensure each patient's data appear in either the training set or the test set, but never both. Results from the machine learning models revealed that children annotated solely with dyslexia could be successfully identified based on their saccade and vergence eye movements, while identification of the other two categories was less distinct. Statistical evaluation using the Kruskal–Wallis test highlighted significant group mean differences in several saccade parameters, such as velocity and latency, particularly for the dyslexia group relative to the other two groups. These findings suggest that specific terminology, such as “dyslexia”, may capture unique eye movement patterns, underscoring the importance of eye movement analysis as a diagnostic tool for understanding the complexity of these conditions. This study emphasizes the potential of eye movement analysis in refining diagnostic precision and capturing the nuanced differences between dyslexia, attention deficits, and general learning difficulties. Full article
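The grouped cross-validation step described above can be sketched as follows; the toy features, labels, and patient IDs here are placeholders for illustration, not the study's clinical data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
# Toy stand-in for eye-movement features: 60 recordings from 12 patients.
X = rng.normal(size=(60, 4))            # e.g. amplitude, latency, duration, velocity
y = rng.integers(0, 2, size=60)         # e.g. dyslexia vs. other annotation
groups = np.repeat(np.arange(12), 5)    # patient ID for each recording

gkf = GroupKFold(n_splits=4)
for train_idx, test_idx in gkf.split(X, y, groups):
    # A patient never appears in both the training and the test fold.
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    fold_accuracy = model.score(X[test_idx], y[test_idx])
```

GroupKFold is what prevents recordings from the same patient leaking across the split; a plain KFold would not give that guarantee.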

25 pages, 9860 KiB  
Article
Indoor Dynamic Environment Mapping Based on Semantic Fusion and Hierarchical Filtering
by Yiming Li, Luying Na, Xianpu Liang and Qi An
ISPRS Int. J. Geo-Inf. 2025, 14(7), 236; https://doi.org/10.3390/ijgi14070236 - 21 Jun 2025
Viewed by 680
Abstract
To address the challenges of dynamic object interference and redundant information representation in map construction for indoor dynamic environments, this paper proposes an indoor dynamic environment mapping method based on semantic fusion and hierarchical filtering. First, prior dynamic object masks are obtained using the YOLOv8 model, and geometric constraints between prior static objects and dynamic regions are introduced to identify non-prior dynamic objects, thereby eliminating all dynamic features (both prior and non-prior). Second, an initial semantic point cloud map is constructed by integrating prior static features from a semantic segmentation network with pose estimates from an RGB-D camera. Dynamic noise is then removed using statistical outlier removal (SOR) filtering, while voxel filtering optimizes point cloud density, generating a compact yet texture-rich semantic dense point cloud map with minimal dynamic artifacts. Subsequently, a multi-resolution semantic octree map is built using a recursive spatial partitioning algorithm. Finally, point cloud poses are corrected via Transform Frame (TF) transformation, and a 2D traversability grid map is generated using passthrough filtering and grid projection. Experimental results demonstrate that the proposed method constructs multi-level semantic maps with rich information, clear structure, and high reliability in indoor dynamic scenarios. Additionally, the map file size is compressed by 50–80%, significantly enhancing the reliability of mobile robot navigation and the efficiency of path planning. Full article
(This article belongs to the Special Issue Indoor Mobile Mapping and Location-Based Knowledge Services)
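Statistical outlier removal (SOR), used above to strip dynamic noise from the point cloud, can be sketched in a few lines. This is a from-scratch NumPy version of the standard k-nearest-neighbour formulation on toy data (PCL and Open3D provide production implementations; the brute-force distance matrix here is only workable for small clouds):

```python
import numpy as np

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours exceeds the global mean by std_ratio std devs."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(dist, axis=1)[:, 1:k + 1]   # skip each point's zero self-distance
    mean_knn = knn.mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

rng = np.random.default_rng(1)
surface = rng.normal(scale=0.05, size=(200, 3))   # dense static structure
ghosts = rng.uniform(-2.0, 2.0, size=(10, 3))     # sparse dynamic residue
filtered = sor_filter(np.vstack([surface, ghosts]))
```

Real pipelines replace the pairwise distance matrix with a k-d tree query, which is what makes SOR practical on full indoor scans.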

21 pages, 8909 KiB  
Article
A Methodology for Acceleration Signals Segmentation During Forming Regular Reliefs Patterns on Planar Surfaces by Ball Burnishing Operation
by Stoyan Dimitrov Slavov and Georgi Venelinov Valchev
J. Manuf. Mater. Process. 2025, 9(6), 181; https://doi.org/10.3390/jmmp9060181 - 29 May 2025
Viewed by 588
Abstract
In the present study, an approach for determining the different states of ball burnishing (BB) operations aimed at forming regular relief patterns on planar surfaces is introduced. The methodology involves acquiring multi-axis accelerometer data from a CNC-driven milling machine to capture the dynamics of the BB tool and workpiece, mounted on the machine table. Following data acquisition from an AISI 304 stainless steel workpiece, which is subjected to BB treatments at different toolpaths and feed rates, the recorded signals are preprocessed through noise reduction techniques, DC component removal, and outlier correction. The refined data are then transformed using a root mean square (RMS) operation to simplify further analysis. A Gaussian Mixture Model (GMM) is subsequently employed to decompose the compressed RMS signal into distinct components corresponding to various operational states during BB. The experimental trials at feed rates of 500 and 1000 mm/min reveal that increased feed rates enhance the distinguishability of these states, thus leading to an augmented number of statistically significant components. The results obtained from the proposed GMM-based algorithm applied to the compressed RMS acceleration signals are compared with two other methods, i.e., the Short-Time Fourier Transform and the Continuous Wavelet Transform. The comparison shows that the proposed GMM method has the advantage of segmenting three to five different states of the BB process from the measured nonstationary acceleration signals, while the other tested methods can only distinguish between the working state of the deforming tool and its rapid (re-)positioning between working areas, when there is no contact between the BB tool and the workpiece. Full article
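The RMS compression and GMM decomposition can be sketched as follows; the synthetic three-state acceleration trace and the window length are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Toy acceleration record: idle, tool-workpiece contact, rapid repositioning.
signal = np.concatenate([
    rng.normal(0, 0.1, 2000),   # idle
    rng.normal(0, 1.0, 4000),   # burnishing contact (high vibration)
    rng.normal(0, 0.4, 2000),   # rapid (re-)positioning
])

# Windowed RMS compresses the raw signal before mixture modelling.
win = 50
rms = np.sqrt(np.mean(signal[:len(signal) // win * win].reshape(-1, win) ** 2, axis=1))

# One GMM component per operational state; predict() labels each RMS window.
gmm = GaussianMixture(n_components=3, random_state=0).fit(rms.reshape(-1, 1))
states = gmm.predict(rms.reshape(-1, 1))
```

Because the three RMS levels are well separated, each mixture component ends up covering one contiguous operational phase of the trace.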

18 pages, 1131 KiB  
Article
Analyzing and Predicting the Agronomic Effectiveness of Fertilizers Derived from Food Waste Using Data-Driven Models
by Ksawery Kuligowski, Quoc Ba Tran, Chinh Chien Nguyen, Piotr Kaczyński, Izabela Konkol, Lesław Świerczek, Adam Cenian and Xuan Cuong Nguyen
Appl. Sci. 2025, 15(11), 5999; https://doi.org/10.3390/app15115999 - 26 May 2025
Viewed by 611
Abstract
This study evaluates and estimates the agronomic effectiveness of food waste-derived fertilizers by analyzing plant yield and the internal efficiency of nitrogen utilization (IENU) via statistical and machine learning models. A dataset of 448 cases from various food waste treatments gathered from our experiments and the existing literature was analyzed. Plant yield and IENU exhibited substantial variability, averaging 2268 ± 3099 kg/ha and 32.3 ± 92.5 kg N/ha, respectively. Ryegrass dominated (73.77%), followed by unspecified grass (10.76%), oats (4.87%), and lettuce (2.02%). Correlation analysis revealed that decomposition duration positively influenced plant yield and IENU (r = 0.42 and 0.44), while temperature and volatile solids had negative correlations. Machine learning models outperformed linear regression in predicting plant yield and IENU, especially after preprocessing to remove missing values and outliers. Random Forest and Cubist models showed strong generalization with high R2 (0.79–0.83) for plant yield, while Cubist predicted IENU well in testing, with RMSE = 3.83 and R2 = 0.78. These findings highlight machine learning’s ability to analyze complex datasets, improve agricultural decision-making, and optimize food waste utilization. Full article
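The preprocessing-then-model pipeline can be sketched like this; the synthetic agronomic features and the 1.5×IQR rule are illustrative assumptions (the abstract does not specify the exact outlier rule used):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 300
X = rng.normal(size=(n, 3))                             # e.g. scaled N dose, temperature, duration
y = 5 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, n)   # toy yield response
y[::25] += 50.0                                         # inject gross recording outliers

# 1.5*IQR rule on the target: keep values inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
q1, q3 = np.percentile(y, [25, 75])
iqr = q3 - q1
keep = (y >= q1 - 1.5 * iqr) & (y <= q3 + 1.5 * iqr)
Xc, yc = X[keep], y[keep]

Xtr, Xte, ytr, yte = train_test_split(Xc, yc, random_state=0)
r2 = RandomForestRegressor(random_state=0).fit(Xtr, ytr).score(Xte, yte)
```

Without the IQR filter, the injected spikes dominate the squared-error objective and the held-out R² collapses; with it, the forest recovers the underlying response.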

22 pages, 4711 KiB  
Article
Research on Missing Data Estimation Method for UPFC Submodules Based on Bayesian Multiple Imputation and Support Vector Machines
by Xiaoming Yu, Jun Wang, Ke Zhang, Zhijun Chen, Ming Tong, Sibo Sun, Jiapeng Shen, Li Zhang and Chuyang Wang
Energies 2025, 18(10), 2535; https://doi.org/10.3390/en18102535 - 14 May 2025
Viewed by 397
Abstract
With the increasing complexity of power systems, the monitoring data of UPFC submodules suffers from high missing rates due to sensor failures and environmental interference, significantly limiting equipment condition assessment and fault warning capabilities. To overcome the computational complexity, poor real-time performance, and limited generalization of existing methods like GRU-GAN and SOM-LSTM, this study proposes a hybrid framework combining Bayesian multiple imputation with a Support Vector Machine (SVM) for data repair. The framework first employs an adaptive Kalman filter to denoise raw data and remove outliers, followed by Bayesian multiple imputation that constructs posterior distributions using normal linear correlations between historical and operational data, generating optimized imputed values through arithmetic averaging. A kernel-based SVM with RBF and soft margin optimization is then applied for nonlinear calibration to enhance robustness and consistency in high-dimensional scenarios. Experimental validation focusing on capacitor voltage, current, and temperature parameters of UPFC submodules under a 50% missing data scenario demonstrates that the proposed method achieves an 18.7% average error reduction and approximately 30% computational efficiency improvement compared to single imputation and traditional multiple imputation approaches, significantly outperforming neural network models. This study confirms the effectiveness of integrating Bayesian statistics with machine learning for power data restoration, providing a high-precision and low-complexity solution for equipment condition monitoring in complex operational environments. Future research will explore dynamic weight optimization and extend the framework to multi-source heterogeneous data applications. Full article
(This article belongs to the Special Issue Reliability of Power Electronics Devices and Converter Systems)
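A minimal sketch of the imputation idea, drawing multiple imputations from a normal linear model and averaging them, is shown below. It is a fixed-parameter simplification (full Bayesian multiple imputation would also draw the regression coefficients from their posterior), and the variable names and scales are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
volt = rng.normal(1.6, 0.05, n)                  # submodule capacitor voltage (toy units)
temp = 25 + 40 * volt + rng.normal(0, 0.5, n)    # temperature correlated with voltage
mask = rng.random(n) < 0.5                       # 50% of temperature readings missing

# Fit a normal linear model temp ~ volt on the observed half.
obs = ~mask
A = np.column_stack([np.ones(obs.sum()), volt[obs]])
coef, *_ = np.linalg.lstsq(A, temp[obs], rcond=None)
sigma = (temp[obs] - A @ coef).std()

# Draw m imputations from the predictive distribution, then average them,
# mirroring the arithmetic averaging of imputed values described above.
m = 20
pred = coef[0] + coef[1] * volt[mask]
draws = pred + rng.normal(0, sigma, size=(m, mask.sum()))
imputed = draws.mean(axis=0)
err = np.abs(imputed - temp[mask]).mean()        # error vs. the held-out truth
```

The SVM calibration stage of the paper would then refine these averaged imputations; it is omitted here to keep the sketch focused on the multiple-imputation step.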

23 pages, 384 KiB  
Article
Robust Method for Confidence Interval Estimation in Outlier-Prone Datasets: Application to Molecular and Biophysical Data
by Victor V. Golovko
Biomolecules 2025, 15(5), 704; https://doi.org/10.3390/biom15050704 - 12 May 2025
Viewed by 810
Abstract
Estimating confidence intervals in small or noisy datasets is a recurring challenge in biomolecular research, particularly when data contain outliers or exhibit high variability. This study introduces a robust statistical method that combines a hybrid bootstrap procedure with Steiner’s most frequent value (MFV) approach to estimate confidence intervals without removing outliers or altering the original dataset. The MFV technique identifies the most representative value while minimizing information loss, making it well suited for datasets with limited sample sizes or non-Gaussian distributions. To demonstrate the method’s robustness, we intentionally selected a dataset from outside the biomolecular domain: a fast-neutron activation cross-section of the ¹⁰⁹Ag(n,2n)¹⁰⁸ᵐAg reaction from nuclear physics. This dataset presents large uncertainties, inconsistencies, and known evaluation difficulties. Confidence intervals for the cross-section were determined using the MFV–hybrid parametric bootstrapping (MFV-HPB) framework. In this approach, the original data points were repeatedly resampled, and new values were simulated based on their uncertainties before the MFV was calculated. Despite the dataset’s complexity, the method yielded a stable MFV estimate of 709 mb with a 68.27% confidence interval of [691, 744] mb, illustrating the method’s ability to provide interpretable results in challenging scenarios. Although the example is from nuclear science, the same statistical issues commonly arise in biomolecular fields, such as enzymatic kinetics, molecular assays, and diagnostic biomarker studies. The MFV-HPB framework provides a reliable and generalizable approach for extracting central estimates and confidence intervals in situations where data are difficult to collect, replicate, or interpret. Its resilience to outliers, independence from distributional assumptions, and compatibility with small-sample scenarios make it particularly valuable in molecular medicine, bioengineering, and biophysics. Full article
(This article belongs to the Topic Bioinformatics in Drug Design and Discovery—2nd Edition)
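The MFV-HPB idea, nonparametric resampling, then parametric perturbation by each point's quoted uncertainty, then a most-frequent-value estimate per replicate, can be sketched as below. The `mfv` routine here is a simplified iteratively reweighted mean with a fixed dihesion, not Steiner's full adaptive algorithm, and the toy measurements and uncertainties are invented:

```python
import numpy as np

def mfv(x, eps=None, iters=50):
    """Most-frequent-value estimate via an iteratively reweighted mean
    (simplified: fixed dihesion eps instead of Steiner's adaptive scheme)."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)
    if eps is None:
        eps = 1.5 * np.median(np.abs(x - m)) + 1e-9
    for _ in range(iters):
        w = eps**2 / (eps**2 + (x - m) ** 2)   # Cauchy-type weights downweight outliers
        m = np.sum(w * x) / np.sum(w)
    return m

rng = np.random.default_rng(5)
# Toy cross-section measurements (mb) with uncertainties and one gross outlier.
values = np.array([705., 712., 698., 720., 709., 850., 701., 715.])
sigmas = np.array([10., 12., 9., 15., 11., 40., 10., 13.])

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(values), len(values))            # nonparametric resample
    boot.append(mfv(values[idx] + rng.normal(0, sigmas[idx]))) # parametric perturbation
boot = np.sort(boot)
lo, hi = boot[int(0.1587 * len(boot))], boot[int(0.8413 * len(boot))]  # 68.27% interval
center = mfv(values)
```

Note how the 850 mb outlier barely moves the MFV, whereas it would shift a plain mean by almost 18 mb.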

15 pages, 6167 KiB  
Article
Comparison of Sensors for Air Quality Monitoring with Reference Methods in Zagreb, Croatia
by Silvije Davila, Marija Jelena Lovrić Štefiček, Ivan Bešlić, Gordana Pehnec, Marko Marić and Ivana Hrga
Atmosphere 2025, 16(4), 472; https://doi.org/10.3390/atmos16040472 - 18 Apr 2025
Viewed by 493
Abstract
Within the scope of the “Eco Map of Zagreb” project, eight sensor sets (type AQMeshPod) were set up at an automatic measuring station at the Institute for Medical Research and Occupational Health (IMROH) for comparison with reference methods for air quality measurement during 2018. This station is a city background station within the Zagreb network for air quality monitoring, where measurements of SO2, CO, NO2, O3, PM10 and PM2.5 are performed using standardized methods accredited according to EN ISO/IEC 17025. This paper presents a comparison of pollutant mass concentrations determined by sensors with reference methods. The data were compared and filtered to remove outliers and handle deviations between the results obtained by sensors and reference methods, considering the different approaches to gas and PM data. A comparison of sensor results with the reference methods showed large scattering for all gaseous pollutants, while the comparison for PM10 and PM2.5 indicated satisfactorily low dispersion. The results of a regression analysis showed a significant seasonal dependence for all pollutants. Significant statistical differences between the reference methods and sensors for the whole year and in all seasons for all gas pollutants, as well as for PM10, were observed, while for PM2.5 statistical significance showed varying results. Full article
(This article belongs to the Special Issue Feature Papers in Atmospheric Techniques, Instruments, and Modeling)

19 pages, 3872 KiB  
Article
GNSS-Based Monitoring Methods for Mining Headframes
by Xu Yang, Zhe Zhou, Yanzhao Yang, Xinxin Yao, Chao Liu, Lei Liu and Shicheng Xie
Appl. Sci. 2025, 15(8), 4368; https://doi.org/10.3390/app15084368 - 15 Apr 2025
Viewed by 431
Abstract
This study introduces an innovative GNSS-based monitoring system designed to evaluate deformation in mining headframes, effectively addressing the limitations of traditional methods, such as inadequate real-time capabilities and complex data processing requirements. The research was conducted at the Liuzhuang Mine in Anhui Province, China, where a monitoring network was established, consisting of one reference station and eight GNSS stations strategically positioned on sheave platforms and structural supports. Over a period of 66 days, high-frequency 3D deformation data were collected and processed using advanced methodologies, including cubic spline interpolation, generalized extreme studentized deviate (GESD) outlier removal, and Gaussian filtering. Spatiotemporal analysis, employing the “base state with amendments” model, indicated that 90% of the deformations (ΔX, ΔY, ΔH) were confined within ±8 mm, with more significant fluctuations observed near the sheave wheels due to mechanical stress. Correlation analysis identified the distance to the sheave wheel as the primary factor influencing horizontal deformation, with Pearson correlation coefficients exceeding 0.67, while vertical settlement remained stable. Risk thresholds, derived from statistical fluctuations, demonstrated that 99.2% of the data fell within safe limits during validation. In comparison to traditional approaches, the GNSS system delivers enhanced precision, real-time functionality, and a decreased field workload. This study presents a scalable framework for assessing headframe safety and guides the optimization of sensor placement in analogous mining settings. It is proposed that future integration with multi-source sensors, such as inertial navigation systems, will further augment monitoring robustness. Full article
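The GESD outlier-removal step follows Rosner's standard procedure; a sketch on a synthetic displacement series is given below (the injected jumps and series length are illustrative, not the mine's monitoring data):

```python
import numpy as np
from scipy import stats

def gesd_outliers(x, max_outliers=5, alpha=0.05):
    """Rosner's generalized extreme Studentized deviate (GESD) test:
    return the indices of the detected outliers."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    work_x, work_i = x.copy(), np.arange(n)
    R, lam, removed = [], [], []
    for i in range(1, max_outliers + 1):
        m, s = work_x.mean(), work_x.std(ddof=1)
        j = int(np.argmax(np.abs(work_x - m)))
        R.append(abs(work_x[j] - m) / s)          # test statistic R_i
        removed.append(int(work_i[j]))
        work_x = np.delete(work_x, j)
        work_i = np.delete(work_i, j)
        p = 1 - alpha / (2 * (n - i + 1))
        t = stats.t.ppf(p, n - i - 1)
        lam.append((n - i) * t / np.sqrt((n - i - 1 + t**2) * (n - i + 1)))
    k = 0
    for i in range(max_outliers):
        if R[i] > lam[i]:
            k = i + 1          # number of outliers = largest i with R_i > lambda_i
    return removed[:k]

rng = np.random.default_rng(6)
series = rng.normal(0.0, 1.0, 100)   # e.g. detrended daily deformation residuals (mm)
series[[10, 40]] += 8.0              # two injected displacement jumps
outliers = gesd_outliers(series)
```

Unlike a single Grubbs test, GESD remains valid when several outliers mask one another, which matters for multi-day GNSS records.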

18 pages, 12759 KiB  
Article
Validation of Inland Water Surface Elevation from SWOT Satellite Products: A Case Study in the Middle and Lower Reaches of the Yangtze River
by Yao Zhao, Jun’e Fu, Zhiguo Pang, Wei Jiang, Pengjie Zhang and Zixuan Qi
Remote Sens. 2025, 17(8), 1330; https://doi.org/10.3390/rs17081330 - 8 Apr 2025
Cited by 2 | Viewed by 1772
Abstract
The Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA and several international collaboration agencies, aims to achieve high-resolution two-dimensional observations of global surface water. Equipped with the advanced Ka-band radar interferometer (KaRIn), it significantly enhances the ability to monitor surface water and provides a new data source for obtaining large-scale water surface elevation (WSE) data at high temporal and spatial resolution. However, the accuracy and applicability of its scientific data products for inland water bodies still require validation. This study obtained three scientific data products from the SWOT satellite between August 2023 and December 2024: the Level 2 KaRIn high-rate river single-pass vector product (L2_HR_RiverSP), the Level 2 KaRIn high-rate lake single-pass vector product (L2_HR_LakeSP), and the Level 2 KaRIn high-rate water mask pixel cloud product (L2_HR_PIXC). These were compared with in situ water level data to validate their accuracy in retrieving inland water levels across eight different regions in the middle and lower reaches of the Yangtze River (MLRYR) and to evaluate the applicability of each product. The experimental results show the following: (1) The inversion accuracy of L2_HR_RiverSP and L2_HR_LakeSP varies significantly across different regions. In some areas, the extracted WSE aligns closely with the in situ water level trend, with a coefficient of determination (R2) exceeding 0.9, while in other areas, the R2 is lower (less than 0.8), and the error compared to in situ water levels is larger (with Root Mean Square Error (RMSE) greater than 1.0 m). (2) This study proposes a combined denoising method based on the Interquartile Range (IQR) and Adaptive Statistical Outlier Removal (ASOR). 
Compared to the L2_HR_RiverSP and L2_HR_LakeSP products, the L2_HR_PIXC product, after denoising, shows significant improvements in all accuracy metrics for water level inversion, with R2 greater than 0.85, Mean Absolute Error (MAE) less than 0.4 m, and RMSE less than 0.5 m. Overall, the SWOT satellite demonstrates the capability to monitor inland water bodies with high precision, especially through the L2_HR_PIXC product, which shows broader application potential and will play an important role in global water dynamics monitoring and refined water resource management research. Full article
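The combined denoising idea can be sketched as follows. Since the abstract does not spell out the ASOR stage, the second stage here is an iterative robust sigma-clipping stand-in, and the pixel heights are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy KaRIn pixel-cloud heights over a reach: true WSE 21.30 m plus noise,
# contaminated by layover/land pixels.
h = rng.normal(21.30, 0.15, 5000)
h[:250] = rng.normal(26.0, 2.0, 250)

# Stage 1: coarse IQR fence.
q1, q3 = np.percentile(h, [25, 75])
fence = 1.5 * (q3 - q1)
s = h[(h >= q1 - fence) & (h <= q3 + fence)]

# Stage 2 (stand-in for ASOR): iterate robust sigma clipping until no pixel
# lies beyond 3 robust standard deviations (1.4826 * MAD) of the median.
while True:
    med = np.median(s)
    mad = 1.4826 * np.median(np.abs(s - med))
    keep = np.abs(s - med) <= 3 * mad
    if keep.all():
        break
    s = s[keep]

wse = np.median(s)   # denoised water surface elevation estimate
```

The coarse fence removes the gross land/layover contamination; the iterative stage then tightens the estimate against the residual tail.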

14 pages, 5344 KiB  
Article
A Novel Two-Stage Superpixel CFAR Method Based on Truncated KDE Model for Target Detection in SAR Images
by Si Li, Hangcheng Wei, Yunlong Mao and Jiageng Fan
Electronics 2025, 14(7), 1327; https://doi.org/10.3390/electronics14071327 - 27 Mar 2025
Viewed by 451
Abstract
Target detection in synthetic aperture radar (SAR) imagery remains a significant technical challenge, particularly in scenarios involving multi-target interference and clutter edge effects that cannot be disregarded, notably in high-resolution imaging applications. To tackle this issue, a novel two-stage superpixel-level constant false-alarm rate (CFAR) detection method based on a truncated kernel density estimation (KDE) model is proposed in this article. The contribution mainly lies in three aspects. First, a truncated KDE model is used to fit the statistical distribution of clutter in the detection window, and adaptive thresholding is used for clutter truncation to remove outliers from the clutter samples while preserving the real clutter. Second, the KDE model is accurately constructed using quartiles of the truncated clutter statistics. Third, target superpixel detection is performed using a two-stage CFAR detection scheme enhanced with local contrast measure (LCM), consisting of a global stage followed by a local stage. In the global detection phase, we identify candidate target superpixels (CTSs) based on the superpixel segmentation results. In the local detection phase, a local CFAR detector using a truncated KDE model is employed to improve the detection process, and further screening is performed on the global detection results combined with local contrast. Experimental results show that the proposed method achieves excellent detection performance, while significantly reducing detection time compared to current popular methods. Full article
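The clutter-truncation-plus-KDE thresholding idea can be sketched like this; the Rayleigh clutter, truncation quantile, and design false-alarm rate are illustrative assumptions, and the grid-based CDF inversion stands in for whatever numerical scheme the paper uses:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)
# Toy clutter amplitudes (Rayleigh-like) with a few strong target returns mixed in.
clutter = rng.rayleigh(1.0, 2000)
samples = np.concatenate([clutter, rng.normal(8.0, 0.5, 20)])

# Clutter truncation: drop samples above an adaptive quantile so that target
# returns do not bias the clutter model.
kept = samples[samples <= np.quantile(samples, 0.98)]

kde = gaussian_kde(kept)                 # truncated-KDE clutter model
grid = np.linspace(0.0, kept.max() * 2, 4000)
cdf = np.cumsum(kde(grid)) * (grid[1] - grid[0])
pfa = 1e-2                               # design false-alarm rate
idx = min(int(np.searchsorted(cdf, 1 - pfa)), len(grid) - 1)
thresh = grid[idx]
detections = samples > thresh            # CFAR decision per sample
```

Without the truncation step, the target returns inflate the KDE's tail and push the threshold up, which is exactly the multi-target masking effect the paper addresses.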

19 pages, 3711 KiB  
Article
A Novel Methodology to Correct Chlorophyll-a Concentrations from Satellite Data and Assess Credible Phenological Patterns
by Irene Biliani, Ekaterini Skamnia, Polychronis Economou and Ierotheos Zacharias
Remote Sens. 2025, 17(7), 1156; https://doi.org/10.3390/rs17071156 - 25 Mar 2025
Viewed by 764
Abstract
Remote sensing data play a crucial role in capturing and evaluating eutrophication, providing a comprehensive view of spatial and temporal variations in water quality parameters. Chlorophyll-a concentration time series analysis aids in understanding the current trophic state of coastal waters and tracking changes over time, enabling the evaluation of water bodies’ trophic status. This research presents a novel and replicable methodology able to derive accurate phenological patterns using remote sensing data. The proposed methodology uses the two-decade MODIS-Aqua surface reflectance dataset, analyzing data from 30 point stations and calculating chlorophyll-a concentrations with NASA’s Ocean Color algorithm. Then, a correction process is implemented through a robust, simple statistical analysis by applying LOESS smoothing to detect and remove outliers from the extensive dataset. Different scenarios are reviewed and compared with field data to calibrate the proposed methodology accurately. The results demonstrate the methodology’s capacity to produce consistent chlorophyll-a time series and to present phenological patterns that can effectively identify key indicators and trends, resulting in valuable insights into the coastal body’s trophic state. The case study of the Ambracian Gulf is characterized as hypertrophic, since the August algal bloom reaches up to 5 mg/m³, while the replication case study of Aitoliko shows algal blooms reaching up to 2.5 mg/m³. Finally, the proposed methodology successfully identifies the positive chlorophyll-a climate tendencies of the two selected Greek water bodies. This study highlights the value of integrating statistical methods with remote sensing data for accurate, long-term monitoring of water quality in aquatic ecosystems. Full article

24 pages, 953 KiB  
Article
Sequential Clustering Phases for Environmental Noise Level Monitoring on a Mobile Crowd Sourcing/Sensing Platform
by Fawaz Alhazemi
Sensors 2025, 25(5), 1601; https://doi.org/10.3390/s25051601 - 5 Mar 2025
Cited by 1 | Viewed by 689
Abstract
Using mobile crowd sourcing/sensing (MCS) noise monitoring can lead to false sound level reporting. The methods used for recruiting mobile phones in an area of interest vary from selecting full populations to randomly selecting a single phone. Other methods apply a clustering algorithm based on spatial or noise parameters to recruit mobile phones to MCS platforms. However, statistical t-tests have revealed dissimilarities between these selection methods. In this paper, we attribute these dissimilarities to (1) acoustic characteristics and (2) outlier mobile phones affecting the noise level. We propose two sequential clustering phases for noise level monitoring in MCS platforms. The approach starts by applying spatial clustering to form focused clusters and remove spatial outliers. Then, noise level clustering is applied to eliminate noise level outliers. This creates subsets of mobile phones that are used to calculate the noise level. We conducted a real-world experiment with 25 mobile phones and performed a statistical t-test evaluation of the selection methodologies; the resulting statistics again indicated dissimilarities. Then, we compared our proposed method with the noise level clustering method in terms of properly detecting and eliminating outliers. Our method offers 4% to 12% higher performance than the noise clustering method. Full article
(This article belongs to the Special Issue Mobile Sensing for Smart Cities)
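The two sequential phases, spatial clustering first and noise-level outlier elimination second, can be sketched as follows; DBSCAN, the MAD-based level filter, and the toy positions and sound levels are assumptions, not the paper's exact algorithms:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(10)
# Toy MCS snapshot: 25 phones, most inside the area of interest, four far away.
pos = np.vstack([rng.normal(0.0, 30.0, (21, 2)),     # metres
                 rng.uniform(200.0, 400.0, (4, 2))])
level = np.concatenate([rng.normal(62.0, 2.0, 21),   # dB(A) readings
                        rng.normal(62.0, 2.0, 4)])
level[3] = 95.0                                      # one faulty microphone

# Phase 1: spatial clustering; keep the dominant focused cluster.
labels = DBSCAN(eps=40.0, min_samples=3).fit_predict(pos)
ids, counts = np.unique(labels[labels >= 0], return_counts=True)
in_cluster = labels == ids[np.argmax(counts)]

# Phase 2: noise-level clustering; drop level outliers inside that cluster.
sel = level[in_cluster]
med = np.median(sel)
mad = 1.4826 * np.median(np.abs(sel - med))
final = sel[np.abs(sel - med) <= 3 * mad]
leq = 10 * np.log10(np.mean(10 ** (final / 10)))     # energetic mean level
```

Averaging in the power domain (last line) rather than averaging raw decibel values is what keeps the reported level physically meaningful.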

19 pages, 2560 KiB  
Article
Evaluation of Rapeseed Leave Segmentation Accuracy Using Binocular Stereo Vision 3D Point Clouds
by Lili Zhang, Shuangyue Shi, Muhammad Zain, Binqian Sun, Dongwei Han and Chengming Sun
Agronomy 2025, 15(1), 245; https://doi.org/10.3390/agronomy15010245 - 20 Jan 2025
Cited by 2 | Viewed by 1201
Abstract
Point cloud segmentation is necessary for obtaining highly precise morphological traits in plant phenotyping. Although point cloud segmentation has advanced considerably, segmenting point clouds of complex plant leaves remains challenging. Rapeseed leaves are critical in cultivation and breeding, yet traditional two-dimensional imaging is susceptible to reduced segmentation accuracy due to occlusions between plants. The current study proposes the use of binocular stereo-vision technology to obtain three-dimensional (3D) point clouds of rapeseed leaves at the seedling and bolting stages. The point clouds were colorized based on elevation values in order to better process the 3D point cloud data and extract rapeseed phenotypic parameters. Denoising methods were selected based on the source and classification of point cloud noise. For ground point clouds, we combined plane fitting with pass-through filtering, while statistical filtering was used to remove outliers generated during scanning. We found that, during the seedling stage of rapeseed, a region-growing segmentation method was helpful in finding suitable parameter thresholds for leaf segmentation, and the Locally Convex Connected Patches (LCCP) clustering method was used for leaf segmentation at the bolting stage. Furthermore, the study results show that combining plane fitting with pass-through filtering effectively removes the ground point cloud noise, while statistical filtering successfully denoises outlier noise points generated during scanning. Finally, using the region-growing algorithm during the seedling stage with a normal angle threshold of 5° (5.0/180.0 × π rad) and a curvature threshold of 1.5 helps to avoid under-segmentation and over-segmentation issues, achieving complete segmentation of rapeseed seedling leaves, while the LCCP clustering method fully segments rapeseed leaves at the bolting stage.
The proposed method provides insights to improve the accuracy of subsequent point cloud phenotypic parameter extraction, such as rapeseed leaf area, and is beneficial for the 3D reconstruction of rapeseed. Full article
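The statistical filtering step described above can be illustrated with a minimal NumPy sketch of statistical outlier removal: each point's mean distance to its k nearest neighbours is compared against a global threshold. The neighbourhood size, ratio, and synthetic data are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=1.5):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds (global mean + std_ratio * global std) of that statistic."""
    # Brute-force pairwise distances (fine for small clouds;
    # use a KD-tree for real scan data).
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    # Mean distance to the k nearest neighbours, excluding the
    # zero self-distance that sorts into column 0.
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= threshold]

# A dense cluster plus one far-away scan artefact.
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.01, size=(200, 3))
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])
filtered = statistical_outlier_removal(cloud)
print(len(cloud), "->", len(filtered))  # 201 -> 200
```

The isolated point's neighbour distances dwarf the cluster's, so it alone exceeds the global fence and is discarded.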
(This article belongs to the Special Issue Unmanned Farms in Smart Agriculture)

25 pages, 17064 KiB  
Article
An Environment Recognition Algorithm for Staircase Climbing Robots
by Yanjie Liu, Yanlong Wei, Chao Wang and Heng Wu
Remote Sens. 2024, 16(24), 4718; https://doi.org/10.3390/rs16244718 - 17 Dec 2024
Viewed by 1388
Abstract
For staircase-climbing robots with deformable wheels, the accuracy of staircase step geometry perception and scene mapping is critical in determining whether the robot can successfully ascend the stairs and continue its task. Currently, while there are LiDAR-based algorithms that focus either on step geometry detection or on scene mapping, few comprehensive algorithms address both for staircases. Moreover, significant errors in step geometry estimation and low mapping accuracy can prevent deformed wheel-based mobile robots from climbing stairs, reducing the efficiency and success rate of task execution. To solve these problems, we propose an effective LiDAR-inertial point cloud detection method for staircases. First, we preprocess the staircase point cloud, using the Statistical Outlier Removal algorithm to remove outliers from the staircase scene and combining the LiDAR's vertical angular resolution with the spatial geometry of the scene to achieve ground segmentation. Then, we post-process the point cloud map obtained from LiDAR SLAM: the staircase point cloud is extracted, projected, and fitted using the Ceres optimizer, and dimensional information such as staircase depth and height is solved in combination with mean filtering. Finally, we validate the effectiveness of the proposed method through multiple sets of SLAM and size detection experiments in different real staircase scenarios. Full article
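The idea of recovering step height from a fitted staircase cloud can be sketched in simplified form. This NumPy sketch substitutes gap-based clustering of tread heights for the paper's Ceres-based plane fitting; the gap threshold and the synthetic three-tread staircase are assumptions for illustration.

```python
import numpy as np

def estimate_step_height(points, gap=0.05):
    """Cluster sorted z-coordinates wherever consecutive values jump
    by more than `gap` (one cluster per tread), then average the
    differences between consecutive tread heights to get the riser."""
    z = np.sort(points[:, 2])
    breaks = np.where(np.diff(z) > gap)[0]
    clusters = np.split(z, breaks + 1)
    tread_heights = np.array([c.mean() for c in clusters])
    return float(np.mean(np.diff(tread_heights)))

# Synthetic staircase: three flat treads at z = 0.0, 0.15, 0.30 m,
# each with a little measurement noise.
rng = np.random.default_rng(1)
treads = []
for i in range(3):
    xy = rng.uniform(0.0, 0.3, size=(500, 2))
    z = np.full((500, 1), 0.15 * i) + rng.normal(0.0, 0.002, (500, 1))
    treads.append(np.hstack([xy, z]))
cloud = np.vstack(treads)
print(round(estimate_step_height(cloud), 2))  # ≈ 0.15
```

Averaging each tread cluster plays the role of the mean filtering mentioned in the abstract: per-point noise cancels out before the riser height is computed.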
(This article belongs to the Special Issue Advanced AI Technology in Remote Sensing)

35 pages, 14662 KiB  
Article
A Statistical Approach for Characterizing the Behaviour of Roughness Parameters Measured by a Multi-Physics Instrument on Ground Surface Topographies: Four Novel Indicators
by Clément Moreau, Julie Lemesle, David Páez Margarit, François Blateyron and Maxence Bigerelle
Metrology 2024, 4(4), 640-672; https://doi.org/10.3390/metrology4040039 - 18 Nov 2024
Cited by 1 | Viewed by 2154
Abstract
With a view to improving measurements, this paper presents a statistical approach for characterizing the behaviour of roughness parameters based on measurements performed on ground surface topographies (grit #080/#120). An S neox™ (Sensofar®, Terrassa, Spain), equipped with three optical instrument modes (Focus Variation (FV), Coherence Scanning Interferometry (CSI), and Confocal Microscopy (CM)), is used according to a specific measurement plan, called Morphomeca Monitoring, covering topography representativeness and several time-based measurements. Previously applied to the Sa parameter alone, the statistical approach based on the Quality Index (QI) has now been extended to multiple parameters. First, the study focuses on detecting and explaining parameter disturbances in the raw data by identifying and quantifying outliers among the parameter values, which constitutes a first new indicator. This allows parallels to be drawn between these outliers and the surface topography, suggesting avenues for further investigation. Second, the statistical approach is applied to highlight parameters disturbed by the instrument mode used and the grit level concerned, using two further indicators computed from QI: homogeneity and the number of modes. The method shows that the data containing the parameter values must be cleaned of outliers, and that a set of roughness parameters can be determined from the assessment of the indicators. The final aim is to provide the set of parameters that best describes the measurement conditions, based on monitoring data, statistical indexes, and surface topographies. The parameters Sal, Sz, and Sci prove to be the most reliable roughness parameters, unlike Sdq and S5p, which appear to be the most unstable. More globally, the volume roughness parameters appear to be the most stable, in contrast to the form parameters.
This point of view thus offers a complementary framework for improving measurement processes. In addition, the method aims to provide a more global and generalizable alternative to traditional uncertainty calculations, based on a thorough analysis of multiple parameters and statistical indexes. Full article
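The outlier-cleaning step applied to repeated parameter values can be sketched with a classic Tukey-fence rule. This is a generic stand-in for illustration, not the paper's QI-based criterion, and the sample Sa readings are invented.

```python
import numpy as np

def clean_outliers(values, k=1.5):
    """Flag values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]
    and return the cleaned array plus the removed outliers."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    mask = (values >= q1 - k * iqr) & (values <= q3 + k * iqr)
    return values[mask], values[~mask]

# 30 repeated Sa readings (µm) plus two disturbed measurements.
sa = np.array([0.52, 0.51, 0.53] * 10 + [0.91, 0.12])
kept, removed = clean_outliers(sa)
print(len(removed))  # 2
```

Because the interquartile range of the stable readings is tiny, the two disturbed values fall far outside the fences and are flagged, mirroring the cleaning the abstract describes before indicators are assessed.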
(This article belongs to the Special Issue Advances in Optical 3D Metrology)