Search Results (111)

Search Parameters:
Keywords = Euclidean distance field

25 pages, 2287 KiB  
Article
Quantitative Measurement of Hakka Phonetic Distances
by I-Ping Wan
Languages 2025, 10(8), 185; https://doi.org/10.3390/languages10080185 - 29 Jul 2025
Viewed by 186
Abstract
This study proposes a novel approach to measuring phonetic distances among six Hailu Hakka vowels ([i, e, ɨ, a, u, o]) by applying Euclidean distance-based calculations from both articulatory and acoustic perspectives. By analyzing articulatory feature values and acoustic formant structures, vowel distances are systematically represented through linear vector arrangements. These measurements address ongoing debates regarding the central positioning of [ɨ], specifically whether it aligns more closely with front or back vowels and whether [a] or [ɑ] more accurately represents vowel articulation. This study also reassesses the validity of prior acoustic findings on Hailu Hakka vowels and evaluates the correspondence between articulatory normalization and acoustic formant-based models. Through the integration of articulatory and acoustic data, this research advances a replicable and theoretically grounded method for quantitative vowel analysis. The results not only refine phonetic classification within a Euclidean framework but also help resolve transcription inconsistencies in phonetic distance matrices. This study contributes to the growing field of quantitative phonetics by offering a systematic, multidimensional model applicable to both theoretical and experimental investigations of Taiwan Hailu Hakka. Full article
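
As a rough illustration of the distance calculation described in this abstract, the sketch below computes pairwise Euclidean distances between vowels in normalized F1/F2 formant space; the formant values are invented placeholders, not the paper's Hailu Hakka measurements.

```python
# Pairwise Euclidean distances between vowels in (F1, F2) space.
# The formant values are illustrative placeholders, not the paper's data.
import numpy as np
from scipy.spatial.distance import pdist, squareform

vowels = ["i", "e", "ɨ", "a", "u", "o"]
formants = np.array([
    [300, 2300],   # i
    [450, 2000],   # e
    [350, 1500],   # ɨ
    [750, 1300],   # a
    [320,  800],   # u
    [480,  900],   # o
], dtype=float)

# Normalize each formant dimension so F2's larger range does not dominate.
z = (formants - formants.mean(axis=0)) / formants.std(axis=0)

# Symmetric vowel-distance matrix from pairwise Euclidean distances.
dist = squareform(pdist(z, metric="euclidean"))
for i, v in enumerate(vowels):
    print(v, np.round(dist[i], 2))
```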

19 pages, 5198 KiB  
Article
Research on a Fault Diagnosis Method for Rolling Bearings Based on the Fusion of PSR-CRP and DenseNet
by Beining Cui, Zhaobin Tan, Yuhang Gao, Xinyu Wang and Lv Xiao
Processes 2025, 13(8), 2372; https://doi.org/10.3390/pr13082372 - 25 Jul 2025
Viewed by 382
Abstract
To address the challenges of unstable vibration signals, indistinct fault features, and difficulties in feature extraction during rolling bearing operation, this paper presents a novel fault diagnosis method based on the fusion of PSR-CRP and DenseNet. The Phase Space Reconstruction (PSR) method transforms one-dimensional bearing vibration data into a three-dimensional space. Euclidean distances between phase points are calculated and mapped into a Color Recurrence Plot (CRP) to represent the bearings’ operational state. This approach effectively reduces feature extraction ambiguity compared to RP, GAF, and MTF methods. Fault features are extracted and classified using DenseNet’s densely connected topology. Compared with CNN and ViT models, DenseNet improves diagnostic accuracy by reusing limited features across multiple dimensions. The training set accuracies were 99.82% and 99.90%, while the test set accuracies were 97.03% and 95.08% for the CWRU and JNU datasets under five-fold cross-validation; F1 scores were 0.9739 and 0.9537, respectively. This method achieves highly accurate diagnosis under conditions of non-smooth signals and inconspicuous fault characteristics and is applicable to fault diagnosis scenarios for precision components in aerospace, military systems, robotics, and related fields. Full article
(This article belongs to the Section Process Control and Monitoring)
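
A minimal sketch of the PSR-to-CRP step summarized above: a one-dimensional signal is delay-embedded into three-dimensional phase space, pairwise Euclidean distances between phase points are computed, and the distance matrix is rendered as a color image. The signal, delay, and embedding dimension are illustrative choices, not the paper's settings.

```python
# Delay-embed a 1-D vibration signal, compute pairwise Euclidean distances
# between phase points, and render the distance matrix as a color image (CRP).
import numpy as np
import matplotlib.pyplot as plt

def phase_space_reconstruction(x, dim=3, tau=5):
    """Delay embedding: each row is one phase point (x[t], x[t+tau], ...)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 60 * t) + 0.3 * rng.standard_normal(t.size)  # toy signal

points = phase_space_reconstruction(signal, dim=3, tau=5)[:500]
diff = points[:, None, :] - points[None, :, :]
crp = np.sqrt((diff ** 2).sum(axis=-1))        # pairwise Euclidean distances

plt.imshow(crp, cmap="jet")                    # color recurrence plot
plt.colorbar(label="Euclidean distance")
plt.savefig("crp.png", dpi=150)
```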

17 pages, 699 KiB  
Article
Secure K-Means Clustering Scheme for Confidential Data Based on Paillier Cryptosystem
by Zhengqi Zhang, Zixin Xiong and Jun Ye
Appl. Sci. 2025, 15(12), 6918; https://doi.org/10.3390/app15126918 - 19 Jun 2025
Viewed by 213
Abstract
In this paper, we propose a secure homomorphic K-means clustering protocol based on the Paillier cryptosystem to address the urgent need for privacy-preserving clustering techniques in sensitive domains such as healthcare and finance. The protocol uses the additive homomorphism property of the Paillier cryptosystem to perform K-means clustering on the encrypted data, which ensures the confidentiality of the data during the whole calculation process. The protocol consists of three main components: the secure distance computation (SCD) protocol, the secure cluster assignment (SCA) protocol, and the secure cluster center update (SUCC) protocol. The SCD protocol securely computes the squared Euclidean distance between the encrypted data point and the encrypted cluster center. The SCA protocol securely assigns data points to clusters based on these cryptographic distances. Finally, the SUCC protocol securely updates the cluster centers without leaking the actual data points or the number of intermediate sums. Through security analysis and experimental verification, the effectiveness and practicability of the protocol are demonstrated. This work provides a practical solution for secure clustering based on homomorphic encryption and contributes to the research in the field of privacy-preserving data mining. Although this protocol solves the key problems of secure distance computation, cluster assignment, and centroid update, there are still areas for further research. These include optimizing the computational efficiency of the protocol, exploring other homomorphic encryption schemes that may provide better performance, and extending the protocol to handle more complex clustering algorithms. Full article
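
The sketch below, built on the python-paillier (phe) package, illustrates how additive homomorphism can yield an encrypted squared Euclidean distance. It is a simplified, one-sided variant in which the evaluator knows the plaintext point and receives Enc(c_i) and Enc(c_i^2) for the center; it is not the paper's SCD/SCA/SUCC protocol suite.

```python
# Encrypted squared Euclidean distance via Paillier additive homomorphism.
# Simplified one-sided setting: evaluator holds plaintext x, ciphertexts of c.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

x = [3.0, 1.0, 4.0]                        # evaluator's plaintext data point
c = [2.0, 2.0, 2.0]                        # cluster center, held only encrypted
enc_c = [public_key.encrypt(v) for v in c]
enc_c_sq = [public_key.encrypt(v * v) for v in c]

# Enc(||x - c||^2) = sum_i [ Enc(c_i^2) + Enc(c_i) * (-2 x_i) + x_i^2 ],
# using only ciphertext addition and ciphertext-scalar multiplication.
enc_dist = public_key.encrypt(0)
for xi, eci, eci_sq in zip(x, enc_c, enc_c_sq):
    enc_dist = enc_dist + eci_sq + eci * (-2 * xi) + xi * xi

print(private_key.decrypt(enc_dist))       # (3-2)^2 + (1-2)^2 + (4-2)^2 = 6
```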

30 pages, 4382 KiB  
Article
Impacts of Landscape Mosaic Patterns on Habitat Quality Using OLS and GWR Models in Taihang Mountains of Hebei Province, China
by Junming Feng, Peizheng Hao, Jing Hao, Yinran Huang, Miao Yu, Kang Ding and Yang Zhou
Sustainability 2025, 17(12), 5503; https://doi.org/10.3390/su17125503 - 14 Jun 2025
Viewed by 752
Abstract
Based on the fundamental principles of spatial heterogeneity and landscape ecology, landscape mosaic (LM) offers a more effective method for capturing variations in landscape spatial components, patterns, and ecological functions compared to land use and land cover (LULC). This advantage is particularly pronounced when employing the InVEST model to evaluate habitat quality (HQ), as field surveys often yield highly variable results that challenge the accuracy and applicability of LULC-based assessments. This paper focuses on the Taihang Mountain area in Hebei Province as the study region, utilizing Principal Component Analysis (PCA), Self-Organizing Map (SOM), and Euclidean Distance (ED) models to achieve LM classification of the area. Based on this, the InVEST-HQ assessment is conducted, employing both OLS and GWR models to analyze the correlation between HQ and LM landscape patterns. The results indicate that (1) seven major LULC types were reclassified into nine pillar LM types and eleven transitional LM types, with a significant number of ecotone types emerging between different LULC types, among which cultivated land plays the most prominent role; (2) from 2000 to 2020, the overall HQ in the study area exhibited a continuous deterioration trend, particularly marked by a notable increase in areas classified as Level I HQ; (3) factors such as the complexity of patch edges, the continuity between patches, and the diversity of patch types all significantly impact HQ. This study introduces an innovative methodological framework for HQ assessment using LM classifications within the InVEST model, offering a robust foundation for comprehensive biodiversity monitoring and informed ecological management in the study area. Full article
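
A loose sketch of the PCA-plus-Euclidean-distance portion of an LM classification: per-cell land-cover composition vectors are reduced with PCA and each cell is assigned to its nearest prototype by Euclidean distance. The data and prototypes are random stand-ins; in the paper the prototypes come from a Self-Organizing Map.

```python
# Assign landscape cells to mosaic classes by Euclidean distance in PCA space.
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
cells = rng.random((1000, 7))                 # per-cell shares of 7 LULC types (toy data)
cells /= cells.sum(axis=1, keepdims=True)

scores = PCA(n_components=3).fit_transform(cells)   # compact composition space

# Stand-ins for SOM nodes: a handful of cells picked as class prototypes.
prototypes = scores[rng.choice(len(scores), size=9, replace=False)]
lm_class = cdist(scores, prototypes, metric="euclidean").argmin(axis=1)
print(np.bincount(lm_class))                  # cells per mosaic class
```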

14 pages, 2035 KiB  
Article
Integration of YOLOv9 Segmentation and Monocular Depth Estimation in Thermal Imaging for Prediction of Estrus in Sows Based on Pixel Intensity Analysis
by Iyad Almadani, Aaron L. Robinson and Mohammed Abuhussein
Digital 2025, 5(2), 22; https://doi.org/10.3390/digital5020022 - 13 Jun 2025
Viewed by 433
Abstract
Many researchers focus on improving reproductive health in sows and ensuring successful breeding by accurately identifying the optimal time of ovulation through estrus detection. One promising non-contact technique involves using computer vision to analyze temperature variations in thermal images of the sow’s vulva. However, variations in camera distance during dataset collection can significantly affect the accuracy of this method, as different distances alter the resolution of the region of interest, causing pixel intensity values to represent varying areas and temperatures. This inconsistency hinders the detection of the subtle temperature differences required to distinguish between estrus and non-estrus states. Moreover, failure to maintain a consistent camera distance, along with external factors such as atmospheric conditions and improper calibration, can distort temperature readings, further compromising data accuracy and reliability. Furthermore, without addressing distance variations, the model’s generalizability diminishes, increasing the likelihood of false positives and negatives and ultimately reducing the effectiveness of estrus detection. In our previously proposed methodology for estrus detection in sows, we utilized YOLOv8 for segmentation and keypoint detection, while monocular depth estimation was used for camera calibration. This calibration helps establish a functional relationship between the measurements in the image (such as distances between labia, the clitoris-to-perineum distance, and vulva perimeter) and the depth distance to the camera, enabling accurate adjustments and calibration for our analysis. Estrus classification is performed by comparing new data points with reference datasets using a three-nearest-neighbor voting system. In this paper, we aim to enhance our previous method by incorporating the mean pixel intensity of the region of interest as an additional factor. We propose a detailed four-step methodology coupled with two stages of evaluation. First, we carefully annotate masks around the vulva to calculate its perimeter precisely. Leveraging the advantages of deep learning, we train a model on these annotated images, enabling segmentation using the cutting-edge YOLOv9 algorithm. This segmentation enables the detection of the sow’s vulva, allowing for analysis of its shape and facilitating the calculation of the mean pixel intensity in the region. Crucially, we use monocular depth estimation from the previous method, establishing a functional link between pixel intensity and the distance to the camera, ensuring accuracy in our analysis. We then introduce a classification approach that differentiates between estrus and non-estrus regions based on the mean pixel intensity of the vulva. This classification method involves calculating Euclidean distances between new data points and reference points from two datasets: one for “estrus” and the other for “non-estrus”. The classification process identifies the five closest neighbors from the datasets and applies a majority voting system to determine the label. A new point is classified as “estrus” if the majority of its nearest neighbors are labeled as estrus; otherwise, it is classified as “non-estrus”. This automated approach offers a robust solution for accurate estrus detection. To validate our method, we propose two evaluation stages: first, a quantitative analysis comparing the performance of our new YOLOv9 segmentation model with the older U-Net and YOLOv8 models. 
Second, we assess the classification process by constructing a confusion matrix and comparing the results of our previous method, which used the three nearest points, with those of our new model, which uses the five nearest points. This comparison allows us to evaluate the improvements in accuracy and performance achieved with the updated model. The automation of this vital process holds the potential to revolutionize reproductive health management in agriculture, boosting breeding success rates. Through thorough evaluation and experimentation, our research highlights the transformative power of computer vision, pushing forward more advanced practices in the field. Full article
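
A compact sketch of the five-nearest-neighbor voting rule described above, applied to a single feature (mean pixel intensity of the vulva region); the reference values are made-up placeholders rather than measured data.

```python
# Majority vote of the k nearest reference points (Euclidean distance is |a - b| in 1-D).
import numpy as np

def classify_estrus(value, estrus_ref, non_estrus_ref, k=5):
    refs = np.concatenate([estrus_ref, non_estrus_ref])
    labels = np.array([1] * len(estrus_ref) + [0] * len(non_estrus_ref))
    nearest = np.argsort(np.abs(refs - value))[:k]
    return "estrus" if labels[nearest].sum() > k // 2 else "non-estrus"

estrus_ref = np.array([182.0, 176.5, 190.2, 185.1, 179.8])       # placeholder intensities
non_estrus_ref = np.array([151.3, 148.9, 160.4, 155.0, 158.7])
print(classify_estrus(181.0, estrus_ref, non_estrus_ref))         # -> estrus
```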

15 pages, 3818 KiB  
Article
Measurement of Maize Leaf Phenotypic Parameters Based on 3D Point Cloud
by Yuchen Su, Ran Li, Miao Wang, Chen Li, Mingxiong Ou, Sumei Liu, Wenhui Hou, Yuwei Wang and Lu Liu
Sensors 2025, 25(9), 2854; https://doi.org/10.3390/s25092854 - 30 Apr 2025
Cited by 1 | Viewed by 525
Abstract
Plant height (PH), leaf width (LW), and leaf angle (LA) are critical phenotypic parameters in maize that reliably indicate plant growth status, lodging resistance, and yield potential. While various lidar-based methods have been developed for acquiring these parameters, existing approaches face limitations, including low automation, prolonged measurement duration, and weak environmental interference resistance. This study proposes a novel estimation method for maize PH, LW, and LA based on point cloud projection. The methodology comprises four key stages. First, 3D point cloud data of maize plants are acquired during middle–late growth stages using lidar sensors. Second, a Gaussian mixture model (GMM) is employed for point cloud registration to enhance plant morphological features, resulting in spliced maize point clouds. Third, filtering techniques remove background noise and weeds, followed by a combined point cloud projection and Euclidean clustering approach for stem–leaf segmentation. Finally, PH is determined by calculating vertical distance from plant apex to base, LW is measured through linear fitting of leaf midveins with perpendicular line intersections on projected contours, and LA is derived from plant skeleton diagrams constructed via linear fitting to identify stem apex, stem–leaf junctions, and midrib points. Field validation demonstrated that the method achieves 99%, 86%, and 97% accuracy for PH, LW, and LA estimation, respectively, enabling rapid automated measurement during critical growth phases and providing an efficient solution for maize cultivation automation. Full article
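
Two of the geometric steps above can be sketched in a few lines: plant height as the vertical extent of the point cloud, and Euclidean clustering of projected points (approximated here with DBSCAN, which groups points within a distance threshold, as a stand-in for the paper's stem–leaf segmentation). The point cloud is synthetic.

```python
# Plant height from the z-extent of a point cloud, plus simple Euclidean clustering.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
cloud = rng.random((5000, 3)) * [0.4, 0.4, 1.8]    # synthetic x, y, z in meters

plant_height = cloud[:, 2].max() - cloud[:, 2].min()
print(f"PH = {plant_height:.2f} m")

# Euclidean clustering of the x-y projection: points closer than 2 cm join a cluster.
labels = DBSCAN(eps=0.02, min_samples=10).fit_predict(cloud[:, :2])
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
```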

14 pages, 734 KiB  
Article
MWMOTE-FRIS-INFFC: An Improved Majority Weighted Minority Oversampling Technique for Solving Noisy and Imbalanced Classification Datasets
by Dong Zhang, Xiang Huang, Gen Li, Shengjie Kong and Liang Dong
Appl. Sci. 2025, 15(9), 4670; https://doi.org/10.3390/app15094670 - 23 Apr 2025
Viewed by 504
Abstract
Fault diagnosis and product quality inspection data in industrial settings commonly contain high-noise, imbalanced samples, and such samples are very difficult to analyze. Oversampling has proved to be a simple remedy for imbalanced data, but it offers no significant resistance to noise. To solve the binary classification problem of high-noise imbalanced data, this study introduces an enhanced majority weighted minority oversampling technique, MWMOTE-FRIS-INFFC, designed specifically for noisy, imbalanced classification datasets. The method uses Euclidean distance to assign sample weights and synthesizes new samples around the more heavily weighted minority-class instances, thus addressing data scarcity in smaller class clusters. The fuzzy rough instance selection (FRIS) method is then used to eliminate subsets of synthetic minority samples with low clustering membership, which effectively reduces the overfitting tendency caused by synthetic oversampling. In addition, the integration of an iterative noise filter based on classifier fusion (INFFC) helps mitigate noise in both the raw data and the synthetic data. On this basis, a series of experiments is designed in which the proposed MWMOTE-FRIS-INFFC algorithm improves the performance of six oversampling algorithms on eight datasets. Full article
(This article belongs to the Special Issue Fuzzy Control Systems: Latest Advances and Prospects)
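
A compressed, SMOTE-style sketch of the weighting-and-synthesis idea: minority samples closer to the majority class (by Euclidean distance) receive larger weights, and synthetic samples are interpolated between weighted minority points. This is only an illustration, not the full MWMOTE-FRIS-INFFC pipeline.

```python
# Distance-weighted synthetic oversampling of a minority class (toy data).
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(3)
X_maj = rng.normal(0.0, 1.0, size=(200, 2))
X_min = rng.normal(2.5, 0.7, size=(20, 2))

# Weight each minority sample by the inverse distance to its nearest majority sample.
d_to_maj = cdist(X_min, X_maj).min(axis=1)
weights = 1.0 / (d_to_maj + 1e-9)
weights /= weights.sum()

def synthesize(n_new=100):
    """Interpolate between a weight-sampled minority point and a random minority neighbor."""
    idx = rng.choice(len(X_min), size=n_new, p=weights)
    nbr = rng.integers(len(X_min), size=n_new)
    gap = rng.random((n_new, 1))
    return X_min[idx] + gap * (X_min[nbr] - X_min[idx])

print(synthesize().shape)          # (100, 2) new minority samples
```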

16 pages, 33317 KiB  
Article
Exploiting a Variable-Sized Map and Vicinity-Based Memory for Dynamic Real-Time Planning of Autonomous Robots
by Aristeidis Geladaris, Lampis Papakostas, Athanasios Mastrogeorgiou and Panagiotis Polygerinos
Robotics 2025, 14(4), 44; https://doi.org/10.3390/robotics14040044 - 31 Mar 2025
Cited by 1 | Viewed by 1192
Abstract
This paper presents a complete system for autonomous navigation in GPS-denied environments using a minimal sensor suite that operates onboard a robotic vehicle. Our system utilizes a single camera and, given a target destination without prior knowledge of the environment, replans in real time to generate a collision-free trajectory that avoids static and dynamic obstacles. To achieve this, we introduce, for the first time, a local Euclidean Signed Distance Field (ESDF) map with variable size and resolution, which scales as a function of the vehicle’s velocity. The map is updated at a high rate, requiring minimal computational power. Additionally, a short-term vicinity-based memory is maintained for previously observed areas to facilitate smooth trajectory generation, addressing the limited field-of-view provided by the RGB-D camera. System validation is carried out by deploying our algorithm on a differential drive vehicle in both simulation and real-world experiments involving static and dynamic obstacles. We benchmark our robotic system against state-of-the-art autonomous navigation frameworks, successfully navigating to designated target locations while avoiding obstacles in both static and dynamic scenarios, all without introducing additional computational overhead. Our approach consistently achieves the target goals even in complex settings where current state-of-the-art methods may fall short. Full article
(This article belongs to the Section Aerospace Robotics and Autonomous Systems)
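
A minimal sketch of how a 2-D Euclidean Signed Distance Field can be derived from an occupancy grid (positive in free space, negative inside obstacles); the grid, resolution, and the toy velocity-scaled window in the last lines are illustrative assumptions, not the paper's implementation.

```python
# Build a signed Euclidean distance field from an occupancy grid.
import numpy as np
from scipy.ndimage import distance_transform_edt

occupancy = np.zeros((100, 100), dtype=bool)
occupancy[40:60, 45:55] = True                 # a rectangular obstacle

resolution = 0.05                              # meters per cell (illustrative)
outside = distance_transform_edt(~occupancy) * resolution   # distance to nearest obstacle
inside = distance_transform_edt(occupancy) * resolution     # depth inside the obstacle
esdf = outside - inside                        # signed: >0 free, <0 occupied

# Toy version of velocity-scaled map sizing: widen the local window with speed.
velocity = 1.2                                 # m/s
half_extent_cells = int((1.0 + 2.0 * velocity) / resolution)
print(esdf[50, 70], half_extent_cells)
```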

20 pages, 4226 KiB  
Article
Bayesian Ensemble Model with Detection of Potential Misclassification of Wax Bloom in Blueberry Images
by Claudia Arellano, Karen Sagredo, Carlos Muñoz and Joseph Govan
Agronomy 2025, 15(4), 809; https://doi.org/10.3390/agronomy15040809 - 25 Mar 2025
Cited by 1 | Viewed by 557
Abstract
Identifying blueberry characteristics such as the wax bloom is an important task that not only helps in phenotyping (for novel variety development) but also in classifying berries better suited for commercialization. Deep learning techniques for image analysis have long demonstrated their capability for solving image classification problems. However, they usually rely on large architectures that could be difficult to implement in the field due to high computational needs. This paper presents a small (only 1502 parameters) Bayesian–CNN ensemble architecture that can be implemented in any small electronic device and is able to classify wax bloom content in images. The Bayesian model was implemented using Keras image libraries and consists of only two convolutional layers (eight and four filters, respectively) and a dense layer. It includes a statistical module with two metrics that combines the results of the Bayesian ensemble to detect potential misclassifications. The first metric is based on the Euclidean distance (L2) between Gaussian mixture models while the second metric is based on a quantile analysis of the binary class predictions. Both metrics attempt to establish whether the model was able to find a good prediction or not. Three experiments were performed: first, the Bayesian–CNN ensemble model was compared with state-of-the-art small architectures. In experiment 2, the metrics for detecting potential misclassifications were evaluated and compared with similar techniques derived from the literature. Experiment 3 reports results while using cross validation and compares performance considering the trade-off between accuracy and the number of samples considered as potentially misclassified (not classified). Both metrics show a competitive performance compared to the state of the art and are able to improve the accuracy of a Bayesian–CNN ensemble model from 96.98% to 98.72±0.54% and 98.38±0.34% for the L2 and r2 metrics, respectively. Full article
(This article belongs to the Section Precision and Digital Agriculture)
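
A loose, hedged reading of the second flagging metric (quantile analysis of the ensemble's binary predictions): if the lower and upper quantiles of the per-member votes disagree, the ensemble is split and the sample is flagged as potentially misclassified. The paper's exact formulation may differ.

```python
# Flag potential misclassifications from the spread of an ensemble's binary votes.
import numpy as np

def flag_by_quantiles(member_probs, q_low=0.25, q_high=0.75, threshold=0.5):
    votes = (np.asarray(member_probs) >= threshold).astype(int)
    lo, hi = np.quantile(votes, [q_low, q_high])
    flagged = lo != hi                     # ensemble split between the two classes
    return flagged, votes.mean()

print(flag_by_quantiles([0.92, 0.88, 0.95, 0.90, 0.91]))   # (False, 1.0) confident
print(flag_by_quantiles([0.70, 0.45, 0.62, 0.38, 0.55]))   # (True, 0.6)  flagged
```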

26 pages, 2905 KiB  
Article
Efficient Path Planning for Collision Avoidance of Construction Vibration Robots Based on Euclidean Signed Distance Field and Vector Safety Flight Corridors
by Lei Li, Lingjie Kong, Chong Liu, Hong Wang, Mingyang Wang, Dongxu Pan, Jiasheng Tan, Wanji Yan and Yiang Sun
Sensors 2025, 25(6), 1765; https://doi.org/10.3390/s25061765 - 12 Mar 2025
Viewed by 1095
Abstract
Traditional manual concrete vibration work faces numerous limitations, necessitating efficient automated methods to support this task. This study proposes a path safety optimization method based on safe flight corridors and Euclidean signed distance fields, which is suitable for flexible autonomous movement of vibrating robots between various vibration points. By utilizing a vector method to generate safe flight corridors and optimizing the path with Euclidean signed distance fields, the proposed method reduces the runtime by 80% compared to the original safe flight corridor method and enhances safety by 50%. On embedded systems, the runtime is less than 10 ms. This study is the first to apply the combination of safe flight corridors and Euclidean distance fields for autonomous navigation in concrete vibration tasks, optimizing the original path into a smooth trajectory that stays clear of obstacles. Actual tests of the vibrating robot showed that, compared to traditional methods, the algorithm allows the robot to safely avoid fixed obstacles and moving workers, increasing the execution efficiency of vibration tasks by 60%. Additionally, on-site experiments conducted at three construction sites demonstrated the robustness of the proposed method. The findings of this study advance the automation of concrete vibration work and hold significant implications for the fields of robotics and civil engineering. Full article
(This article belongs to the Collection Navigation Systems and Sensors)
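
As a rough picture of how such a field is consumed during path optimization, the sketch below looks up each waypoint's clearance in an ESDF and accumulates a quadratic cost for clearances below a safety margin; the grid, margin, and waypoints are illustrative values only.

```python
# Score a candidate path against an ESDF: penalize waypoints that come too close to obstacles.
import numpy as np
from scipy.ndimage import distance_transform_edt

occupancy = np.zeros((200, 200), dtype=bool)
occupancy[80:120, 90:110] = True                        # an obstacle on the slab
res = 0.05                                              # 5 cm grid resolution
esdf = (distance_transform_edt(~occupancy) - distance_transform_edt(occupancy)) * res

path = np.array([[20, 20], [100, 113], [130, 115], [180, 180]])   # row, col waypoints
clearance = esdf[path[:, 0], path[:, 1]]

margin = 0.30                                           # keep 30 cm from obstacles
obstacle_cost = np.sum(np.maximum(0.0, margin - clearance) ** 2)
print("clearances (m):", np.round(clearance, 2), "cost:", round(float(obstacle_cost), 4))
```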

28 pages, 4157 KiB  
Article
Integrating Quantitative Analyses of Historical and Contemporary Apparel with Educational Applications
by Zlatina Kazlacheva, Daniela Orozova, Nadezhda Angelova, Elena Zurleva, Julieta Ilieva and Zlatin Zlatev
Information 2025, 16(2), 144; https://doi.org/10.3390/info16020144 - 15 Feb 2025
Viewed by 1009
Abstract
In this paper, a comparative analysis of historical and contemporary fashion designs was conducted using quantitative methods and indices. Elements such as silhouettes, color palettes, and structural characteristics were analyzed in order to identify models for reinterpretation of classic fashion costume. Clothing from four historical periods was studied: Empire, Romanticism, the Victorian era, and Art Nouveau. An image processing algorithm was proposed, through which data on the shapes and colors of historical and contemporary clothing were obtained from digital color images. The most informative of the shape and color indices of contemporary and historical clothing were selected using the RReliefF, FSRNCA, and SFCPP methods. The feature vectors were reduced using the latent variable and t-SNE methods. The obtained data were used to group the clothing according to historical periods. Using Euclidean distances, the relationship between clothing by contemporary designers and the elements of the historical costume used by them was determined. These results were used to create an educational and methodological framework for practical training of students in the field of fashion design. The results of this work can help contemporary designers in interpreting and integrating elements of historical fashion into their collections, adapting them to the needs and preferences of consumers. Full article
(This article belongs to the Special Issue Trends in Artificial Intelligence-Supported E-Learning)
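
A small sketch of the final matching step: each contemporary garment's reduced feature vector is compared to the centroids of the four historical periods by Euclidean distance. The vectors and centroids are random placeholders for the paper's shape and color indices.

```python
# Match contemporary designs to their nearest historical period by Euclidean distance.
import numpy as np
from scipy.spatial.distance import cdist

periods = ["Empire", "Romanticism", "Victorian", "Art Nouveau"]
rng = np.random.default_rng(6)
period_centroids = rng.random((4, 5))          # reduced shape/color index vectors (stand-ins)
contemporary = rng.random((12, 5))             # 12 contemporary designs

nearest = cdist(contemporary, period_centroids, metric="euclidean").argmin(axis=1)
for i, p in enumerate(nearest):
    print(f"design {i:02d} -> {periods[p]}")
```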

20 pages, 21510 KiB  
Article
Visual Localization Method for Fastener-Nut Disassembly and Assembly Robot Based on Improved Canny and HOG-SED
by Xiangang Cao, Mengzhen Zuo, Guoyin Chen, Xudong Wu, Peng Wang and Yizhe Liu
Appl. Sci. 2025, 15(3), 1645; https://doi.org/10.3390/app15031645 - 6 Feb 2025
Cited by 2 | Viewed by 1000
Abstract
Visual positioning accuracy is crucial for ensuring the successful execution of nut disassembly and assembly tasks by a fastener-nut disassembly and assembly robot. However, disturbances such as on-site lighting changes, abnormal surface conditions of nuts, and complex backgrounds formed by ballast in complex railway environments can lead to poor visual positioning accuracy of the fastener nuts, thereby affecting the success rate of the robot’s continuous disassembly and assembly operations. Additionally, the existing method of detecting fasteners first and then positioning nuts has poor applicability in the field. A direct positioning algorithm for spiral rail spikes that combines an improved Canny algorithm with shape feature similarity determination is proposed in response to these issues. Firstly, CLAHE enhances the image, reducing the impact of varying lighting conditions in outdoor work environments on image details. Then, to address the difficulties in extracting the edges of rail spikes caused by abnormal conditions such as water stains, rust, and oil stains on the nuts themselves, the Canny algorithm is improved through three stages, filtering optimization, gradient boosting, and adaptive thresholding, to reduce the impact of edge loss on subsequent rail spike positioning results. Finally, considering the issue of false fitting due to background interference, such as ballast in gradient Hough transformations, the differences in texture and shape features between the rail spike and interference areas are analyzed. The HOG is used to describe the shape features of the area to be screened, and the similarity between the screened area and the standard rail spike template features is compared based on the standard Euclidean distance to determine the rail spike area. Spiral rail spikes are discriminated based on shape features, and the center coordinates of the rail spike are obtained. Experiments were conducted using images collected from the field, and the results showed that the proposed algorithm, when faced with complex environments with multiple interferences, has a correct detection rate higher than 98% and a positioning error mean of 0.9 mm. It exhibits excellent interference resistance and meets the visual positioning accuracy requirements for robot nut disassembly and assembly operations in actual working environments. Full article
(This article belongs to the Section Applied Industrial Technologies)
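
A sketch of the HOG-plus-standardized-Euclidean-distance screening step, using skimage and SciPy: both the candidate region and a rail-spike template are described by HOG vectors, and the candidate is accepted when their standardized Euclidean distance is small. The images, variance vector, and threshold are placeholders, not values from the paper.

```python
# Compare a candidate region to a rail-spike template with HOG + standardized Euclidean distance.
import numpy as np
from skimage.feature import hog
from scipy.spatial.distance import seuclidean

rng = np.random.default_rng(7)
template = rng.random((64, 64))                               # stand-in template image
candidate = template + 0.05 * rng.standard_normal((64, 64))   # a screened region

hog_kwargs = dict(orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
f_template = hog(template, **hog_kwargs)
f_candidate = hog(candidate, **hog_kwargs)

# Per-dimension variances would normally be estimated from many template samples.
variances = np.full(f_template.shape, 0.01)
distance = seuclidean(f_candidate, f_template, variances)
print("rail spike" if distance < 25.0 else "interference", round(float(distance), 2))  # placeholder threshold
```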

19 pages, 9180 KiB  
Article
Accurate Real-Time Live Face Detection Using Snapshot Spectral Imaging Method
by Zhihai Wang, Shuai Wang, Weixing Yu, Bo Gao, Chenxi Li and Tianxin Wang
Sensors 2025, 25(3), 952; https://doi.org/10.3390/s25030952 - 5 Feb 2025
Cited by 3 | Viewed by 1494
Abstract
Traditional facial recognition is realized by facial recognition algorithms based on 2D or 3D digital images and has been well developed and has found wide applications in areas related to identification verification. In this work, we propose a novel live face detection (LFD) method by utilizing snapshot spectral imaging technology, which takes advantage of the distinctive reflected spectra from human faces. By employing a computational spectral reconstruction algorithm based on Tikhonov regularization, a rapid and precise spectral reconstruction with a fidelity of over 99% for the color checkers and various types of “face” samples has been achieved. The flat face areas were extracted exactly from the “face” images with Dlib face detection and Euclidean distance selection algorithms. A large quantity of spectra were rapidly reconstructed from the selected areas and compiled into an extensive database. The convolutional neural network model trained on this database demonstrates an excellent capability for predicting different types of “faces” with an accuracy exceeding 98%, and, according to a series of evaluations, the system’s detection time consistently remained under one second, much faster than other spectral imaging LFD methods. Moreover, a pixel-level liveness detection test system is developed and a LFD experiment shows good agreement with theoretical results, which demonstrates the potential of our method to be applied in other recognition fields. The superior performance and compatibility of our method provide an alternative solution for accurate, highly integrated video LFD applications. Full article
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)
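
The Tikhonov-regularized reconstruction mentioned above has a closed form, sketched below with a synthetic sensing matrix and spectrum: s = (A^T A + lam I)^(-1) A^T y. The channel count, wavelength grid, and regularization weight are assumptions, not the calibrated values of the actual imager.

```python
# Tikhonov-regularized spectral reconstruction from a handful of filtered measurements.
import numpy as np

rng = np.random.default_rng(8)
n_channels, n_bands = 16, 121                     # 16 filters, 450-750 nm at 2.5 nm steps
wavelengths = np.linspace(450, 750, n_bands)
A = rng.random((n_channels, n_bands))             # stand-in for calibrated filter responses

true_spectrum = np.exp(-((wavelengths - 600) ** 2) / (2 * 40 ** 2))   # smooth test spectrum
y = A @ true_spectrum + 0.001 * rng.standard_normal(n_channels)       # snapshot measurement

lam = 1e-2
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ y)     # (A^T A + lam I)^-1 A^T y

fidelity = np.dot(s_hat, true_spectrum) / (np.linalg.norm(s_hat) * np.linalg.norm(true_spectrum))
print(f"reconstruction fidelity (cosine similarity): {fidelity:.4f}")
```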

48 pages, 1898 KiB  
Essay
The Code Underneath
by Julio Rives
Axioms 2025, 14(2), 106; https://doi.org/10.3390/axioms14020106 - 30 Jan 2025
Viewed by 810
Abstract
An inverse-square probability mass function (PMF) is at the Newcomb–Benford law (NBL)’s root and ultimately at the origin of positional notation and conformality. $\Pr(Z)=2Z^{-2}$, where $Z\in\mathbb{Z}^{+}$. Under its tail, we find information as harmonic likelihood $L(s,t)=H_{t-1}-H_{s-1}$, where $H_n$ is the $n$th harmonic number. The global Q-NBL is $\Pr(b,q)=\frac{L(q,q+1)}{L(1,b)}=\left(qH_{b-1}\right)^{-1}$, where $b$ is the base and $q$ is a quantum ($1\le q<b$). Under its tail, we find information as logarithmic likelihood $\ell(i,j)=\ln\frac{j}{i}$. The fiducial R-NBL is $\Pr(r,d)=\frac{\ell(d,d+1)}{\ell(1,r)}=\log_r\left(1+\frac{1}{d}\right)$, where $r\le b$ is the radix of a local complex system. The global Bayesian rule multiplies the correlation between two numbers, $s$ and $t$, by a likelihood ratio that is the NBL probability of bucket $(s,t)$ relative to $b$’s support. To encode the odds of quantum $j$ against $i$ locally, we multiply the prior odds $\frac{\Pr(b,j)}{\Pr(b,i)}$ by a likelihood ratio, which is the NBL probability of bin $(i,j)$ relative to $r$’s support; the local Bayesian coding rule is $\tilde{o}(j{:}i\mid r)=\frac{i}{j}\log_r\frac{j}{i}$. The Bayesian rule to recode local data is $\tilde{o}(j{:}i\mid r')=\tilde{o}(j{:}i\mid r)\frac{\ln r}{\ln r'}$. Global and local Bayesian data are elements of the algebraic field of “gap ratios”, $\frac{AB}{CD}$. The cross-ratio, the central tool in conformal geometry, is a subclass of gap ratio. A one-dimensional coding source reflects the global Bayesian data of the harmonic external world, the annulus $\{x\in\mathbb{Q}\mid 1\le x<b\}$, into the local Bayesian data of its logarithmic coding space, the ball $\{x\in\mathbb{Q}\mid |x|<1-\frac{1}{b}\}$. The source’s conformal encoding function is $y=\log_r(2x-1)$, where $x$ is the observed Euclidean distance to an object’s position. The conformal decoding function is $x=\frac{1}{2}\left(1+r^{y}\right)$. Both functions, unique under basic requirements, enable information- and granularity-invariant recursion to model the multiscale reality. Full article
(This article belongs to the Special Issue Mathematical Modelling of Complex Systems)
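
Under the reconstructed reading of the formulas above, two of the probability laws can be checked numerically: the harmonic Q-NBL, Pr(b, q) = 1/(q·H_{b-1}), and the fiducial R-NBL, Pr(r, d) = log_r(1 + 1/d), each of which should sum to 1 over its support.

```python
# Numerical check that the quoted Q-NBL and R-NBL probabilities each sum to 1.
from math import log

def harmonic(n):
    """n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

b = 10
q_nbl = [1.0 / (q * harmonic(b - 1)) for q in range(1, b)]
print(sum(q_nbl))                      # -> 1.0 over quanta 1..b-1

r = 10
r_nbl = [log(1 + 1.0 / d, r) for d in range(1, r)]
print(sum(r_nbl))                      # -> 1.0 over leading digits 1..9 (Benford's law)
```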

20 pages, 4706 KiB  
Article
Band Selection Algorithm Based on Multi-Feature and Affinity Propagation Clustering
by Junbin Zhuang, Wenying Chen, Xunan Huang and Yunyi Yan
Remote Sens. 2025, 17(2), 193; https://doi.org/10.3390/rs17020193 - 8 Jan 2025
Cited by 7 | Viewed by 975
Abstract
Hyperspectral images are high-dimensional data containing rich spatial, spectral, and radiometric information, widely used in geological mapping, urban remote sensing, and other fields. However, due to the characteristics of hyperspectral remote sensing images—such as high redundancy, strong correlation, and large data volumes—the classification and recognition of these images present significant challenges. In this paper, we propose a band selection method (GE-AP) based on multi-feature extraction and the Affinity Propagation (AP) clustering algorithm for dimensionality reduction of hyperspectral images, aiming to improve classification accuracy and processing efficiency. In this method, texture features of the band images are extracted using the Gray-Level Co-occurrence Matrix (GLCM), and the Euclidean distance between bands is calculated. A similarity matrix is then constructed by integrating multi-feature information. The AP algorithm clusters the bands of the hyperspectral images to achieve effective band dimensionality reduction. Through simulation and comparison experiments evaluating the overall classification accuracy (OA) and Kappa coefficient, it was found that the GE-AP method achieves the highest OA and Kappa coefficient compared to three other methods, with maximum increases of 8.89% and 13.18%, respectively. This verifies that the proposed method outperforms traditional single-information methods in handling spatial and spectral redundancy between bands, demonstrating good adaptability and stability. Full article
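
A condensed sketch of the GE-AP idea: each band is described by a few GLCM texture statistics, band-to-band Euclidean distances on texture and spectral vectors are fused into a similarity matrix, and Affinity Propagation picks exemplar bands. The hyperspectral cube is random data standing in for a real scene, and the feature set and equal-weight fusion are illustrative choices.

```python
# GLCM texture features + Euclidean-distance similarity + Affinity Propagation band selection.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(9)
cube = rng.integers(0, 256, size=(64, 64, 30), dtype=np.uint8)   # H x W x bands (toy cube)

def band_features(band):
    glcm = graycomatrix(band, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

texture = np.array([band_features(cube[:, :, b]) for b in range(cube.shape[2])])
spectra = cube.reshape(-1, cube.shape[2]).T.astype(float)         # per-band pixel vectors

def neg_sq_euclidean(X):
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return -d / d.max()                                           # scaled negative squared distances

similarity = neg_sq_euclidean(texture) + neg_sq_euclidean(spectra)  # fused multi-feature similarity
ap = AffinityPropagation(affinity="precomputed", damping=0.9,
                         max_iter=500, random_state=0).fit(similarity)
print("selected bands:", sorted(ap.cluster_centers_indices_))
```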
