Search Results (228)

Search Parameters:
Keywords = classification of contours

28 pages, 6143 KiB  
Article
Optical Character Recognition Method Based on YOLO Positioning and Intersection Ratio Filtering
by Kai Cui, Qingpo Xu, Yabin Ding, Jiangping Mei, Ying He and Haitao Liu
Symmetry 2025, 17(8), 1198; https://doi.org/10.3390/sym17081198 - 27 Jul 2025
Abstract
Driven by the rapid development of e-commerce and intelligent logistics, the volume of express delivery services has surged, making the efficient and accurate identification of shipping information a core requirement for automatic sorting systems. However, traditional Optical Character Recognition (OCR) technology struggles to meet the accuracy and real-time demands of complex logistics scenarios due to challenges such as image distortion, uneven illumination, and field overlap. This paper proposes a three-level collaborative recognition method based on deep learning that facilitates structured information extraction through regional normalization, dual-path parallel extraction, and a dynamic matching mechanism. First, for regional normalization, contour-detection-based correction of geometric distortion and a lightweight direction classification model have been improved. Second, by integrating the enhanced YOLOv5s for key area localization with the upgraded PaddleOCR for full-text character extraction, a dual-path parallel architecture for positioning and recognition has been constructed. Finally, a dynamic space–semantic joint matching module has been designed that incorporates anti-offset IoU metrics and hierarchical semantic regularization constraints, thereby enhancing matching robustness through density-adaptive weight adjustment. Experimental results indicate that the accuracy of this method on a self-constructed dataset is 89.5%, with an F1 score of 90.1%, representing a 24.2% improvement over traditional OCR methods. The dynamic matching mechanism elevates the average accuracy of YOLOv5s from 78.5% to 89.7%, surpassing the Faster R-CNN benchmark model while maintaining a real-time processing efficiency of 76 FPS. This study offers a lightweight and highly robust solution for the efficient extraction of order information in complex logistics scenarios, significantly advancing the intelligent upgrading of sorting systems. Full article
(This article belongs to the Section Physics)
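
As a rough illustration of the intersection-ratio filtering and box-matching idea described in this abstract (not the authors' implementation; the box format, threshold, and function names are assumptions), a minimal Python sketch:

```python
# Minimal sketch: assign OCR text boxes to YOLO-detected key regions by IoU.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Plain intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_text_to_regions(text_boxes: List[Box], key_regions: List[Box],
                          thresh: float = 0.3) -> List[Tuple[int, int]]:
    """Greedily assign each OCR text box to the key region with the highest IoU above a threshold."""
    matches = []
    for ti, tb in enumerate(text_boxes):
        best_ri, best_score = -1, thresh
        for ri, rb in enumerate(key_regions):
            score = iou(tb, rb)
            if score > best_score:
                best_ri, best_score = ri, score
        if best_ri >= 0:
            matches.append((ti, best_ri))
    return matches
```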

24 pages, 5980 KiB  
Article
Extraction of Agricultural Parcels Using Vector Contour Segmentation Network with Hybrid Backbone and Multiscale Edge Feature Extraction
by Feiyu Teng, Ling Wu and Shukuan Liu
Remote Sens. 2025, 17(15), 2556; https://doi.org/10.3390/rs17152556 - 23 Jul 2025
Viewed by 185
Abstract
The accurate acquisition of agricultural parcels from remote sensing images is crucial for agricultural management and crop production monitoring. Most existing agricultural parcel extraction methods perform semantic segmentation of remote sensing images through pixel-level classification and then vectorize the resulting raster data. However, this approach faces challenges such as internal cavities, unclosed boundaries, and fuzzy edges, which hinder the accurate extraction of complete agricultural parcels. Therefore, this paper proposes a vector contour segmentation network based on a hybrid backbone and a multiscale edge feature extraction module (HEVNet). We extract vector polygons of agricultural parcels by predicting the locations of contour points, which avoids the above problems that may occur when raster data is converted to vector data. Simultaneously, this paper proposes a hybrid backbone for feature extraction, which combines the respective advantages of the ResNet and Transformer backbone networks to balance local and global features. In addition, we propose a multiscale edge feature extraction module, which extracts and enhances edge features at different scales to prevent the loss of edge details during downsampling. This paper uses datasets from Denmark, the Netherlands, iFLYTEK, and Hengyang in China to evaluate our model. The obtained IoU scores were 67.92%, 81.35%, 78.02%, and 66.35%, respectively, higher than those of the previous best model (DBBANet). The results demonstrate that the proposed model significantly enhances the integrity and edge accuracy of agricultural parcel extraction. Full article
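
For readers unfamiliar with the polygon-level IoU reported here, a minimal sketch of how such a score can be computed with shapely (illustrative only, not the paper's evaluation code; the example coordinates are arbitrary):

```python
# Minimal sketch: IoU between a predicted parcel polygon and a reference polygon.
from shapely.geometry import Polygon

def polygon_iou(pred_pts, ref_pts) -> float:
    """IoU of two simple polygons given as lists of (x, y) vertices."""
    pred, ref = Polygon(pred_pts), Polygon(ref_pts)
    if not pred.is_valid or not ref.is_valid:
        pred, ref = pred.buffer(0), ref.buffer(0)  # repair self-intersections
    union = pred.union(ref).area
    return pred.intersection(ref).area / union if union > 0 else 0.0

# Example: two overlapping quadrilaterals (IoU = 4 / 28 ≈ 0.143)
print(polygon_iou([(0, 0), (4, 0), (4, 4), (0, 4)],
                  [(2, 2), (6, 2), (6, 6), (2, 6)]))
```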

27 pages, 3888 KiB  
Article
Deep Learning-Based Algorithm for the Classification of Left Ventricle Segments by Hypertrophy Severity
by Wafa Baccouch, Bilel Hasnaoui, Narjes Benameur, Abderrazak Jemai, Dhaker Lahidheb and Salam Labidi
J. Imaging 2025, 11(7), 244; https://doi.org/10.3390/jimaging11070244 - 20 Jul 2025
Viewed by 290
Abstract
In clinical practice, left ventricle hypertrophy (LVH) continues to pose a considerable challenge, highlighting the need for more reliable diagnostic approaches. This study aims to propose an automated framework for the quantification of LVH extent and the classification of myocardial segments according to hypertrophy severity using a deep learning-based algorithm. The proposed method was validated on 133 subjects, including both healthy individuals and patients with LVH. The process starts with automatic LV segmentation using U-Net and the segmentation of the left ventricle cavity based on the American Heart Association (AHA) standards, followed by the division of each segment into three equal sub-segments. Then, an automated quantification of regional wall thickness (RWT) was performed. Finally, a convolutional neural network (CNN) was developed to classify each myocardial sub-segment according to hypertrophy severity. The proposed approach demonstrates strong performance in contour segmentation, achieving a Dice Similarity Coefficient (DSC) of 98.47% and a Hausdorff Distance (HD) of 6.345 ± 3.5 mm. For thickness quantification, it reaches a minimal mean absolute error (MAE) of 1.01 ± 1.16. Regarding segment classification, it achieves competitive performance metrics compared to state-of-the-art methods with an accuracy of 98.19%, a precision of 98.27%, a recall of 99.13%, and an F1-score of 98.7%. The obtained results confirm the high performance of the proposed method and highlight its clinical utility in accurately assessing and classifying cardiac hypertrophy. This approach provides valuable insights that can guide clinical decision-making and improve patient management strategies. Full article
(This article belongs to the Section Medical Imaging)
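
A minimal sketch of the two segmentation metrics quoted in this abstract (Dice Similarity Coefficient and Hausdorff distance), computed with NumPy/SciPy on binary masks; this is generic evaluation code, not the authors' pipeline:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between foreground pixel sets.
    Result is in pixel units unless coordinates are rescaled to mm beforehand."""
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```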

23 pages, 6199 KiB  
Article
PDAA: An End-to-End Polygon Dynamic Adjustment Algorithm for Building Footprint Extraction
by Longjie Luo, Jiangchen Cai, Bin Feng and Liufeng Tao
Remote Sens. 2025, 17(14), 2495; https://doi.org/10.3390/rs17142495 - 17 Jul 2025
Viewed by 174
Abstract
Buildings are a significant component of urban space and are essential to smart cities, catastrophe monitoring, and land use planning. However, precisely extracting building polygons from remote sensing images remains difficult because of the variety of building designs and intricate backgrounds. This paper proposes an end-to-end polygon dynamic adjustment algorithm (PDAA) to improve the accuracy and geometric consistency of building contour extraction by dynamically generating and optimizing polygon vertices. The method first locates building instances through the region of interest (RoI) to generate initial polygons, and then uses four core modules for collaborative optimization: (1) the feature enhancement module captures local detail features to improve the robustness of vertex positioning; (2) the contour vertex tuning module fine-tunes vertex coordinates through displacement prediction to enhance geometric accuracy; (3) the learnable redundant vertex removal module screens key vertices based on a classification mechanism to eliminate redundancy; and (4) the missing vertex completion module iteratively restores missed vertices to ensure the integrity of complex contours. PDAA dynamically adjusts the number of vertices to adapt to the geometric characteristics of different buildings, while simplifying the prediction process and reducing computational complexity. Experiments on public datasets such as WHU, Vaihingen, and Inria show that PDAA significantly outperforms existing methods in terms of average precision (AP) and polygon similarity (PolySim). It is at least 2% higher than existing methods in terms of average precision (AP), and the generated polygonal contours are closer to the real building geometry. Values of 75.4% AP and 84.9% PolySim were achieved on the WHU dataset, effectively solving the problems of redundant vertices and contour smoothing, and providing high-precision building vector data support for scenarios such as smart cities and emergency response. Full article
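
PDAA's redundant-vertex removal is learned; purely as a classical point of comparison, the sketch below prunes near-collinear vertices from a building contour with the Douglas-Peucker algorithm in OpenCV. This is not the PDAA module, and the tolerance value is an assumption:

```python
import cv2
import numpy as np

def simplify_building_contour(mask: np.ndarray, rel_epsilon: float = 0.01) -> np.ndarray:
    """Extract the largest contour of a binary building mask and drop near-collinear
    vertices with Douglas-Peucker, a classical stand-in for learned vertex pruning."""
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    epsilon = rel_epsilon * cv2.arcLength(largest, True)  # tolerance relative to perimeter
    return cv2.approxPolyDP(largest, epsilon, True)       # (K, 1, 2) vertex array
```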

24 pages, 4442 KiB  
Article
Time-Series Correlation Optimization for Forest Fire Tracking
by Dongmei Yang, Guohao Nie, Xiaoyuan Xu, Debin Zhang and Xingmei Wang
Forests 2025, 16(7), 1101; https://doi.org/10.3390/f16071101 - 3 Jul 2025
Viewed by 289
Abstract
Accurate real-time tracking of forest fires using UAV platforms is crucial for timely early warning, reliable spread prediction, and effective autonomous suppression. Existing detection-based multi-object tracking methods face challenges in accurately associating targets and maintaining smooth tracking trajectories in complex forest environments. These difficulties stem from the highly nonlinear movement of flames relative to the observing UAV and the lack of robust fire-specific feature modeling. To address these challenges, we introduce AO-OCSORT, an association-optimized observation-centric tracking framework designed to enhance robustness in dynamic fire scenarios. AO-OCSORT builds on the YOLOX detector. To associate detection results across frames and form smooth trajectories, we propose a temporal–physical similarity metric that utilizes temporal information from the short-term motion of targets and incorporates physical flame characteristics derived from optical flow and contours. Subsequently, scene classification and low-score filtering are employed to develop a hierarchical association strategy, reducing the impact of false detections and interfering objects. Additionally, a virtual trajectory generation module is proposed, employing a kinematic model to maintain trajectory continuity during flame occlusion. Locally evaluated on the 1080P-resolution FireMOT UAV wildfire dataset, AO-OCSORT achieves a 5.4% improvement in MOTA over advanced baselines at 28.1 FPS, meeting real-time requirements. This improvement enhances the reliability of fire front localization, which is crucial for forest fire management. Furthermore, AO-OCSORT demonstrates strong generalization, achieving 41.4% MOTA on VisDrone, 80.9% on MOT17, and 92.2% MOTA on DanceTrack. Full article
(This article belongs to the Special Issue Advanced Technologies for Forest Fire Detection and Monitoring)
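
The temporal–physical similarity metric combines motion and flame appearance cues. As a hedged illustration of one ingredient only, the sketch below extracts a crude per-box motion descriptor from Farneback dense optical flow; the function name, parameters, and descriptor are illustrative, not AO-OCSORT's actual metric:

```python
import cv2
import numpy as np

def box_motion_descriptor(prev_gray: np.ndarray, curr_gray: np.ndarray,
                          box: tuple) -> tuple:
    """Mean optical-flow magnitude and direction inside a detection box (x1, y1, x2, y2)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x1, y1, x2, y2 = box
    fx = flow[y1:y2, x1:x2, 0]
    fy = flow[y1:y2, x1:x2, 1]
    magnitude = np.hypot(fx, fy).mean()
    direction = np.arctan2(fy.mean(), fx.mean())  # radians
    return magnitude, direction
```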

30 pages, 8644 KiB  
Article
Development of a UR5 Cobot Vision System with MLP Neural Network for Object Classification and Sorting
by Szymon Kluziak and Piotr Kohut
Information 2025, 16(7), 550; https://doi.org/10.3390/info16070550 - 27 Jun 2025
Viewed by 356
Abstract
This paper presents the implementation of a vision system for a collaborative robot equipped with a web camera and a Python-based control algorithm for automated object-sorting tasks. The vision system aims to detect, classify, and manipulate objects within the robot’s workspace using only 2D camera images. The vision system was integrated with the Universal Robots UR5 cobot and designed for object sorting based on shape recognition. The software stack includes OpenCV for image processing, NumPy for numerical operations, and scikit-learn for multilayer perceptron (MLP) models. The paper outlines the calibration process, including lens distortion correction and camera-to-robot calibration in a hand-in-eye configuration to establish the spatial relationship between the camera and the cobot. Object localization relied on a virtual plane aligned with the robot’s workspace. Object classification was conducted using contour similarity with Hu moments, SIFT-based descriptors with FLANN matching, and MLP-based neural models trained on preprocessed images. Conducted performance evaluations encompassed accuracy metrics for used identification methods (MLP classifier, contour similarity, and feature descriptor matching) and the effectiveness of the vision system in controlling the cobot for sorting tasks. The evaluation focused on classification accuracy and sorting effectiveness, using sensitivity, specificity, precision, accuracy, and F1-score metrics. Results showed that neural network-based methods outperformed traditional methods in all categories, concurrently offering more straightforward implementation. Full article
(This article belongs to the Section Information Applications)
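
The contour-similarity classifier mentioned in this abstract relies on Hu moments; a minimal OpenCV sketch of that idea (not the authors' exact code, and any decision threshold on the returned distance would be an assumption):

```python
import cv2
import numpy as np

def contour_distance(query_mask: np.ndarray, template_mask: np.ndarray) -> float:
    """Hu-moment-based shape distance between the largest contours of two binary masks;
    cv2.matchShapes compares log-scaled Hu moments (lower = more similar)."""
    def largest_contour(mask: np.ndarray):
        contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea)

    q, t = largest_contour(query_mask), largest_contour(template_mask)
    return cv2.matchShapes(q, t, cv2.CONTOURS_MATCH_I1, 0.0)
```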

16 pages, 2853 KiB  
Article
Detecting Lameness in Dairy Cows Based on Gait Feature Mapping and Attention Mechanisms
by Xi Kang, Junjie Liang, Qian Li and Gang Liu
Agriculture 2025, 15(12), 1276; https://doi.org/10.3390/agriculture15121276 - 13 Jun 2025
Viewed by 551
Abstract
Lameness significantly compromises dairy cattle welfare and productivity. Early detection enables prompt intervention, enhancing both animal health and farm efficiency. Current computer vision approaches often rely on isolated lameness feature quantification, disregarding critical interdependencies among gait parameters. This limitation is exacerbated by the distinct kinematic patterns exhibited across lameness severity grades, ultimately reducing detection accuracy. This study presents an integrated computer vision and deep-learning framework for dairy cattle lameness detection and severity classification. The proposed system comprises (1) a Cow Lameness Feature Map (CLFM) model extracting holistic gait kinematics (hoof trajectories and dorsal contour) from walking sequences, and (2) a DenseNet-Integrated Convolutional Attention Module (DCAM) that mitigates inter-individual variability through multi-feature fusion. Experimental validation utilized 3150 annotated lameness feature maps derived from 175 Holsteins under natural walking conditions, demonstrating robust classification performance. For classifying varying degrees of lameness, the accuracy was 92.80%, the sensitivity was 89.21%, and the specificity was 94.60%. For distinguishing healthy from lame dairy cows, the accuracy was 99.05%, the sensitivity was 100%, and the specificity was 98.57%. The experimental results demonstrate the advantage of implementing lameness severity-adaptive feature weighting through a hierarchical network architecture. Full article
(This article belongs to the Special Issue Computer Vision Analysis Applied to Farm Animals)
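
The CLFM model turns gait kinematics into image-like feature maps. The sketch below shows one generic way such a map could be assembled from per-frame hoof and dorsal-contour measurements; the layout, normalization, and array shapes are assumptions, not the paper's construction:

```python
import numpy as np

def gait_feature_map(hoof_y: np.ndarray, back_height: np.ndarray) -> np.ndarray:
    """Stack per-frame gait signals into a 2D array a CNN can consume as an image.
    hoof_y: (T, 4) vertical hoof positions; back_height: (T, K) sampled dorsal-contour heights."""
    features = np.concatenate([hoof_y, back_height], axis=1)      # (T, 4 + K)
    # Min-max normalize each column so trajectories and contour heights share a scale.
    mins, maxs = features.min(axis=0), features.max(axis=0)
    normalized = (features - mins) / np.maximum(maxs - mins, 1e-7)
    return (normalized.T * 255).astype(np.uint8)                  # (4 + K, T) grayscale map
```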

40 pages, 3224 KiB  
Article
A Comparative Study of Image Processing and Machine Learning Methods for Classification of Rail Welding Defects
by Mohale Emmanuel Molefe, Jules Raymond Tapamo and Siboniso Sithembiso Vilakazi
J. Sens. Actuator Netw. 2025, 14(3), 58; https://doi.org/10.3390/jsan14030058 - 29 May 2025
Viewed by 1730
Abstract
Defects formed during the thermite welding process of two sections of rails require the welded joints to be inspected for quality, and the most used non-destructive method for inspection is radiography testing. However, the conventional defect investigation process from the obtained radiography images is costly, lengthy, and subjective as it is conducted manually by trained experts. Additionally, it has been shown that most rail breaks occur due to a crack initiated from the weld joint defect that was either misclassified or undetected. To improve the condition monitoring of rails, the railway industry requires an automated defect investigation system capable of detecting and classifying defects automatically. Therefore, this work proposes a method based on image processing and machine learning techniques for the automated investigation of defects. Histogram Equalization methods are first applied to improve image quality. Then, the extraction of the weld joint from the image background is achieved using the Chan–Vese Active Contour Model. A comparative investigation is carried out between Deep Convolution Neural Networks, Local Binary Pattern extractors, and Bag of Visual Words methods (with the Speeded-Up Robust Features extractor) for extracting features in weld joint images. Classification of features extracted by local feature extractors is achieved using Support Vector Machines, K-Nearest Neighbor, and Naive Bayes classifiers. The highest classification accuracy of 95% is achieved by the Deep Convolution Neural Network model. A Graphical User Interface is provided for the onsite investigation of defects. Full article
(This article belongs to the Special Issue AI-Assisted Machine-Environment Interaction)
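
A minimal sketch of the preprocessing and weld-joint extraction steps named in this abstract, using stock OpenCV and scikit-image routines; the iteration count and smoothing parameter are assumptions, not the authors' settings:

```python
import cv2
import numpy as np
from skimage.segmentation import morphological_chan_vese

def extract_weld_region(radiograph_gray: np.ndarray) -> np.ndarray:
    """Equalize contrast, then segment the weld joint with a morphological
    Chan-Vese active contour; returns a binary mask."""
    equalized = cv2.equalizeHist(radiograph_gray)              # global histogram equalization
    image = equalized.astype(np.float64) / 255.0
    mask = morphological_chan_vese(image, 200, smoothing=2)    # 200 iterations (assumed)
    return mask.astype(np.uint8)
```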

21 pages, 43908 KiB  
Article
WHA-Net: A Low-Complexity Hybrid Model for Accurate Pseudopapilledema Classification in Fundus Images
by Junpeng Pei, Yousong Wang, Mingliang Ge, Jun Li, Yixing Li, Wei Wang and Xiaohong Zhou
Bioengineering 2025, 12(5), 550; https://doi.org/10.3390/bioengineering12050550 - 21 May 2025
Viewed by 540
Abstract
The fundus manifestations of pseudopapilledema closely resemble those of optic disc edema, making their differentiation particularly challenging in certain clinical situations. However, rapid and accurate diagnosis is crucial for alleviating patient anxiety and guiding treatment strategies. This study proposes an efficient low-complexity hybrid model, WHA-Net, which innovatively integrates three core modules to achieve precise auxiliary diagnosis of pseudopapilledema. First, the wavelet convolution (WTC) block is introduced to enhance the model’s characterization capability for vessel and optic disc edge details in fundus images through 2D wavelet transform and deep convolution. Additionally, the hybrid attention inverted residual (HAIR) block is incorporated to extract critical features such as vascular morphology, hemorrhages, and exudates. Finally, the Agent-MViT module effectively captures the continuity features of optic disc contours and retinal vessels in fundus images while reducing the computational complexity of traditional Transformers. The model was trained and evaluated on a dataset of 1793 rigorously curated fundus images, comprising 895 normal optic discs, 485 optic disc edema (ODE), and 413 pseudopapilledema (PPE) cases. On the test set, the model achieved outstanding performance, with 97.79% accuracy, 95.55% precision, 95.69% recall, and 98.53% specificity. Comparative experiments confirm the superiority of WHA-Net in classification tasks, while ablation studies validate the effectiveness and rationality of each module’s combined design. This research provides a clinically valuable solution for the automated differential diagnosis of pseudopapilledema, with both computational efficiency and diagnostic reliability. Full article
(This article belongs to the Section Biomedical Engineering and Biomaterials)
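
The WTC block builds on a 2D wavelet transform. Purely as an illustration of that transform (not the WHA-Net block itself, and the wavelet family is an assumption), a one-level decomposition with PyWavelets:

```python
import numpy as np
import pywt

def wavelet_subbands(fundus_gray: np.ndarray, wavelet: str = "haar"):
    """One-level 2D DWT: returns the approximation band plus horizontal, vertical,
    and diagonal detail bands, which emphasize vessel and disc-edge structure."""
    cA, (cH, cV, cD) = pywt.dwt2(fundus_gray.astype(np.float32), wavelet)
    return cA, cH, cV, cD
```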

19 pages, 18677 KiB  
Article
Generation of Structural Components for Indoor Spaces from Point Clouds
by Junhyuk Lee, Yutaka Ohtake, Takashi Nakano and Daisuke Sato
Sensors 2025, 25(10), 3012; https://doi.org/10.3390/s25103012 - 10 May 2025
Viewed by 474
Abstract
Point clouds from laser scanners have been widely used in recent research on indoor modeling methods. Currently, particularly in data-driven modeling methods, data preprocessing for dividing structural components and nonstructural components is required before modeling. In this paper, we propose an indoor modeling method without the classification of structural and nonstructural components. A pre-mesh is generated for constructing the adjacency relations of point clouds, and plane components are extracted using planar-based region growing. Then, the distance fields of each plane are calculated, and voxel data referred to as a surface confidence map are obtained. Subsequently, the inside and outside of the indoor model are classified using a graph-cut algorithm. Finally, indoor models with watertight meshes are generated via dual contouring and mesh refinement. The experimental results showed that the point-to-mesh error ranged from approximately 2 mm to 50 mm depending on the dataset. Furthermore, completeness—measured as the proportion of original point-cloud data successfully reconstructed into the mesh—approached 1.0 for single-room datasets and reached around 0.95 for certain multiroom and synthetic datasets. These results demonstrate the effectiveness of the proposed method in automatically removing non-structural components and generating clean structural meshes. Full article
(This article belongs to the Section Sensing and Imaging)
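
Plane extraction is the first step of the pipeline described here. As a simple stand-in for the paper's planar-based region growing, the sketch below iteratively peels off dominant planes with RANSAC in Open3D; the thresholds and point-count cutoff are placeholders:

```python
import open3d as o3d

def extract_planes(pcd: o3d.geometry.PointCloud, max_planes: int = 6,
                   distance_threshold: float = 0.02):
    """Iteratively segment dominant planes (walls, floor, ceiling) from an indoor scan."""
    planes, remaining = [], pcd
    for _ in range(max_planes):
        if len(remaining.points) < 1000:
            break
        model, inliers = remaining.segment_plane(distance_threshold=distance_threshold,
                                                 ransac_n=3, num_iterations=1000)
        planes.append((model, remaining.select_by_index(inliers)))
        remaining = remaining.select_by_index(inliers, invert=True)
    return planes, remaining
```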

8 pages, 3998 KiB  
Technical Note
Mini Abdomen Experience: A Novel Approach for Mini-Abdominoplasty Minimally Invasive (MAMI) Abdominal Contouring
by Rodrigo Ferraz Galhego, Tulio Martins, Alvaro Cota Carvalho, Marco Faria-Correa and Raquel Nogueira
Surg. Tech. Dev. 2025, 14(2), 16; https://doi.org/10.3390/std14020016 - 9 May 2025
Viewed by 864
Abstract
Purpose: Our aim is to offer an additional surgical option for patients with rectus diastasis, with or without associated abdominal wall hernias, through a minimally invasive approach with endoscopic surgical correction, presenting a new method for abdominal contouring via minimally invasive mini-abdominoplasty (MAMI). Ideas: According to the European Hernia Society (EHS) classification for RD, a widening greater than 2 cm of the linea alba is generally considered an indication for surgical correction. Recent approaches, such as MILA and SCOLA, are indicated for patients with a body mass index (BMI) of up to 28, based solely on height and weight. However, some authors consider this insufficient for determining the best surgical indication. Despite advances in skin retraction, there is still no evidence on how these devices affect postoperative outcomes when added to these techniques, as they depend on multiple factors such as age, skin firmness, number of passes, applied energy, etc. Consequently, even patients with a BMI of up to 28 may present significant flaccidity both above and below the umbilicus, as well as poor skin quality (thin, lax, with stretch marks), making SCOLA or MILA surgery alone unsuitable due to possible skin redundancy after surgery. Similarly, even patients with a high-positioned umbilicus, moderate flaccidity, and rectus diastasis, who in the past would have been strictly indicated for abdominoplasty, may benefit from mini-abdominoplasty with a minimally invasive approach (MAMI). Discussion: The main objective of this study is to provide another surgical option for patients who would otherwise be indicated for abdominoplasty and also for those undergoing MILA or SCOLA who still require minor skin removal to enhance the surgical result. Based on our experience, mini-abdominoplasty with a minimally invasive approach (MAMI) has the potential to serve a larger number of patients, since most present degrees of skin laxity that, even after using technologies, require skin excision. In addition to complementing the results, it reduces complications, results in smaller scars, allows a better correction and visualization of the diastasis, avoids periumbilical scars, and offers faster recovery compared to abdominoplasty. Conclusions: MAMI surgery has proven to be a safe and reproducible approach for selected women who wish to restore feminine body features after pregnancy and achieve a quick recovery. It yields satisfactory esthetic results due to the minimized scar, preservation of the natural umbilical scar, and improved surgical correction of rectus diastasis. Full article
(This article belongs to the Special Issue New Insights into Plastic Aesthetic and Regenerative Surgery)

29 pages, 7837 KiB  
Article
Automated Eddy Identification and Tracking in the Northwest Pacific Based on Conventional Altimeter and SWOT Data
by Lan Zhang, Cheinway Hwang, Han-Yang Liu, Emmy T. Y. Chang and Daocheng Yu
Remote Sens. 2025, 17(10), 1665; https://doi.org/10.3390/rs17101665 - 9 May 2025
Viewed by 693
Abstract
Eddy identification and tracking are essential for understanding ocean dynamics. This study employed the elliptical Gaussian function (EGF) simulations and the py-eddy-tracker (PET) algorithm, validated by Surface Velocity Program (SVP) drifter data, to track eddies in the western North Pacific Ocean. The PET method effectively identified large- and mesoscale eddies but struggled with submesoscale features, indicating areas for improvement. Simulated satellite altimetry by EGF, mirroring Surface Water and Ocean Topography (SWOT)’s high-resolution observations, confirmed PET’s capability in processing fine-scale data, though accuracy declined for submesoscale eddies. Over 22 years, 1,188,649 eddies were identified, mainly concentrated east of Taiwan. Temporal analysis showed interannual variability, more cyclonic than anticyclonic eddies, and a seasonal peak in spring, likely influenced by marine conditions. Short-lived eddies were uniformly distributed, while long-lived ones followed major currents, validating PET’s robustness with SVP drifters. The launch of the SWOT satellite in 2022 has enhanced fine-scale ocean studies, enabling the detection of submesoscale eddies previously unresolved by conventional altimetry. SWOT observations reveal intricate eddy structures, including small cyclonic features in the northwestern Pacific, demonstrating its potential for improving eddy tracking. Future work should refine the PET algorithm for SWOT’s swath altimetry, addressing data gaps and unclosed contours. Integrating SWOT with in situ drifters, numerical models, and machine learning will further enhance eddy classification, benefiting ocean circulation studies and climate modeling. Full article
(This article belongs to the Special Issue Satellite Remote Sensing for Ocean and Coastal Environment Monitoring)
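
A minimal NumPy sketch of an elliptical Gaussian sea-surface-height anomaly of the kind the EGF simulations idealize; the amplitude, semi-axes, rotation, and grid below are arbitrary example values, not those of the study:

```python
import numpy as np

def elliptical_gaussian_ssh(x, y, x0=0.0, y0=0.0, amplitude=0.2,
                            a=50.0, b=30.0, theta=np.deg2rad(30.0)):
    """SSH anomaly (m) of an idealized eddy: an elliptical Gaussian with semi-axes
    a, b (km) rotated by theta, centered at (x0, y0)."""
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return amplitude * np.exp(-(xr**2 / (2 * a**2) + yr**2 / (2 * b**2)))

# Example 401 x 401 grid spanning ±200 km around the eddy center
X, Y = np.meshgrid(np.linspace(-200, 200, 401), np.linspace(-200, 200, 401))
ssh = elliptical_gaussian_ssh(X, Y)
```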

17 pages, 5998 KiB  
Article
Antimony Ore Identification Method for Small Sample X-Ray Images with Random Distribution
by Lanhao Wang, Chen Ding, Hongdong Hu, Hongyan Wang and Wei Dai
Minerals 2025, 15(5), 483; https://doi.org/10.3390/min15050483 - 5 May 2025
Viewed by 437
Abstract
The performance of image processing is crucial for accurately sorting antimony ore, yet several challenges persist. Existing image segmentation methods struggle with X-ray ore images that contain high noise and interference. Additionally, traditional classification methods primarily utilize single physical properties, such as the R-value, leading to low accuracy. To address segmentation issues, this paper proposes an improved method based on concave point detection. This involves obtaining a binary image of antimony ore through adaptive threshold segmentation, extracting the ore contour, and detecting concave points using advanced techniques. The influence of interfering concave points is minimized with the three-wire method, while noise points are reduced through morphological operations based on area calculations. This results in accurate segmentation of the adherent antimony ore. For classification, this paper introduces a training method that combines transfer learning with shallow partial initialization. Transfer learning is employed to mitigate the challenges of limited antimony ore datasets when using deep learning models. The pre-trained model is then partially re-initialized according to a tailored strategy. Finally, fine-tuning is performed on the antimony ore dataset to achieve optimal results. Experimental results show that the antimony ore segmentation method proposed in this paper achieves accurate segmentation (96.27% correct segmentation rate). The classification model training method proposed in this paper effectively re-initializes redundant parameters of the pre-trained model and achieves better classification performance on the target dataset (86.76% accuracy). Both methods are superior to traditional methods. Full article
(This article belongs to the Special Issue Recent Advances in Ore Comminution)
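
A hedged sketch of the segmentation front end described here: adaptive thresholding, contour extraction, and concave-point candidates via convexity defects, using stock OpenCV calls. The block size, offset, and depth threshold are assumptions, and the paper's three-wire interference filtering is not reproduced:

```python
import cv2
import numpy as np

def concave_point_candidates(xray_gray: np.ndarray, depth_thresh: float = 10.0):
    """Binarize an X-ray ore image, take each external contour, and return deep
    convexity-defect points as candidate concave (adhesion) points."""
    binary = cv2.adaptiveThreshold(xray_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 51, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for cnt in contours:
        if len(cnt) < 5:
            continue
        hull = cv2.convexHull(cnt, returnPoints=False)
        defects = cv2.convexityDefects(cnt, hull)
        if defects is None:
            continue
        for start, end, far, depth in defects[:, 0]:
            if depth / 256.0 > depth_thresh:   # defect depth is stored as fixed-point
                candidates.append(tuple(cnt[far][0]))
    return candidates
```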

24 pages, 10867 KiB  
Article
Machine Learning-Based Smartphone Grip Posture Image Recognition and Classification
by Dohoon Kwon, Xin Cui, Yejin Lee, Younggeun Choi, Aditya Subramani Murugan, Eunsik Kim and Heecheon You
Appl. Sci. 2025, 15(9), 5020; https://doi.org/10.3390/app15095020 - 30 Apr 2025
Viewed by 604
Abstract
Uncomfortable smartphone grip postures resulting from inappropriate user interface design can degrade smartphone usability. This study aims to develop a classification model for smartphone grip postures by detecting the positions of the hand and fingers on smartphones using machine learning techniques. Seventy participants (35 males and 35 females with an average age of 38.5 ± 12.2 years) with varying hand sizes participated in the smartphone grip posture experiment. The participants performed four tasks (making calls, listening to music, sending text messages, and web browsing) using nine smartphone mock-ups of different sizes, while cameras positioned above and below their hands recorded their usage. A total of 3278 grip posture images were extracted from the recorded videos and were preprocessed using a skin color and hand contour detection model. The grip postures were categorized into seven types, and three models (MobileNetV2, Inception V3, and ResNet-50), along with an ensemble model, were used for classification. The ensemble-based classification model achieved an accuracy of 95.9%, demonstrating higher accuracy than the individual models: MobileNetV2 (90.6%), ResNet-50 (94.2%), and Inception V3 (85.9%). The classification model developed in this study can efficiently analyze grip postures, thereby improving usability in the development of smartphones and other electronic devices. Full article
(This article belongs to the Special Issue Novel Approaches and Applications in Ergonomic Design III)
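
The skin color and hand contour preprocessing mentioned in this abstract can be roughly approximated with a YCrCb color threshold. The ranges and morphology below are common illustrative values, not the model or thresholds used in the study:

```python
import cv2
import numpy as np

def hand_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Rough skin segmentation in YCrCb followed by largest-contour selection."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array([0, 133, 77], np.uint8),
                       np.array([255, 173, 127], np.uint8))   # typical skin Cr/Cb range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = np.zeros_like(mask)
    if contours:
        cv2.drawContours(hand, [max(contours, key=cv2.contourArea)], -1, 255, cv2.FILLED)
    return hand
```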

33 pages, 20540 KiB  
Article
SG-ResNet: Spatially Adaptive Gabor Residual Networks with Density-Peak Guidance for Joint Image Steganalysis and Payload Location
by Zhengliang Lai, Chenyi Wu, Xishun Zhu, Jianhua Wu and Guiqin Duan
Mathematics 2025, 13(9), 1460; https://doi.org/10.3390/math13091460 - 29 Apr 2025
Viewed by 430
Abstract
Image steganalysis detects hidden information in digital images by identifying statistical anomalies, serving as a forensic tool to reveal potential covert communication. Effective steganalysis methods for deep learning-based image steganography remain relatively scarce, particularly those designed to extract hidden information. This paper introduces an innovative image steganalysis method based on generative adaptive Gabor residual networks with density-peak guidance (SG-ResNet). SG-ResNet employs a dual-stream collaborative architecture to achieve precise detection and reconstruction of steganographic information. The classification subnet utilizes dual-frequency adaptive Gabor convolutional kernels to decouple high-frequency texture and low-frequency contour components in images. It combines density peak clustering with three quantization and transformation-enhanced convolutional blocks to generate steganographic covariance matrices, enhancing the weak steganographic signals. The reconstruction subnet synchronously constructs multi-scale features, preserves steganographic spatial fingerprints with channel-separated residual spatial rich model and pixel reorganization operators, and achieves sub-pixel-level steganographic localization via an iterative optimization mechanism of feedback residual modules. Experimental results obtained with datasets generated by several public steganography algorithms demonstrate that SG-ResNet achieves state-of-the-art detection accuracy of 0.94 and a PSNR of 29 dB between reconstructed and original secret images. Full article
(This article belongs to the Special Issue New Solutions for Multimedia and Artificial Intelligence Security)
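
The Gabor kernels in SG-ResNet are adaptive and learned. The sketch below builds a fixed, classical Gabor filter bank with OpenCV purely to illustrate the kind of texture/contour decomposition involved; the kernel size, wavelengths, and other parameters are arbitrary examples:

```python
import cv2
import numpy as np

def gabor_responses(gray: np.ndarray, ksize: int = 15,
                    wavelengths=(4.0, 8.0), n_orientations: int = 4) -> np.ndarray:
    """Filter an image with a small bank of fixed Gabor kernels at two wavelengths
    (a crude high-/low-frequency split) and several orientations."""
    responses = []
    for lambd in wavelengths:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5, psi=0.0)
            responses.append(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel))
    return np.stack(responses, axis=0)
```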