Search Results (1,887)

Search Parameters:
Keywords = data registration

25 pages, 2562 KiB  
Article
Semantic-Aware Cross-Modal Transfer for UAV-LiDAR Individual Tree Segmentation
by Fuyang Zhou, Haiqing He, Ting Chen, Tao Zhang, Minglu Yang, Ye Yuan and Jiahao Liu
Remote Sens. 2025, 17(16), 2805; https://doi.org/10.3390/rs17162805 - 13 Aug 2025
Abstract
Cross-modal semantic segmentation of individual tree LiDAR point clouds is critical for accurately characterizing tree attributes, quantifying ecological interactions, and estimating carbon storage. However, in forest environments, this task faces key challenges such as high annotation costs and poor cross-domain generalization. To address these issues, this study proposes a cross-modal semantic transfer framework tailored for individual tree point cloud segmentation in forested scenes. Leveraging co-registered UAV-acquired RGB imagery and LiDAR data, we construct a technical pipeline of “2D semantic inference—3D spatial mapping—cross-modal fusion” to enable annotation-free semantic parsing of 3D individual trees. Specifically, we first introduce a novel Multi-Source Feature Fusion Network (MSFFNet) to achieve accurate instance-level segmentation of individual trees in the 2D image domain. Subsequently, we develop a hierarchical two-stage registration strategy to effectively align dense matched point clouds (MPC) generated from UAV imagery with LiDAR point clouds. On this basis, we propose a probabilistic cross-modal semantic transfer model that builds a semantic probability field through multi-view projection and the expectation–maximization algorithm. By integrating geometric features and semantic confidence, the model establishes semantic correspondences between 2D pixels and 3D points, thereby achieving spatially consistent semantic label mapping. This facilitates the transfer of semantic annotations from the 2D image domain to the 3D point cloud domain. The proposed method is evaluated on two forest datasets. The results demonstrate that the proposed individual tree instance segmentation approach achieves the highest performance, with an IoU of 87.60%, compared to state-of-the-art methods such as Mask R-CNN, SOLOV2, and Mask2Former. Furthermore, the cross-modal semantic label transfer framework significantly outperforms existing mainstream methods in individual tree point cloud semantic segmentation across complex forest scenarios. Full article
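
Where the abstract describes mapping 2D semantic predictions onto LiDAR points via multi-view projection, a minimal single-view sketch of that label-transfer step is shown below (plain NumPy; the variable names `K`, `R`, `t`, and `label_map` are illustrative assumptions, not the authors' code).

```python
import numpy as np

def transfer_labels(points, K, R, t, label_map, unlabeled=-1):
    """Project 3D points into one calibrated view and look up per-pixel semantic labels.

    points    : (N, 3) LiDAR points in world coordinates
    K         : (3, 3) camera intrinsic matrix
    R, t      : rotation (3, 3) and translation (3,) mapping world -> camera coordinates
    label_map : (H, W) integer semantic segmentation of the co-registered image
    """
    labels = np.full(len(points), unlabeled, dtype=int)
    cam = points @ R.T + t                      # world -> camera frame
    front = cam[:, 2] > 0                       # only points in front of the camera
    uv = cam[front] @ K.T
    u = (uv[:, 0] / uv[:, 2]).round().astype(int)
    v = (uv[:, 1] / uv[:, 2]).round().astype(int)
    H, W = label_map.shape
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.flatnonzero(front)[inside]
    labels[idx] = label_map[v[inside], u[inside]]
    return labels
```

In the paper, labels from many views are fused probabilistically with an expectation–maximization step; the sketch above keeps only the geometric projection and label lookup for a single view.
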
24 pages, 3617 KiB  
Article
A Comparison Between Unimodal and Multimodal Segmentation Models for Deep Brain Structures from T1- and T2-Weighted MRI
by Nicola Altini, Erica Lasaracina, Francesca Galeone, Michela Prunella, Vladimiro Suglia, Leonarda Carnimeo, Vito Triggiani, Daniele Ranieri, Gioacchino Brunetti and Vitoantonio Bevilacqua
Mach. Learn. Knowl. Extr. 2025, 7(3), 84; https://doi.org/10.3390/make7030084 - 13 Aug 2025
Abstract
Accurate segmentation of deep brain structures is critical for preoperative planning in such neurosurgical procedures as Deep Brain Stimulation (DBS). Previous research has showcased successful pipelines for segmentation from T1-weighted (T1w) Magnetic Resonance Imaging (MRI) data. Nevertheless, the role of T2-weighted (T2w) MRI data has been underexploited so far. This study proposes and evaluates a fully automated deep learning pipeline based on nnU-Net for the segmentation of eight clinically relevant deep brain structures. A heterogeneous dataset has been prepared by gathering 325 paired T1w and T2w MRI scans from eight publicly available sources, which have been annotated by means of an atlas-based registration approach. Three 3D nnU-Net models—unimodal T1w, unimodal T2w, and multimodal (encompassing both T1w and T2w)—have been trained and compared by using 5-fold cross-validation and a separate test set. The outcomes prove that the multimodal model consistently outperforms the T2w unimodal model and achieves comparable performance with the T1w unimodal model. On our dataset, all proposed models significantly exceed the performance of the state-of-the-art DBSegment tool. These findings underscore the value of multimodal MRI in enhancing deep brain segmentation and offer a robust framework for accurate delineation of subcortical targets in both research and clinical settings. Full article
(This article belongs to the Special Issue Deep Learning in Image Analysis and Pattern Recognition, 2nd Edition)
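
As a rough illustration of what a multimodal input means in practice, the sketch below stacks co-registered T1w and T2w volumes into a two-channel array with per-modality z-scoring, the kind of preprocessing typically fed to a 3D segmentation network such as nnU-Net; the foreground masking and function names are assumptions, not the authors' pipeline.

```python
import numpy as np

def stack_modalities(t1w: np.ndarray, t2w: np.ndarray) -> np.ndarray:
    """Return a (2, D, H, W) array: channel 0 = z-scored T1w, channel 1 = z-scored T2w."""
    def zscore(vol: np.ndarray) -> np.ndarray:
        mask = vol > 0                          # crude foreground mask, illustrative only
        return (vol - vol[mask].mean()) / (vol[mask].std() + 1e-8)
    assert t1w.shape == t2w.shape, "modalities must be co-registered and resampled to the same grid"
    return np.stack([zscore(t1w), zscore(t2w)], axis=0)
```
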

15 pages, 566 KiB  
Systematic Review
Efficacy of Oral Mucosal Grafting for Nasal, Septal, and Sinonasal Reconstruction: A Systematic Review of the Literature
by Marta Santiago Horcajada, Alvaro Sánchez Barrueco, William Aragonés Sanzen-Baker, Gonzalo Díaz Tapia, Ramón Moreno Luna, Felipe Villacampa Aubá, Carlos Cenjor Español and José Miguel Villacampa Aubá
Life 2025, 15(8), 1281; https://doi.org/10.3390/life15081281 - 13 Aug 2025
Abstract
Background: Reconstruction of nasal, septal, and nasosinusal defects is challenging when the native mucosa is absent or damaged. Oral mucosal grafts have been proposed as a reconstructive option due to their favorable biological properties, but their use in rhinology remains poorly defined. Objective: To evaluate the clinical efficacy and technical characteristics of oral mucosal grafting for nasal, septal, nasosinusal, and skull base reconstruction. Data Sources: PubMed, Embase, Web of Science, and Cochrane Library were searched for studies published between January 2005 and May 2025. Study Eligibility Criteria: We included original human studies (case reports or series) reporting the use of free or pedicled oral mucosal grafts in nasal, septal, nasosinusal, or skull base reconstruction. Non-original studies, animal or preclinical studies, and articles not in English or Spanish were excluded. Methods of Review: One reviewer screened titles, abstracts, and full texts using Rayyan. Methodological quality was assessed using JBI tools for case reports and case series. A narrative synthesis was conducted due to clinical heterogeneity and absence of comparison groups. The resulting assessments were reviewed by the co-authors to confirm accuracy and resolve any potential discrepancies. Results: Of 467 records identified, 10 studies were included. All were case reports or series involving buccal, palatal, or labial mucosa. Most reported good graft integration, low complication rates, and favorable functional outcomes. No randomized studies or comparative analyses were found. Limitations: Included studies had small sample sizes, lacked control groups, and showed heterogeneous methods and follow-up. The certainty of evidence could not be formally assessed. Conclusions: Oral mucosal grafting is a promising reconstructive option in selected nasosinusal and skull base defects. However, stronger comparative studies are needed to determine its clinical superiority. Registration: This review was not registered in any public database. Full article
(This article belongs to the Special Issue New Trends in Otorhinolaryngology)

46 pages, 1676 KiB  
Review
Neural–Computer Interfaces: Theory, Practice, Perspectives
by Ignat Dubynin, Maxim Zemlyanskov, Irina Shalayeva, Oleg Gorskii, Vladimir Grinevich and Pavel Musienko
Appl. Sci. 2025, 15(16), 8900; https://doi.org/10.3390/app15168900 - 12 Aug 2025
Abstract
This review outlines the technological principles of neural–computer interface (NCI) construction, classifying them according to: (1) the degree of intervention (invasive, semi-invasive, and non-invasive); (2) the direction of signal communication, including BCI (brain–computer interface) for converting neural activity into commands for external devices, CBI (computer–brain interface) for translating artificial signals into stimuli for the CNS, and BBI (brain–brain interface) for direct brain-to-brain interaction systems that account for agency; and (3) the mode of user interaction with technology (active, reactive, passive). For each NCI type, we detail the fundamental data processing principles, covering signal registration, digitization, preprocessing, classification, encoding, command execution, and stimulation, alongside engineering implementations ranging from EEG/MEG to intracortical implants and from transcranial magnetic stimulation (TMS) to intracortical microstimulation (ICMS). We also review mathematical modeling methods for NCIs, focusing on optimizing the extraction of informative features from neural signals—decoding for BCI and encoding for CBI—followed by a discussion of quasi-real-time operation and the use of DSP and neuromorphic chips. Quantitative metrics and rehabilitation measures for evaluating NCI system effectiveness are considered. Finally, we highlight promising future research directions, such as the development of electrochemical interfaces, biomimetic hierarchical systems, and energy-efficient technologies capable of expanding brain functionality. Full article
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)

16 pages, 1791 KiB  
Review
Use of Radiomics to Predict Adverse Outcomes in Patients with Pulmonary Embolism: A Scoping Review of an Unresolved Clinical Challenge
by Miguel Ángel Casado-Suela, Juan Torres-Macho, Jesús Prada-Alonso, Rodrigo Pastorín-Salis, Ana Martínez de la Casa-Muñoz, Eva Ruiz-Navío, Ana Bustamante-Fermosel and Anabel Franco-Moreno
Diagnostics 2025, 15(16), 2022; https://doi.org/10.3390/diagnostics15162022 - 12 Aug 2025
Abstract
Background: Acute pulmonary embolism (APE) presents across a broad clinical spectrum, ranging from asymptomatic pulmonary emboli to sudden death. Risk stratification of patients with APE is mandatory for determining the appropriate therapeutic management approach. However, the optimal and most clinically relevant combination of predictors of death remains to be determined. Radiomics is an emerging discipline in medicine that extracts and analyzes quantitative data from medical images using mathematical algorithms. In APE, these data can reveal thrombus characteristics that are not visible to the naked eye, which may help to more accurately identify patients at higher risk of early clinical deterioration or mortality. We conducted a scoping review to explore the current evidence on the prognostic performance of radiomic models in patients with APE. Methods: PubMed, Web of Science, EMBASE, and Scopus were searched for studies published between January 2010 and April 2025. Eligible studies evaluated the use of radiomics to predict adverse outcomes in patients with APE. The PROSPERO registration number is CRD420251083318. Results: Nine studies were included in this review. There was significant heterogeneity in the methodology for feature selection and model development. Radiomic models demonstrated variable performance across studies. Models that combined radiomic features with clinical data tended to show better predictive accuracy. Conclusions: This scoping review underscores the potential of radiomic models, particularly when combined with clinical data, to improve risk stratification in patients with APE. Full article
(This article belongs to the Special Issue The Applications of Radiomics in Precision Diagnosis)

24 pages, 3872 KiB  
Article
Practicality of Blockchain Technology for Land Registration: A Namibian Case Study
by Johannes Pandeni Paavo, Rafael Rodríguez-Puentes and Uchendu Eugene Chigbu
Land 2025, 14(8), 1626; https://doi.org/10.3390/land14081626 - 12 Aug 2025
Abstract
In the context of the information age, a land administration system must be technologically driven to manage land information and data transparently. This ensures the registration and protection of land rights for people. In this study, we present a Blockchain Land Registration system designed as a tool for enhancing land administration in sub-Saharan Africa (SSA). Drawing inspiration from Namibia, we have developed a user interface comprising a homepage/landing page, a user registration form, a login form that incorporates MetaMask authentication prompts, and an authenticated dashboard for landowners and purchasers. Design Science was employed as the methodology for this proposal. As technical design research addressing a land administration problem (inefficient land registration), the work covers system design, blockchain integration, and testing and development. Based on this approach, blockchain was conceptualised as an “artefact” that could be investigated as a technical solution to address the challenges posed by inefficient land registration. This study provides a comprehensive roadmap for the conceptualisation, development, validation, and deployment of a blockchain-based land titles registry suitable for SSA countries. It also discusses the practical and policy implications of blockchain in land administration in SSA countries. Full article

18 pages, 3407 KiB  
Article
Standalone AI Versus AI-Assisted Radiologists in Emergency ICH Detection: A Prospective, Multicenter Diagnostic Accuracy Study
by Anna N. Khoruzhaya, Polina A. Sakharova, Kirill M. Arzamasov, Elena I. Kremneva, Dmitriy V. Burenchev, Rustam A. Erizhokov, Olga V. Omelyanskaya, Anton V. Vladzymyrskyy and Yuriy A. Vasilev
J. Clin. Med. 2025, 14(16), 5700; https://doi.org/10.3390/jcm14165700 - 12 Aug 2025
Abstract
Background/Objectives. Intracranial hemorrhages (ICHs) require immediate diagnosis for optimal clinical outcomes. Artificial intelligence (AI) is considered a potential solution for optimizing neuroimaging under conditions of radiologist shortage and increasing workload. This study aimed to directly compare diagnostic effectiveness between standalone AI services and AI-assisted radiologists in detecting ICHs on brain CT. Methods. A prospective, multicenter comparative study was conducted in 67 medical organizations in Moscow over 15+ months (April 2022–December 2024). We analyzed 3409 brain CT studies containing 1101 ICH cases (32.3%). Three commercial AI services with state registration were compared with radiologist conclusions formulated with access to AI results as auxiliary tools. Statistical analysis included McNemar’s test for paired data and Cohen’s h effect size analysis. Results. Radiologists with AI assistance statistically significantly outperformed AI services across all diagnostic metrics (p < 0.001): sensitivity 98.91% vs. 95.91%, specificity 99.83% vs. 87.35%, and accuracy 99.53% vs. 90.11%. The radiologists’ diagnostic odds ratio exceeded that of AI by 323-fold. The critical difference was in false-positive rates: 293 cases for AI vs. 4 for radiologists (73-fold increase). Complete complementarity of ICH misses was observed: all 12 cases undetected by radiologists were identified by AI, while all 45 cases missed by AI were diagnosed by radiologists. Agreement between methods was 89.6% (Cohen’s kappa 0.776). Conclusions. Radiologists maintain their role as the gold standard in ICH diagnosis, significantly outperforming AI services. Error complementarity indicates potential for improvement through systematic integration of AI as a “second reader” rather than a primary diagnostic tool. However, the high false-positive rate of standalone AI requires substantial algorithm refinement. The optimal implementation strategy involves using AI as an auxiliary tool within radiologist workflows rather than as an autonomous diagnostic system, with potential for delayed verification protocols to maximize diagnostic sensitivity while managing the false-positive burden. Full article
(This article belongs to the Special Issue Neurocritical Care: Clinical Advances and Practice Updates)
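
The paired comparison described above (McNemar's test plus Cohen's h) can be sketched as follows; the 2×2 table uses the complementary miss counts reported in the abstract (45 and 12) for the discordant cells, while the concordant diagonal counts are placeholders, since McNemar's statistic depends only on the discordant pairs. This is an illustrative sketch, not the study's analysis code.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2x2 table over the same studies; only the off-diagonal (discordant) cells drive the test.
table = np.array([[3000, 45],   # radiologist correct:   [AI correct (placeholder), AI incorrect]
                  [  12, 50]])  # radiologist incorrect: [AI correct, AI incorrect (placeholder)]
result = mcnemar(table, exact=False, correction=True)
print(f"McNemar chi2 = {result.statistic:.2f}, p = {result.pvalue:.3g}")

def cohens_h(p1: float, p2: float) -> float:
    """Effect size for the difference between two proportions."""
    return 2 * (np.arcsin(np.sqrt(p1)) - np.arcsin(np.sqrt(p2)))

# Sensitivities reported in the abstract: 98.91% (AI-assisted radiologists) vs. 95.91% (standalone AI).
print(f"Cohen's h for sensitivity = {cohens_h(0.9891, 0.9591):.3f}")
```
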

13 pages, 1609 KiB  
Article
A Decision-Making Method for Photon/Proton Selection for Nasopharyngeal Cancer Based on Dose Prediction and NTCP
by Guiyuan Li, Xinyuan Chen, Jialin Ding, Linyi Shen, Mengyang Li, Junlin Yi and Jianrong Dai
Cancers 2025, 17(16), 2620; https://doi.org/10.3390/cancers17162620 - 11 Aug 2025
Abstract
Introduction: Decision-making regarding radiotherapy techniques for patients with nasopharyngeal cancer requires a comparison of photon and proton plans generated using planning software, which demands time and expertise. We developed a fully automated decision tool to select patients for proton therapy that predicts proton and photon dose distributions using only patient CT image data, predicts xerostomia and dysphagia probabilities from the predicted mean doses to critical organs, and makes decisions based on the Netherlands’ National Indication Protocol Proton therapy (NIPP) to select patients likely to benefit from proton therapy. Methods: This study used 48 nasopharyngeal cancer patients treated at the Cancer Hospital of the Chinese Academy of Medical Sciences. We manually generated a photon plan and a proton plan for each patient. Based on these dose distributions, photon and proton dose prediction models were trained using deep learning (DL). We used the NIPP models to estimate the probabilities of grade 2 and 3 xerostomia and grade 2 and 3 dysphagia, and decisions were made according to the thresholds given by this protocol. Results: The predicted doses for both the photon and proton groups were comparable to those of the manual plans (MP). The Mean Absolute Error (MAE) for each organ at risk in the photon and proton plans did not exceed 5%, demonstrating good performance of the dose prediction models. For proton plans, the predicted normal tissue complication probabilities (NTCP) of xerostomia and dysphagia agreed well with those of the manual plans (p > 0.05; no statistically significant difference). For photon plans, the NTCP of dysphagia likewise agreed well (p > 0.05); for xerostomia, p < 0.05, but the absolute deviations were only 0.85% and 0.75%, which would not materially affect the prediction result. Among the 48 patient decisions, 3 were incorrect, for an accuracy of 93.8%. The area under the receiver operating characteristic (ROC) curve (AUC) was 0.86, indicating good performance of the decision-making tool. Conclusions: The decision tool based on DL and NTCP models can accurately select nasopharyngeal cancer patients who will benefit from proton therapy. It reduces the time spent generating comparison plans, improves clinicians’ efficiency, and can be shared with centers that do not have proton expertise. Trial registration: This study was a retrospective study, so it was exempt from registration. Full article
(This article belongs to the Special Issue Proton Therapy of Cancer Treatment)
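
A schematic sketch of the decision logic follows: logistic NTCP models driven by predicted organ mean doses, and a plan-comparison rule that recommends proton therapy when the modelled complication probability drops by more than a threshold. The coefficients, organ names, and the 10% threshold below are illustrative placeholders, not the NIPP parameters used in the study.

```python
import math

def ntcp_logistic(mean_dose_gy: float, b0: float, b1: float) -> float:
    """Logistic NTCP as a function of an organ-at-risk mean dose (illustrative model form)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * mean_dose_gy)))

def select_proton(photon_doses, proton_doses, models, delta_threshold=0.10):
    """Recommend proton therapy if, for any modelled complication, NTCP(photon) - NTCP(proton)
    meets or exceeds the threshold. Inputs are predicted organ mean doses per plan."""
    for organ, (b0, b1) in models.items():
        delta = (ntcp_logistic(photon_doses[organ], b0, b1)
                 - ntcp_logistic(proton_doses[organ], b0, b1))
        if delta >= delta_threshold:
            return True
    return False

# Placeholder coefficients and doses, for illustration only.
models = {"xerostomia_grade2": (-4.0, 0.10), "dysphagia_grade2": (-5.0, 0.12)}
photon = {"xerostomia_grade2": 35.0, "dysphagia_grade2": 45.0}   # mean doses in Gy
proton = {"xerostomia_grade2": 20.0, "dysphagia_grade2": 30.0}
print(select_proton(photon, proton, models))
```
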

33 pages, 1110 KiB  
Systematic Review
Efficacy of Nurse-Led and Multidisciplinary Self-Management Programmes for Heart Failure with Reduced Ejection Fraction: An Umbrella Systematic Review
by Pupalan Iyngkaran, Taksh Patel, Diana Asadi, Iqra Siddique, Bhawna Gupta, Maximilian de Courten and Fahad Hanna
Biomedicines 2025, 13(8), 1955; https://doi.org/10.3390/biomedicines13081955 - 11 Aug 2025
Abstract
Background: Chronic disease self-management (CDSM) programmes are widely recommended for heart failure with reduced ejection fraction (HFrEF), yet evidence on their effectiveness remains mixed. This systematic review synthesises the evidence and critically appraises the findings from multiple systematic reviews on CDSM for congestive heart failure (CHF) with a focus on the impact of nurse-led and multidisciplinary CDSM interventions in adults with HFrEF. Design: Systematic review using PRISMA 2020 and AMSTAR-2 guidelines. Data Sources and Eligibility: We searched MEDLINE, Embase, CINAHL, Cochrane Library, and other sources for reviews published from 2012 to 2024. Included were systematic reviews of CDSM interventions for adults diagnosed with HFrEF, focusing on mortality, hospital readmissions, quality of life, and self-management behaviours. Results: A total of 1050 studies were screened, with 60 studies being counted in the final analysis, including 22 reviews of high quality. Evidence for mortality benefit was limited and inconsistent across reviews. However, moderate-to-high-certainty evidence showed that nurse-led CDSM interventions improved hospital readmission rates and health-related quality of life (HRQoL). Improvements in self-management behaviours such as medication adherence and symptom monitoring were also frequently reported. Conclusions: While evidence for a mortality benefit remains inconclusive, this review highlights consistent benefits of nurse-led CDSM interventions in reducing readmissions and improving HRQoL for HFrEF patients. Future research should prioritise standardised outcome reporting, incorporate economic evaluations, and explore patient-centred and culturally tailored approaches to intervention design. PROSPERO registration number CRD42023431539. Full article
(This article belongs to the Special Issue Heart Failure: New Diagnostic and Therapeutic Approaches)

29 pages, 1827 KiB  
Article
One-Step Enhancement Method for Data Registration Based on the Lidargrammetric Approach
by Antoni Rzonca and Mariusz Twardowski
Remote Sens. 2025, 17(16), 2774; https://doi.org/10.3390/rs17162774 - 11 Aug 2025
Abstract
The present paper introduces a novel methodology for LiDAR point transformation and adjustment, grounded in two primary concepts. First, LiDAR data are mapped onto synthetic images, known as lidargrams, using the exterior orientation parameters (EOPs) of a virtual camera. Second, unique lidargram point identifiers (ULPIs) are assigned to each LiDAR point, preserving the relationship between specific LiDAR points and their corresponding lidargram projections and allowing ground points to be reconstructed from their projections. Together, these concepts enable the alignment and adjustment of blocks of lidargrams and thus the estimation of new EOPs. The refined EOPs replace the arbitrary initial EOPs and, through the ULPIs, allow the transformed point cloud to be reconstructed by spatial intersection, so the LiDAR data undergo a three-dimensional transformation using photogrammetric algorithms, in accordance with the fundamental principles of lidargrammetry. The accuracy of the new approach and its implementation in a research tool were verified on a range of data types, encompassing synthetic, semisynthetic, and real data, allowing the authors to assess its effectiveness and reliability in different scenarios. The method’s flexibility is evidenced by its ability to reduce the final 3D root mean square error of discrepancies measured at check points by a factor of 30 in synthetic data tests, 12 in semisynthetic data tests, and 96 in real data tests. These quantitative results provide substantial support for the validity of the presented methodology. The efficacy of the proposed method was also evaluated through a comparative analysis with widely used LiDAR processing software developed by TerraSolid Ltd. Full article
(This article belongs to the Section Engineering Remote Sensing)
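
A conceptual sketch of the lidargram idea follows: LiDAR points are projected into the image plane of a virtual camera, and each pixel records the identifier of the point that produced it, so refined image orientations can later be propagated back to the original 3D points. Function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def render_lidargram(points, K, R, t, width, height):
    """Return an index image whose pixels hold the ID of the nearest projected LiDAR point (-1 = empty)."""
    cam = points @ R.T + t                           # world -> virtual camera frame
    front = cam[:, 2] > 0
    uv = cam[front] @ K.T
    u = (uv[:, 0] / uv[:, 2]).round().astype(int)
    v = (uv[:, 1] / uv[:, 2]).round().astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    ids = np.flatnonzero(front)[inside]              # unique point identifiers (ULPI analogue)
    depth = np.full((height, width), np.inf)
    idx_img = np.full((height, width), -1, dtype=np.int64)
    for pid, uu, vv, zz in zip(ids, u[inside], v[inside], cam[front, 2][inside]):
        if zz < depth[vv, uu]:                       # simple z-buffer: keep the closest point per pixel
            depth[vv, uu] = zz
            idx_img[vv, uu] = pid
    return idx_img
```
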

27 pages, 3200 KiB  
Article
IoT-Enhanced Multi-Base Station Networks for Real-Time UAV Surveillance and Tracking
by Zhihua Chen, Tao Zhang and Tao Hong
Drones 2025, 9(8), 558; https://doi.org/10.3390/drones9080558 - 8 Aug 2025
Abstract
The proliferation of small, agile unmanned aerial vehicles (UAVs) has exposed the limits of single-sensor surveillance in cluttered airspace. We propose an Internet of Things-enabled integrated sensing and communication (IoT-ISAC) framework that converts cellular base stations into cooperative, edge-intelligent sensing nodes. Within a four-layer design—terminal, edge, IoT platform, and cloud—stations exchange raw echoes and low-level features in real time, while adaptive beam registration and cross-correlation timing mitigate spatial and temporal misalignments. A hybrid processing pipeline first produces coarse data-level estimates and then applies symbol-level refinements, sustaining rapid response without sacrificing precision. Simulation evaluations using multi-band ISAC waveforms confirm high detection reliability, sub-frame latency, and energy-aware operation in dense urban clutter, adverse weather, and multi-target scenarios. Preliminary hardware tests validate the feasibility of the proposed signal processing approach. Simulation analysis demonstrates detection accuracy of 85–90% under optimal conditions with processing latency of 15–25 ms and potential energy efficiency improvement of 10–20% through cooperative operation, pending real-world validation. By extending coverage, suppressing blind zones, and supporting dynamic surveillance of fast-moving UAVs, the proposed system provides a scalable path toward smart city air safety networks, cooperative autonomous navigation aids, and other remote-sensing applications that require agile, coordinated situational awareness. Full article
(This article belongs to the Section Drone Communications)
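
For the cross-correlation timing alignment mentioned in the abstract, a minimal sketch is shown below: the relative sample offset between echo streams recorded at two base stations is taken from the peak of their cross-correlation. The synthetic pulse, sample rate, and function name are assumptions for illustration.

```python
import numpy as np

def estimate_offset(sig_a: np.ndarray, sig_b: np.ndarray, fs_hz: float) -> float:
    """Return how much sig_a lags sig_b, in seconds (positive = sig_a arrives later)."""
    corr = np.correlate(sig_a - sig_a.mean(), sig_b - sig_b.mean(), mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)   # lag in samples
    return lag / fs_hz

fs = 1e6                                            # 1 MS/s, illustrative
t = np.arange(4096) / fs
echo_b = np.exp(-((t - 1e-3) / 5e-5) ** 2)          # synthetic echo envelope at station B
echo_a = np.roll(echo_b, 37)                        # station A records the same echo 37 samples later
print(estimate_offset(echo_a, echo_b, fs) * 1e6, "microseconds")   # ~37.0
```
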

30 pages, 6195 KiB  
Article
Digital Inspection Technology for Sheet Metal Parts Using 3D Point Clouds
by Jian Guo, Dingzhong Tan, Shizhe Guo, Zheng Chen and Rang Liu
Sensors 2025, 25(15), 4827; https://doi.org/10.3390/s25154827 - 6 Aug 2025
Abstract
To solve the low efficiency of traditional sheet metal measurement, this paper proposes a digital inspection method for sheet metal parts based on 3D point clouds. The 3D point cloud data of sheet metal parts are collected using a 3D laser scanner, and the topological relationship is established by using a K-dimensional tree (KD tree). The pass-through filtering method is adopted to denoise the point cloud data. To preserve the fine features of the parts, an improved voxel grid method is proposed for the downsampling of the point cloud data. Feature points are extracted via the intrinsic shape signatures (ISS) algorithm and described using the fast point feature histograms (FPFH) algorithm. After rough registration with the sample consensus initial alignment (SAC-IA) algorithm, an initial position is provided for fine registration. The improved iterative closest point (ICP) algorithm, used for fine registration, can enhance the registration accuracy and efficiency. The greedy projection triangulation algorithm optimized by moving least squares (MLS) smoothing ensures surface smoothness and geometric accuracy. The reconstructed 3D model is projected onto a 2D plane, and the actual dimensions of the parts are calculated based on the pixel values of the sheet metal parts and the conversion scale. Experimental results show that the measurement error of this inspection system for three sheet metal workpieces ranges from 0.1416 mm to 0.2684 mm, meeting the accuracy requirement of ±0.3 mm. This method provides a reliable digital inspection solution for sheet metal parts. Full article
(This article belongs to the Section Industrial Sensors)
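
The coarse-to-fine registration stage described above maps closely onto the open-source Open3D library; the sketch below (illustrative file names and parameters, assuming a recent Open3D release, not the authors' implementation) chains voxel downsampling, FPFH descriptors, RANSAC feature matching in the spirit of SAC-IA, and point-to-plane ICP refinement.

```python
import open3d as o3d

voxel = 2.0  # mm, illustrative
scan = o3d.io.read_point_cloud("scan.ply").voxel_down_sample(voxel)
cad  = o3d.io.read_point_cloud("cad_model.ply").voxel_down_sample(voxel)

for pcd in (scan, cad):  # normals are needed for point-to-plane ICP
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))

def fpfh(pcd):
    return o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))

# Global (coarse) alignment from feature correspondences, analogous to SAC-IA.
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    scan, cad, fpfh(scan), fpfh(cad), mutual_filter=True,
    max_correspondence_distance=1.5 * voxel,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3, checkers=[],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine alignment with ICP, initialized by the coarse transform.
fine = o3d.pipelines.registration.registration_icp(
    scan, cad, max_correspondence_distance=voxel, init=coarse.transformation,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(fine.transformation)
```
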

23 pages, 3055 KiB  
Article
A Markerless Approach for Full-Body Biomechanics of Horses
by Sarah K. Shaffer, Omar Medjaouri, Brian Swenson, Travis Eliason and Daniel P. Nicolella
Animals 2025, 15(15), 2281; https://doi.org/10.3390/ani15152281 - 5 Aug 2025
Abstract
The ability to quantify equine kinematics is essential for clinical evaluation, research, and performance feedback. However, current methods are challenging to implement. This study presents a motion capture methodology for horses, where three-dimensional, full-body kinematics are calculated without instrumentation on the animal, offering a more scalable and labor-efficient approach when compared with traditional techniques. Kinematic trajectories are calculated from multi-camera video data. First, a neural network identifies skeletal landmarks (markers) in each camera view and the 3D location of each marker is triangulated. An equine biomechanics model is scaled to match the subject’s shape, using segment lengths defined by markers. Finally, inverse kinematics (IK) produces full kinematic trajectories. We test this methodology on a horse at three gaits. Multiple neural networks (NNs), trained on different equine datasets, were evaluated. All networks predicted over 78% of the markers within 25% of the length of the radius bone on test data. Root-mean-square-error (RMSE) between joint angles predicted via IK using ground truth marker-based motion capture data and network-predicted data was less than 10 degrees for 25 to 32 of 35 degrees of freedom, depending on the gait and data used for network training. NNs trained over a larger variety of data improved joint angle RMSE and curve similarity. Marker prediction error, the average distance between ground truth and predicted marker locations, and IK marker error, the distance between experimental and model markers, were used to assess network, scaling, and registration errors. The results demonstrate the potential of markerless motion capture for full-body equine kinematic analysis. Full article
(This article belongs to the Special Issue Advances in Equine Sports Medicine, Therapy and Rehabilitation)
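
The triangulation step, in which a landmark detected in several calibrated camera views is lifted to a 3D marker position, can be sketched with a linear (DLT) least-squares solve; the projection matrices and pixel observations below are placeholders, not the authors' code.

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation of one landmark seen in several views.

    proj_mats : list of (3, 4) camera projection matrices P = K [R | t]
    pixels    : list of (u, v) observations of the same landmark, one per camera
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])          # two linear constraints per view
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]                       # homogeneous -> Euclidean

# Tiny self-check with two synthetic cameras observing the point (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # translated second camera
X_true = np.array([1.0, 2.0, 10.0, 1.0])
px = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], px))              # ~[1. 2. 10.]
```
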

16 pages, 838 KiB  
Article
A Scintillation Hodoscope for Measuring the Flux of Cosmic Ray Muons at the Tien Shan High Mountain Station
by Alexander Shepetov, Aliya Baktoraz, Orazaly Kalikulov, Svetlana Mamina, Yerzhan Mukhamejanov, Kanat Mukashev, Vladimir Ryabov, Nurzhan Saduyev, Turlan Sadykov, Saken Shinbulatov, Tairzhan Skokbayev, Ivan Sopko, Shynbolat Utey, Ludmila Vildanova, Nurzhan Yerezhep and Valery Zhukov
Particles 2025, 8(3), 73; https://doi.org/10.3390/particles8030073 - 4 Aug 2025
Abstract
For further investigation of the properties of the muon component in the core regions of extensive air showers (EASs), a new underground hodoscopic set-up with a total sensitive area of 22 m² was built at the Tien Shan High Mountain Cosmic Ray Station. The hodoscope is based on a set of large-sized scintillation charged particle detectors with an analog output signal. The installation ensures a (5–8) GeV energy threshold for muon registration and a ∼10⁴ dynamic range for measuring the density of the muon flux. A software facility was designed that uses modern machine learning techniques to automatically search for the typical scintillation pulse pattern in the oscillogram of a noisy analog signal at the output of the hodoscope detector. The program provides a ∼99% detection probability for useful signals, with a relative share of false positives below 1%, and operates fast enough for real-time analysis of incoming data. Complete verification of the hardware and software tools was performed under realistic operating conditions, and the results obtained demonstrate the correctness of the proposed method and its practical applicability to the investigation of the muon flux in EASs. In the course of the installation testing, a preliminary physical result was obtained concerning the increase in muon multiplicity around an EAS core as a function of the primary EAS energy. Full article
(This article belongs to the Section Experimental Physics and Instrumentation)
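
The paper's pulse search uses a machine-learning detector; as a conceptual baseline for the same task, the sketch below finds scintillation-like pulses in a noisy trace with a classical matched filter and threshold peak search (SciPy). The pulse template, noise level, and threshold are illustrative assumptions, not the authors' detector.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_pulses(trace: np.ndarray, template: np.ndarray, threshold: float):
    """Correlate a noisy trace with a pulse template and return candidate pulse positions."""
    score = np.correlate(trace - trace.mean(), template, mode="same")
    peaks, _ = find_peaks(score, height=threshold, distance=len(template))
    return peaks

x = np.arange(40)
template = np.exp(-x / 8.0) * (1 - np.exp(-x / 2.0))   # fast-rise, slow-decay pulse shape
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.05, 2000)                      # baseline noise
trace[500:540] += template                             # injected pulse
trace[1300:1340] += 0.7 * template                     # smaller injected pulse
print(detect_pulses(trace, template, threshold=1.0))   # positions near the injected pulses
```
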

16 pages, 13514 KiB  
Article
Development of a High-Speed Time-Synchronized Crop Phenotyping System Based on Precision Time Protocol
by Runze Song, Haoyu Liu, Yueyang Hu, Man Zhang and Wenyi Sheng
Appl. Sci. 2025, 15(15), 8612; https://doi.org/10.3390/app15158612 - 4 Aug 2025
Abstract
To address the asynchronous acquisition times of multiple sensors in crop phenotyping systems and the high cost of acquisition equipment, this paper developed a low-cost crop phenotype acquisition system based on the Precision Time Protocol (PTP), enabling synchronous acquisition of three types of crop data: visible light images, thermal infrared images, and laser point clouds. The paper proposes the Difference Structural Similarity Index Measure (DSSIM), combined with statistical indicators (average point number difference, average coordinate error), a distribution characteristic indicator (Chamfer distance), and the Hausdorff distance to characterize the stability of the system. After 72 consecutive hours of synchronization testing on the timing boards, it was verified that the root mean square error of the synchronization time for each timing board reached the nanosecond level. The synchronous trigger acquisition time for crop parameters under time synchronization was controlled at the microsecond level. Using pepper as the crop sample, 133 consecutive acquisitions were conducted. The acquisition success rate for the three phenotypic data types of pepper samples was 100%, with a DSSIM of approximately 0.96. The average point number difference and average coordinate error were both about 3%, while the Chamfer distance and Hausdorff distance were only 1.14 mm and 5 mm. This system can provide hardware support for multi-parameter acquisition and data registration in fast mobile crop phenotyping platforms, laying a reliable data foundation for crop growth monitoring, intelligent yield analysis, and prediction. Full article
(This article belongs to the Special Issue Smart Farming: Internet of Things (IoT)-Based Sustainable Agriculture)
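
Two of the point-cloud stability metrics named above, the Chamfer distance and the Hausdorff distance between repeated acquisitions, can be sketched generically with SciPy k-d trees; this is a generic sketch, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric mean nearest-neighbour distance between point sets a and b, each of shape (N, 3)."""
    d_ab, _ = cKDTree(b).query(a)
    d_ba, _ = cKDTree(a).query(b)
    return d_ab.mean() + d_ba.mean()

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance: worst-case nearest-neighbour distance between the sets."""
    d_ab, _ = cKDTree(b).query(a)
    d_ba, _ = cKDTree(a).query(b)
    return max(d_ab.max(), d_ba.max())

# Small synthetic example: two nearly identical acquisitions of the same scene.
rng = np.random.default_rng(0)
cloud_a = rng.normal(size=(1000, 3))
cloud_b = cloud_a + rng.normal(scale=0.01, size=cloud_a.shape)
print(chamfer_distance(cloud_a, cloud_b), hausdorff_distance(cloud_a, cloud_b))
```
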
