Search Results (340)

Search Parameters:
Keywords = hand-held imager

14 pages, 1227 KiB  
Article
Reliability and Inter-Device Agreement Between a Portable Handheld Ultrasound Scanner and a Conventional Ultrasound System for Assessing the Thickness of the Rectus Femoris and Vastus Intermedius
by Carlante Emerson, Hyun K. Kim, Brian A. Irving and Efthymios Papadopoulos
J. Funct. Morphol. Kinesiol. 2025, 10(3), 299; https://doi.org/10.3390/jfmk10030299 - 1 Aug 2025
Viewed by 78
Abstract
Background: Ultrasound (U/S) can be used to evaluate skeletal muscle characteristics in clinical and sports settings. Handheld U/S devices have recently emerged as a cheaper, more portable alternative to conventional U/S systems, but further research on their reliability is warranted. We assessed the reliability and inter-device agreement between a handheld U/S device (Clarius L15 HD3) and a more conventional U/S system (GE LOGIQ e) for measuring the thickness of the rectus femoris (RF) and vastus intermedius (VI). Methods: Cross-sectional images of the RF and VI muscles were obtained in 20 participants by two assessors, and on two separate occasions by one of those assessors, using the Clarius L15 HD3 and GE LOGIQ e devices. RF and VI thickness measurements were obtained to determine the intra-rater reliability, inter-rater reliability, and inter-device agreement. Results: All intraclass correlation coefficients (ICCs) were above 0.9 for intra-rater reliability (range: 0.94 to 0.97), inter-rater reliability (ICC: 0.97), and inter-device agreement (ICC: 0.98) when comparing the two devices in assessing RF and VI thickness. For the RF, the Bland–Altman plot revealed a mean difference of 0.06 ± 0.07 cm, with limits of agreement ranging from 0.21 to −0.09 cm, whereas for the VI, the Bland–Altman plot showed a mean difference of 0.07 ± 0.10 cm, with limits of agreement ranging from 0.27 to −0.13 cm. Conclusions: The handheld Clarius L15 HD3 was reliable and demonstrated high agreement with the more conventional GE LOGIQ e for assessing the thickness of the RF and VI in young, healthy adults.
(This article belongs to the Section Kinesiology and Biomechanics)
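For readers who want to reproduce this kind of agreement analysis, here is a minimal Bland–Altman sketch in Python; the thickness arrays are illustrative placeholders, not the study's data:

```python
import numpy as np

# Hypothetical paired muscle-thickness measurements (cm) from two devices.
handheld = np.array([1.92, 2.10, 1.75, 2.31, 1.88, 2.05])
cart_based = np.array([1.85, 2.05, 1.70, 2.20, 1.80, 2.01])

diff = handheld - cart_based
bias = diff.mean()                            # mean difference (systematic bias)
sd = diff.std(ddof=1)                         # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement

print(f"bias = {bias:.2f} cm, LoA = [{loa[0]:.2f}, {loa[1]:.2f}] cm")
```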

29 pages, 10358 KiB  
Article
Smartphone-Based Sensing System for Identifying Artificially Marbled Beef Using Texture and Color Analysis to Enhance Food Safety
by Hong-Dar Lin, Yi-Ting Hsieh and Chou-Hsien Lin
Sensors 2025, 25(14), 4440; https://doi.org/10.3390/s25144440 - 16 Jul 2025
Viewed by 295
Abstract
Beef fat injection technology, used to enhance the perceived quality of lower-grade meat, often results in artificially marbled beef that mimics the visual traits of Wagyu, characterized by dense fat distribution. This practice, driven by the high cost of Wagyu and the affordability of fat-injected beef, has led to the proliferation of mislabeled “Wagyu-grade” products sold at premium prices, posing potential food safety risks such as allergen exposure or consumption of unverified additives, which can adversely affect consumer health. To address this, this study introduces a smart sensing system integrated with handheld mobile devices, enabling consumers to capture beef images during purchase for real-time health-focused assessment. The system analyzes surface texture and color, transmitting data to a server for classification to determine whether the beef is artificially marbled, thus supporting informed dietary choices and reducing health risks. Images are processed by applying a region of interest (ROI) mask to remove background noise, followed by partitioning into grid blocks. Local binary pattern (LBP) texture features and RGB color features are extracted from these blocks to characterize the surface properties of three beef types (Wagyu, regular, and fat-injected). A support vector machine (SVM) model classifies the blocks, with the final image classification determined via majority voting. Experimental results reveal that the system achieves a recall rate of 95.00% for fat-injected beef, a misjudgment rate of 1.67% for non-fat-injected beef, a correct classification rate (CR) of 93.89%, and an F1-score of 95.80%, demonstrating its potential as a human-centered healthcare tool for ensuring food safety and transparency.
(This article belongs to the Section Physical Sensors)
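A compact sketch of the block-wise LBP-plus-color classification with majority voting that the abstract describes, using scikit-image and scikit-learn; the grid size, LBP parameters, and synthetic training images are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbors and radius (placeholder settings)

def block_features(rgb, grid=4):
    """Split an RGB image into grid x grid blocks; for each block, return
    a uniform-LBP histogram concatenated with the mean RGB color."""
    gray = (rgb.mean(axis=2) * 255).astype(np.uint8)
    h, w = gray.shape
    bh, bw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            g = gray[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            c = rgb[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            lbp = local_binary_pattern(g, P, R, method="uniform")
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            feats.append(np.concatenate([hist, c.reshape(-1, 3).mean(axis=0)]))
    return np.array(feats)

# Toy training set: blocks from synthetic "images" of two classes.
rng = np.random.default_rng(0)
X = np.vstack([block_features(rng.random((64, 64, 3))) for _ in range(6)])
y = np.repeat([0, 1], len(X) // 2)  # 0 = regular, 1 = fat-injected (toy labels)
clf = SVC().fit(X, y)

# Classify a new image block by block, then take a majority vote.
votes = clf.predict(block_features(rng.random((64, 64, 3))))
print("image-level label:", np.bincount(votes).argmax())
```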

12 pages, 1687 KiB  
Article
AI-Assisted LVEF Assessment Using a Handheld Ultrasound Device: A Single-Center Comparative Study Against Cardiac Magnetic Resonance Imaging
by Giovanni Bisignani, Lorenzo Volpe, Andrea Madeo, Riccardo Vico, Davide Bencardino and Silvana De Bonis
J. Clin. Med. 2025, 14(13), 4708; https://doi.org/10.3390/jcm14134708 - 3 Jul 2025
Viewed by 454
Abstract
Background/Objectives: Two-dimensional echocardiography (2D echo) is widely used for assessing left ventricular ejection fraction (LVEF). This single-center comparative study aims to evaluate the accuracy of LVEF measurements obtained using the AI-assisted handheld ultrasound device Kosmos against cardiac magnetic resonance (CMR), the current gold standard. Methods: A total of 49 adult patients undergoing clinically indicated CMR were prospectively enrolled. AI-based LVEF measurements were compared with CMR using the Wilcoxon signed-rank test, Pearson correlation, multivariable linear regression, and Bland–Altman analysis. All analyses were performed using STATA v18.0. Results: Median LVEF was 57% (CMR) vs. 55% (AI-Echo), with no significant difference (p = 0.51). Strong correlation (r = 0.99) and minimal bias (1.1%) were observed. Conclusions: The Kosmos AI-based autoEF algorithm demonstrated excellent agreement with CMR-derived LVEF values. Its speed and automation make it promising for bedside assessment in emergency departments, intensive care units, and outpatient clinics. This study fills a gap in current clinical evidence by evaluating, for the first time, the agreement between LVEF measurements obtained via Kosmos' AI-assisted autoEF and those from CMR, the gold standard for ventricular function assessment. This comparison is critical for validating the reliability of portable AI-driven echocardiographic tools in real-world clinical practice. However, these findings derive from a selected population at a single Italian center and should be validated in larger, diverse cohorts before assuming global generalizability.
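A short sketch of the paired statistics named in the Methods (Wilcoxon signed-rank test, Pearson correlation, and mean bias), using SciPy on placeholder LVEF values rather than the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon, pearsonr

# Hypothetical paired LVEF (%) readings for the same patients.
cmr = np.array([57.0, 49.0, 63.0, 41.0, 58.0, 60.0, 52.0])
ai_echo = np.array([55.0, 50.0, 61.0, 43.0, 57.0, 58.0, 53.0])

stat, p = wilcoxon(cmr, ai_echo)   # paired non-parametric comparison
r, _ = pearsonr(cmr, ai_echo)      # linear correlation between methods
bias = np.mean(ai_echo - cmr)      # mean difference (Bland-Altman bias)

print(f"Wilcoxon p = {p:.2f}, Pearson r = {r:.2f}, bias = {bias:.1f}%")
```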

24 pages, 9205 KiB  
Article
Estimation of Canopy Chlorophyll Content of Apple Trees Based on UAV Multispectral Remote Sensing Images
by Juxia Wang, Yu Zhang, Fei Han, Zhenpeng Shi, Fu Zhao, Fengzi Zhang, Weizheng Pan, Zhiyong Zhang and Qingliang Cui
Agriculture 2025, 15(12), 1308; https://doi.org/10.3390/agriculture15121308 - 18 Jun 2025
Cited by 1 | Viewed by 475
Abstract
The chlorophyll content is an important index reflecting the growth status and nutritional level of plants. Rapid, accurate and nondestructive monitoring of the SPAD values of apple trees can provide a basis for large-scale monitoring and scientific management of their growth status. In this study, the canopy leaves of apple trees at different growth stages in the same year were taken as the research object, and remote sensing images of fruit trees in different growth stages (flower-falling stage, fruit-setting stage, fruit expansion stage, fruit-coloring stage and fruit-maturing stage) were acquired via a DJI MAVIC 3 multispectral unmanned aerial vehicle (UAV). The spectral reflectance was extracted to calculate 15 common vegetation indices as eigenvalues, the 5 indices most strongly correlated with SPAD were screened out through Pearson correlation analysis as the feature combination, and the measured SPAD values of the fruit tree leaves were obtained with a handheld chlorophyll meter in the same stages. Estimation models for the SPAD values in the different growth stages were established using five machine learning algorithms: multiple linear regression (MLR), partial least squares regression (PLSR), support vector regression (SVR), random forest (RF) and extreme gradient boosting (XGBoost). Model performance was assessed using the coefficient of determination (R2), root mean square error (RMSE) and mean absolute error (MAE). The results show that the SPAD estimation results vary from stage to stage: the best estimation model for the flower-falling, fruit-setting and fruit-maturing stages is RF, while those for the fruit expansion and fruit-coloring stages are PLSR and MLR, respectively. Among the growth stages, model accuracy is highest for the fruit expansion stage, with R2 = 0.787, RMSE = 0.87 and MAE = 0.644. The RF model, which outperforms the other models across multiple growth stages, can effectively predict the SPAD value of apple tree leaves and provide a reference for growth status monitoring and precise orchard management.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
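A minimal sketch of the screening-and-regression workflow described above (Pearson screening of vegetation indices, then a Random Forest regressor scored by R2, RMSE and MAE), with synthetic stand-ins for the indices and SPAD values:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((120, 15))                  # toy values for 15 vegetation indices
spad = X[:, :5] @ rng.random(5) * 40 + 10  # synthetic SPAD target

# Screen the five indices most correlated with SPAD (absolute Pearson r).
r = np.array([np.corrcoef(X[:, j], spad)[0, 1] for j in range(X.shape[1])])
top5 = np.argsort(-np.abs(r))[:5]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top5], spad, random_state=0)
pred = RandomForestRegressor(random_state=0).fit(X_tr, y_tr).predict(X_te)

print(f"R2 = {r2_score(y_te, pred):.3f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.3f}, "
      f"MAE = {mean_absolute_error(y_te, pred):.3f}")
```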

12 pages, 4292 KiB  
Article
Machine Learning-Based Identification of Plastic Types Using Handheld Spectrometers
by Hedde van Hoorn, Fahimeh Pourmohammadi, Arie-Willem de Leeuw, Amey Vasulkar, Jerry de Vos and Steven van den Berg
Sensors 2025, 25(12), 3777; https://doi.org/10.3390/s25123777 - 17 Jun 2025
Viewed by 468
Abstract
Plastic waste and pollution are growing rapidly worldwide, and most plastics end up in landfill or are incinerated because high-quality recycling is not possible. Plastic-type identification with a low-cost, handheld spectral approach could help in parts of the world where high-end spectral imaging systems on conveyor belts cannot be implemented. Here, we investigate how two fundamentally different handheld infrared spectral devices can identify plastic types by benchmarking the same analysis against a high-resolution bench-top spectral approach. We used the handheld Plastic Scanner, which measures a discrete infrared spectrum using LED illumination at different wavelengths, and the SpectraPod, which has an integrated photonics chip with varying responsivity across different channels in the near-infrared. We employ machine learning using SVM, XGBoost, Random Forest and Gaussian Naïve Bayes models on a full dataset of plastic samples of PET, HDPE, PVC, LDPE, PP and PS, with samples of varying shape, color and opacity, as measured with three different experimental approaches. The high-resolution spectral approach obtains an accuracy (mean ± standard deviation) of 0.97 ± 0.01, whereas we obtain 0.93 ± 0.01 for the SpectraPod and 0.70 ± 0.03 for the Plastic Scanner. Differences of reflectance at subsequent wavelengths prove to be the most important features in the plastic-type classification model when using high-resolution spectroscopy, which is not possible with the other two devices. The lower accuracy of the handheld devices reflects their hardware limitations: the spectral range of the SpectraPod extends only to 1600 nm, while the Plastic Scanner has limited sensitivity to reflectance at wavelengths of 1100 and 1350 nm, where certain plastic types show characteristic absorbance bands. We suggest that combining selective sensitivity channels (as in the SpectraPod) with illumination of the sample by varying LEDs (as in the Plastic Scanner) could increase the accuracy of plastic-type identification with a handheld device.
(This article belongs to the Special Issue Advanced Optical Sensors Based on Machine Learning: 2nd Edition)
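A sketch of a classification benchmark in the spirit of the abstract: first differences of reflectance at subsequent wavelengths as features, several classifiers compared by cross-validated accuracy (mean ± standard deviation). The spectra and labels are synthetic, and XGBoost is left out to keep the example dependency-light:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(2)
spectra = rng.random((90, 200))      # toy reflectance spectra (90 samples)
labels = rng.integers(0, 6, 90)      # six plastic types (PET, HDPE, PVC, ...)

# Differences of reflectance at subsequent wavelengths as features.
X = np.diff(spectra, axis=1)

for name, model in [("SVM", SVC()),
                    ("Random Forest", RandomForestClassifier(random_state=0)),
                    ("Gaussian NB", GaussianNB())]:
    scores = cross_val_score(model, X, labels, cv=5)
    print(f"{name}: {scores.mean():.2f} ± {scores.std():.2f}")
```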

28 pages, 1707 KiB  
Review
Video Stabilization: A Comprehensive Survey from Classical Mechanics to Deep Learning Paradigms
by Qian Xu, Qian Huang, Chuanxu Jiang, Xin Li and Yiming Wang
Modelling 2025, 6(2), 49; https://doi.org/10.3390/modelling6020049 - 17 Jun 2025
Viewed by 953
Abstract
Video stabilization is a critical technology for enhancing video quality by eliminating or reducing image instability caused by camera shake, thereby improving the visual viewing experience. It is deeply integrated into diverse applications, including handheld recording, UAV aerial photography, and vehicle-mounted surveillance. Propelled by advances in deep learning, data-driven stabilization methods have emerged as prominent solutions, demonstrating superior efficacy in handling jitter while achieving enhanced processing efficiency. This review systematically examines the field of video stabilization. First, it delineates the paradigm shift from classical to deep learning-based approaches. It then elucidates conventional digital stabilization frameworks and their deep learning counterparts, and establishes standardized assessment metrics and benchmark datasets for comparative analysis. Finally, it addresses critical challenges such as robustness limitations in complex motion scenarios and latency constraints in real-time processing. By integrating interdisciplinary perspectives, this work provides scholars with academically rigorous and practically relevant insights to advance video stabilization research.

23 pages, 5424 KiB  
Article
Interactive Maintenance of Space Station Devices Using Scene Semantic Segmentation
by Haoting Liu, Chuanxin Liao, Xikang Li, Zhen Tian, Mengmeng Wang, Haiguang Li, Xiaofei Lu, Zhenhui Guo and Qing Li
Aerospace 2025, 12(6), 542; https://doi.org/10.3390/aerospace12060542 - 15 Jun 2025
Viewed by 334
Abstract
A novel interactive maintenance method for space station in-orbit devices using scene semantic segmentation technology is proposed. First, a wearable and handheld system is designed to capture images of the astronaut's forward-view scene in the space station and display them on a handheld terminal in real time. Second, the proposed system quantitatively evaluates the environmental lighting in the scene by calculating image quality evaluation parameters; if the lighting is inadequate, a prompt reminds the astronaut to adjust the ambient illumination. Third, the system adopts an improved DeepLabV3+ network for semantic segmentation of the forward-view scene images. In the improved network, the original backbone is replaced with MobileNetV2, a lightweight convolutional neural network with a smaller model scale and lower computational complexity. The convolutional block attention module (CBAM) is introduced to improve the network's feature perception ability, and the atrous spatial pyramid pooling (ASPP) module is retained to accurately encode multi-scale information. Extensive simulation experiments indicate that the accuracy, precision, and mean intersection over union of the proposed algorithm exceed 95.0%, 96.0%, and 89.0%, respectively. Ground application experiments have also shown that the proposed technique can effectively shorten the working time of the system user.
(This article belongs to the Section Astronautics & Space Science)
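The three reported figures of merit can all be read off a segmentation confusion matrix; a small NumPy sketch follows, with an invented 3-class matrix standing in for real evaluation output:

```python
import numpy as np

def seg_metrics(conf):
    """Pixel accuracy, per-class precision, and mean IoU from a
    confusion matrix (rows = ground truth, columns = prediction)."""
    tp = np.diag(conf).astype(float)
    accuracy = tp.sum() / conf.sum()
    precision = tp / conf.sum(axis=0)                      # per predicted class
    iou = tp / (conf.sum(axis=0) + conf.sum(axis=1) - tp)  # per-class IoU
    return accuracy, precision, iou.mean()

# Toy 3-class confusion matrix (pixel counts).
conf = np.array([[950, 30, 20],
                 [ 25, 900, 15],
                 [ 10, 20, 880]])
acc, prec, miou = seg_metrics(conf)
print(f"accuracy = {acc:.3f}, precision = {prec.round(3)}, mIoU = {miou:.3f}")
```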

15 pages, 5363 KiB  
Article
Compact and Handheld SiPM-Based Gamma Camera for Radio-Guided Surgery and Medical Imaging
by Fabio Acerbi, Aramis Raiola, Cyril Alispach, Hossein Arabi, Habib Zaidi, Alberto Gola and Domenico Della Volpe
Instruments 2025, 9(2), 14; https://doi.org/10.3390/instruments9020014 - 15 Jun 2025
Viewed by 596
Abstract
In the continuous pursuit of minimally invasive interventions that still ensure radical excision of lesions, Radio-Guided Surgery (RGS) has for years been the standard for image-guided surgery procedures such as Sentinel Lymph Node biopsy (SLN) and Radio-guided Seed Localization (RSL). In RGS, the lesion has to be identified precisely, in terms of position and extension. In this context, moving beyond current one-point probes to portable, high-resolution cameras that the surgeon can hold would be highly beneficial. We developed and tested a novel compact, low-power, handheld gamma camera for radio-guided surgery. It is based on a particular position-sensitive Silicon Photomultiplier (SiPM) technology, the FBK linearly graded SiPM (LG-SiPM). Within the camera, the photodetector is a 3 × 3 array of 10 × 10 mm² SiPM chips with a total area of more than 30 × 30 mm², coupled with a pixelated scintillator and a parallel-hole collimator. The LG-SiPM technology makes it possible to reduce the number of readout channels to just eight, simplifying the readout electronics and lowering their power consumption while still preserving good position resolution. The camera is lightweight and designed as a fully stand-alone system, featuring wireless communication, battery power, and wireless recharging. We designed, simulated (electrically), and tested (functionally) the first prototypes of the novel gamma camera. The intrinsic position resolution (tested with pulsed light) was characterized as ~200 µm; when detecting gamma rays from a Tc-99m source, the sensitivity measured between 134 and 481 cps/MBq and the spatial resolution was as good as 1.4–1.9 mm.
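As a rough intuition for how a linearly graded readout can localize events with so few channels, here is a toy 1-D charge-ratio decoder; this is an illustrative simplification under assumed end-channel charges, not FBK's actual LG-SiPM decoding:

```python
def lg_position(q_left, q_right):
    """Toy 1-D decoding: a linearly graded readout encodes event position
    in how the collected charge splits between the two end channels."""
    return (q_right - q_left) / (q_right + q_left)  # normalized to [-1, 1]

# Hypothetical end-channel charges for one scintillation event.
print(lg_position(q_left=0.8, q_right=1.2))  # positive: right of center
```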

15 pages, 1184 KiB  
Article
Video Laryngoscopes in Simulated Neonatal Intubation: Usability Study
by Jasmine Antoine, Kirsty McLeod, Luke Jardine, Helen G. Liley and Mia McLanders
Children 2025, 12(6), 723; https://doi.org/10.3390/children12060723 - 31 May 2025
Viewed by 414
Abstract
Background/Objectives: Neonatal intubation is a complex procedure, often associated with low first-pass success rates and a high incidence of complications. Video laryngoscopes provide several advantages, including higher success rates, especially for novice clinicians, a magnified airway view that can be shared with supervisors, and the ability to record still or video images for debriefing and education. However, video laryngoscope devices vary, raising the possibility of differences in usability. Methods: The study used mixed methodology, including observations, semi-structured interviews, think-aloud techniques, high-fidelity simulations, function tests, and questionnaires to assess usability, defined by clinician satisfaction, efficacy, and efficiency, of six video laryngoscope devices: (1) C-MAC® with Miller blade, (2) GlideScope® Core™ with Miller blade, (3) GlideScope® Core™ with hyperangle LoPro blade, (4) Koala® Vision Ultra with Miller blade, (5) Koala® Handheld with Miller blade, and (6) Parker Neonatal with Miller blade. Clinician satisfaction was determined by the System Usability Scale (SUS), National Aeronautics and Space Administration Task Load Index (NASA-TLX), and clinician preference. Device efficacy was determined by first-pass success, number of attempts, and overall success. Efficiency was assessed by time to successful intubation and function test completion rates. Results: Neonatal video laryngoscopes varied considerably in design, impacting usability. All devices were deemed suitable for neonatal intubation, with the Koala® Handheld, C-MAC®, and GlideScope® Core™ Miller demonstrating the highest usability. Conclusions: This simulation-based study highlights substantial variability in neonatal video laryngoscope usability, indicating the need for further research into usability in the clinical setting.
(This article belongs to the Special Issue New Insights in Neonatal Resuscitation)
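The SUS satisfaction metric mentioned above has a fixed scoring rule: odd items contribute (score − 1), even items contribute (5 − score), and the sum is scaled by 2.5 to a 0–100 range. A one-function sketch with example ratings:

```python
def sus_score(responses):
    """System Usability Scale: 10 Likert items rated 1-5.
    Odd items contribute (score - 1), even items (5 - score);
    the total is multiplied by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # example ratings -> 85.0
```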

15 pages, 4666 KiB  
Article
Fusion of Medium- and High-Resolution Remote Images for the Detection of Stress Levels Associated with Citrus Sooty Mould
by Enrique Moltó, Marcela Pereira-Sandoval, Héctor Izquierdo-Sanz and Sergio Morell-Monzó
Agronomy 2025, 15(6), 1342; https://doi.org/10.3390/agronomy15061342 - 30 May 2025
Viewed by 385
Abstract
Citrus sooty mould caused by Capnodium spp. alters the quality of fruits on the tree and affects their productivity. Past laboratory and hand-held spectrometry tests have concluded that sooty mould exhibits a characteristic spectral response in the near-infrared region. For this reason, this study aims to develop an automatic method for remote sensing of this disease, combining 10 m spatial resolution Sentinel-2 satellite images and 0.25 m spatial resolution orthophotos to identify sooty mould infestation levels in small orchards, which are common under Mediterranean conditions. Citrus orchards of the Comunitat Valenciana region (Spain) underwent field inspection in 2022 during the months of minimum (August) and maximum (October) infestation. The inspectors categorised their observations according to three levels of infestation at three representative positions in each orchard. Two synthetic images condensing the monthly information were generated for both periods. A filtering algorithm based on the high-resolution images was created to select informative pixels in the lower-resolution images. The data were used to evaluate the performance of a Random Forest classifier in predicting infestation levels through cross-validation. Combining the information from medium- and high-resolution images improved the overall accuracy from 0.75 to 0.80, with mean producer's accuracies above 0.65 and mean user's accuracies above 0.78. Bowley–Yule skewness coefficients were +0.50 for the overall accuracy and +0.28 for the kappa index.
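Producer's and user's accuracies are the remote-sensing names for per-class recall and precision, both computable from a confusion matrix; a brief NumPy sketch with an invented three-level matrix:

```python
import numpy as np

def producers_users_accuracy(conf):
    """Producer's accuracy (recall per reference class, rows) and user's
    accuracy (precision per predicted class, columns) from a confusion
    matrix with rows = reference data and columns = predictions."""
    tp = np.diag(conf).astype(float)
    producers = tp / conf.sum(axis=1)
    users = tp / conf.sum(axis=0)
    return producers, users

# Toy confusion matrix for three infestation levels (none / low / high).
conf = np.array([[40, 5, 1],
                 [ 6, 30, 4],
                 [ 1, 5, 28]])
p, u = producers_users_accuracy(conf)
print("producer's:", p.round(2), "user's:", u.round(2))
```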

19 pages, 3395 KiB  
Article
End-to-End Online Video Stitching and Stabilization Method Based on Unsupervised Deep Learning
by Pengyuan Wang, Pinle Qin, Rui Chai, Jianchao Zeng, Pengcheng Zhao, Zuojun Chen and Bingjie Han
Appl. Sci. 2025, 15(11), 5987; https://doi.org/10.3390/app15115987 - 26 May 2025
Viewed by 682
Abstract
The limited field of view, cumulative inter-frame jitter, and dynamic parallax interference in handheld video stitching often lead to misalignment and distortion. In this paper, we propose an end-to-end, unsupervised deep-learning framework that jointly performs real-time video stabilization and stitching. First, a collaborative optimization architecture allows the stabilization and stitching modules to share parameters and propagate errors through a fully differentiable network, ensuring consistent image alignment. Second, a Markov trajectory smoothing strategy in relative coordinates models inter-frame motion as incremental relationships, effectively reducing cumulative errors. Third, a dynamic attention mask generates spatiotemporal weight maps based on foreground motion prediction, suppressing misalignment caused by dynamic objects. Experimental evaluation on diverse handheld sequences shows that our method achieves higher stitching quality, lower geometric distortion rates, and improved video stability compared to state-of-the-art baselines, while maintaining real-time processing capabilities. Ablation studies validate that relative trajectory modeling substantially mitigates long-term jitter and that the dynamic attention mask enhances stitching accuracy in dynamic scenes. These results demonstrate that the proposed framework provides a robust solution for high-quality, real-time handheld video stitching.
(This article belongs to the Collection Trends and Prospects in Multimedia)

15 pages, 1903 KiB  
Article
Handheld Ground-Penetrating Radar Antenna Position Estimation Using Factor Graphs
by Paweł Słowak, Tomasz Kraszewski and Piotr Kaniewski
Sensors 2025, 25(11), 3275; https://doi.org/10.3390/s25113275 - 23 May 2025
Viewed by 447
Abstract
Accurate localization of handheld ground-penetrating radar (HH-GPR) systems is critical for high-quality subsurface imaging and precise geospatial mapping of detected buried objects. In our previous works, we demonstrated that a UWB positioning system with an extended Kalman filter (EKF) employing a proprietary pendulum (PND) dynamics model yielded highly accurate results. Building on that foundation, we present a factor-graph-based estimation algorithm to further enhance the accuracy of HH-GPR antenna trajectory estimation. The system was modeled under realistic conditions, and both the EKF and several factor-graph algorithms were implemented. Comparative evaluation indicates that the factor-graph approach improves localization accuracy by just over 30 to almost 50 percent compared to the EKF with the PND model. The sparse matrix representation inherent in the factor graph enabled an efficient iterative solution of the underlying linearized system. This enhanced positioning accuracy is expected to facilitate the generation of clearer, more distinct underground images, thereby supporting more reliable identification and classification of buried objects and infrastructure.
(This article belongs to the Special Issue Indoor Wi-Fi Positioning: Techniques and Systems—2nd Edition)
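After linearization, a trajectory factor graph of this kind reduces to a sparse least-squares problem; below is a toy 1-D smoother with motion and position factors, solved iteratively with SciPy's sparse LSQR. All values are synthetic, and this is a simplification of the general idea, not the authors' algorithm:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

# Toy 1-D trajectory smoothing as sparse least squares:
# motion factors x[k+1] - x[k] = u[k], position factors x[k] = z[k].
T = 50
u = np.full(T - 1, 0.1)                              # nominal step per epoch
rng = np.random.default_rng(3)
z = np.cumsum(np.r_[0, u]) + rng.normal(0, 0.05, T)  # noisy position fixes

rows = (T - 1) + T
A = lil_matrix((rows, T))
b = np.empty(rows)
for k in range(T - 1):              # motion factors
    A[k, k], A[k, k + 1] = -1.0, 1.0
    b[k] = u[k]
for k in range(T):                  # measurement factors
    A[T - 1 + k, k] = 1.0
    b[T - 1 + k] = z[k]

x = lsqr(A.tocsr(), b)[0]           # sparse iterative solve
print("smoothed first/last position:", x[0].round(3), x[-1].round(3))
```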

17 pages, 2467 KiB  
Article
Quantitative Ultrasound Texture Analysis of Breast Tumors: A Comparison of a Cart-Based and a Wireless Ultrasound Scanner
by David Alberico, Lakshmanan Sannachi, Maria Lourdes Anzola Pena, Joyce Yip, Laurentius O. Osapoetra, Schontal Halstead, Daniel DiCenzo, Sonal Gandhi, Frances Wright, Michael Oelze and Gregory J. Czarnota
J. Imaging 2025, 11(5), 146; https://doi.org/10.3390/jimaging11050146 - 6 May 2025
Viewed by 765
Abstract
Previous work has demonstrated quantitative ultrasound (QUS) analysis techniques for extracting QUS features and texture features from ultrasound radiofrequency data that can be used to distinguish between benign and malignant breast masses. It is desirable that there be good agreement between estimates of such features acquired using different ultrasound devices. Handheld ultrasound imaging systems are of particular interest as they are compact, relatively inexpensive, and highly portable. This study investigated the agreement between QUS parameters and texture features estimated from clinical ultrasound images of breast tumors acquired using two different ultrasound scanners: a traditional cart-based system and a wireless handheld ultrasound system. The 28 participating patients were divided into two groups (benign and malignant). The reference phantom technique was used to produce functional estimates of the normalized power spectra and backscatter coefficient for each image. Root mean square differences of feature estimates were calculated for each cohort to quantify the level of feature variation attributable to tissue heterogeneity and differences in system imaging parameters. Cross-system statistical testing using the Mann–Whitney U test was performed on the benign and malignant patient cohorts to assess the level of feature estimate agreement between systems, and the Bland–Altman method was employed to assess feature sets for systematic bias introduced by differences in imaging method. The range of p-values was 1.03 × 10⁻⁴ to 0.827 for the benign cohort and 3.03 × 10⁻¹⁰ to 0.958 for the malignant cohort. For both cohorts, all five of the primary QUS features (MBF, SS, SI, ASD, AAC) were found to be in agreement at the 5% significance level. A total of 13 of the 20 QUS texture features (65%) exhibited statistically significant differences in the sample medians of estimates between systems at the 5% significance level, with the remaining 7 texture features in agreement. The results showed a comparable magnitude of feature variation between tissue heterogeneity and system effects, as well as a moderate level of statistical agreement between feature sets.
(This article belongs to the Section Medical Imaging)
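A small sketch of the cross-system comparison: an RMS difference of paired feature estimates plus a Mann–Whitney U test via SciPy, on synthetic stand-ins for the two systems' estimates:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
feat_cart = rng.normal(0.0, 1.0, 28)                    # toy cart-based estimates
feat_handheld = feat_cart + rng.normal(0.05, 0.3, 28)   # toy handheld estimates

_, p = mannwhitneyu(feat_cart, feat_handheld)           # cross-system test
rmsd = np.sqrt(np.mean((feat_cart - feat_handheld) ** 2))

print(f"Mann-Whitney p = {p:.3f}, RMS difference = {rmsd:.3f}")
```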

18 pages, 9572 KiB  
Article
TGA-GS: Thermal Geometrically Accurate Gaussian Splatting
by Chen Zou, Qingsen Ma, Jia Wang, Rongfeng Lu, Ming Lu and Zhaowei Qu
Appl. Sci. 2025, 15(9), 4666; https://doi.org/10.3390/app15094666 - 23 Apr 2025
Viewed by 873
Abstract
Novel view synthesis and 3D reconstruction have been extensively studied. Three-dimensional Gaussian Splatting (3DGS) has gained popularity due to its rapid training and real-time rendering capabilities. However, RGB imaging is highly dependent on ideal illumination conditions. In low-light situations such as at night or in the presence of occlusions, RGB images often suffer from blurred contours or even complete failure in imaging, which severely restricts the application of 3DGS in such scenarios. Thermal imaging technology, on the other hand, serves as an effective complement. Thermal images are solely influenced by heat sources and are immune to illumination conditions. This unique property enables them to clearly identify the contour information of objects in low-light environments. Nevertheless, thermal images exhibit significant limitations in presenting texture details due to their sensitivity to temperature variations rather than surface texture features. To capitalize on the strengths of both, we propose thermal geometrically accurate Gaussian Splatting (TGA-GS), a novel Gaussian Splatting model. TGA-GS is designed to leverage RGB and thermal information to generate high-quality meshes in low-light conditions. Meanwhile, given low-resolution thermal images and low-light RGB images as inputs, our method can generate high-resolution thermal and RGB images from novel viewpoints. We also provide a real thermal imaging dataset captured with a handheld thermal infrared camera, which not only enriches the information content of the images but also provides a more reliable data basis for subsequent computer vision tasks in low-light scenarios.

14 pages, 2939 KiB  
Article
Innovative Discrete Multi-Wavelength Near-Infrared Spectroscopic (DMW-NIRS) Imaging for Rapid Breast Lesion Differentiation: Feasibility Study
by Jiyoung Yoon, Kyunghwa Han, Min Jung Kim, Heesun Hong, Eunice S. Han and Sung-Ho Han
Diagnostics 2025, 15(9), 1067; https://doi.org/10.3390/diagnostics15091067 - 23 Apr 2025
Viewed by 528
Abstract
Background/Objectives: This study evaluated the role of a discrete multi-wavelength near-infrared spectroscopic (DMW-NIRS) imaging device for rapid breast lesion differentiation. Methods: A total of 62 women (mean age, 49.9 years) with ultrasound (US)-guided biopsy-confirmed breast lesions (37 malignant, 25 benign) were included. A handheld probe equipped with five pairs of light-emitting diodes (LEDs) and photodiodes (PDs) measured lesion-to-normal tissue (L/N) ratios of four chromophores, THC (Total Hemoglobin Concentration), StO2, and the Tissue Optical Index (TOI: log10(THC × Water/Lipid)). Lesions were localized using US. Diagnostic performance was assessed for each L/N ratio, with subgroup analysis for BI-RADS 4A lesions. Two adaptive BI-RADS models were developed: Model 1 used TOI L/N thresholds (Youden index), while Model 2 incorporated radiologists' reassessments of US findings integrated with DMW-NIRS results. These models were compared to the initial BI-RADS assessments conducted by breast-dedicated radiologists. Results: All L/N ratios significantly differentiated malignant from benign lesions (p < 0.05), with the TOI L/N ratio achieving the highest AUC-ROC (0.901; 95% CI: 0.825–0.976). In BI-RADS 4A lesions, all L/N ratios except Lipid significantly differentiated malignancy (p < 0.05), with the TOI L/N ratio again achieving the highest AUC-ROC (0.902; 95% CI: 0.788–1.000). Model 1 and Model 2 showed superior diagnostic performance (AUC-ROCs: 0.962 and 0.922, respectively), significantly outperforming the initial BI-RADS assessments (prospective AUC-ROC: 0.862; retrospective AUC-ROC: 0.866; p < 0.05). Conclusions: Integrating DMW-NIRS findings with US evaluations enhances diagnostic accuracy, particularly for BI-RADS 4A lesions. This novel device offers a rapid, non-invasive, and efficient method to reduce unnecessary biopsies and improve breast cancer diagnostics. Further validation in larger cohorts is warranted.
(This article belongs to the Section Medical Imaging and Theranostics)
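The TOI definition quoted in the abstract is straightforward to compute; a short sketch that evaluates it and scores a lesion-to-normal TOI ratio as a classifier with scikit-learn's AUC-ROC, on synthetic benign/malignant values (all numbers invented):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def toi(thc, water, lipid):
    """Tissue Optical Index as defined in the abstract:
    TOI = log10(THC * Water / Lipid)."""
    return np.log10(thc * water / lipid)

print(toi(thc=80.0, water=0.6, lipid=0.3))  # example chromophore values

# Toy lesion-to-normal (L/N) TOI ratios: 25 benign (0) and 37 malignant (1).
rng = np.random.default_rng(5)
y = np.r_[np.zeros(25), np.ones(37)]
toi_ln = np.r_[rng.normal(1.0, 0.2, 25), rng.normal(1.6, 0.3, 37)]

print("AUC-ROC:", round(roc_auc_score(y, toi_ln), 3))
```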
