Search Results (133)

Search Parameters:
Keywords = camera miniaturization

24 pages, 4963 KB  
Article
A Hybrid Deep Learning and Optical Flow Framework for Monocular Capsule Endoscopy Localization
by İrem Yakar, Ramazan Alper Kuçak, Serdar Bilgi, Onur Ferhanoglu and Tahir Cetin Akinci
Electronics 2025, 14(18), 3722; https://doi.org/10.3390/electronics14183722 - 19 Sep 2025
Viewed by 556
Abstract
Pose estimation and localization within the gastrointestinal tract, particularly the small bowel, are crucial for invasive medical procedures. However, the task is challenging due to the complex anatomy, homogeneous textures, and limited distinguishable features. This study proposes a hybrid deep learning (DL) method combining Convolutional Neural Network (CNN)-based pose estimation and optical flow to address these challenges in a simulated small bowel environment. Initial pose estimation was used to assess the performance of simultaneous localization and mapping (SLAM) in such complex settings, using a custom endoscope prototype with a laser, micromotor, and miniaturized camera. The results showed limited feature detection and unreliable matches due to repetitive textures. To address this issue, a hybrid CNN-based approach enhanced with Farneback optical flow was applied. Using consecutive images, three models were compared: a hybrid ResNet-50 with Farneback optical flow, ResNet-50, and NASNetLarge pretrained on ImageNet. The analysis showed that the hybrid model achieved the lowest RMSE of 0.03 cm, outperforming both ResNet-50 (0.39 cm) and NASNetLarge (1.46 cm), while feature-based SLAM failed to provide reliable results. The hybrid model also achieved a competitive inference speed of 241.84 ms per frame, outperforming ResNet-50 (316.57 ms) and NASNetLarge (529.66 ms). To assess the impact of the optical flow choice, Lucas–Kanade was also implemented within the same framework and compared with the Farneback-based results. These results demonstrate that combining optical flow with ResNet-50 enhances pose estimation accuracy and stability, especially in textureless environments where traditional methods struggle. The proposed method offers a robust, real-time alternative to SLAM, with potential applications in clinical capsule endoscopy. The results are positioned as a proof of concept that highlights the feasibility and clinical potential of the proposed framework. Future work will extend the framework to real patient data and optimize it for real-time hardware.
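
As a rough illustration of the kind of hybrid pipeline described above, the sketch below computes dense Farneback optical flow between consecutive frames with OpenCV and feeds the flow field alongside the current frame to a ResNet-50 regression head. It is a minimal sketch of the general technique under stated assumptions, not the authors' implementation: the five-channel input layout and the 6-DoF pose output are guesses.

```python
# Minimal sketch: dense optical flow + CNN pose regression (not the paper's exact model).
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet50

class FlowPoseNet(nn.Module):
    """ResNet-50 adapted to a 5-channel input: RGB frame + 2-channel flow (assumed layout)."""
    def __init__(self):
        super().__init__()
        self.backbone = resnet50(weights="IMAGENET1K_V1")
        self.backbone.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 6)  # 6-DoF pose (assumed)

    def forward(self, x):
        return self.backbone(x)

def flow_pose_input(prev_bgr, curr_bgr):
    """Build the network input from two consecutive endoscope frames."""
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_g = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Dense Farneback optical flow: an HxWx2 array of per-pixel (dx, dy).
    flow = cv2.calcOpticalFlowFarneback(prev_g, curr_g, None, pyr_scale=0.5, levels=3,
                                        winsize=15, iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    rgb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    x = np.concatenate([rgb, flow], axis=2)                      # HxWx5
    return torch.from_numpy(x).permute(2, 0, 1)[None].float()    # 1x5xHxW
```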

19 pages, 16857 KB  
Article
Mechanical Response Mechanism and Acoustic Emission Evolution Characteristics of Deep Porous Sandstone
by Zihao Li, Guangming Zhao, Xin Xu, Chongyan Liu, Wensong Xu and Shunjie Huang
Infrastructures 2025, 10(9), 236; https://doi.org/10.3390/infrastructures10090236 - 9 Sep 2025
Viewed by 385
Abstract
To investigate the failure mechanisms of surrounding rock in deep mine tunnels and their spatio-temporal evolution patterns, a true triaxial disturbance unloading rock testing system, an acoustic emission (AE) system, and a miniature camera monitoring system were employed to conduct true triaxial graded loading tests on sandstone containing circular holes at burial depths of 800 m, 1000 m, 1200 m, 1400 m, and 1600 m. The study investigated the mechanical properties and failure characteristics of porous sandstone at different burial depths. The results showed that the peak strength of the specimens increased quadratically with burial depth. The failure process of porous sandstone could be divided into four stages: the calm period, the particle ejection period, the stable failure period, and the complete collapse period. As burial depth increases, the failure mode transitions from a composite tensile–shear crack type to a shear crack-dominated type, with the proportion of shear cracks growing quadratically and the proportion of tensile cracks decreasing correspondingly. The particle ejection stage is characterised by low-frequency, low-amplitude AE signals, corresponding to microcrack initiation, while the stable failure stage exhibits a sharp increase in low-frequency, high-amplitude signals, reflecting macrocrack propagation; the spatial evolution of their source locations ultimately forms a penetrating oblique shear failure zone. Peak stress analysis indicates that, as burial depth increases, the peak stress during the particle ejection phase first increases and then decreases, while the peak stress during the stable failure phase first decreases and then stabilises. The duration of the pre-instability calm phase shows a significant negative correlation with burial depth. These findings provide a theoretical basis for controlling tunnel rock mass stability and for disaster warning.
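
The reported quadratic dependence of peak strength on burial depth is the kind of relationship a least-squares fit recovers directly; the sketch below shows the generic procedure with NumPy. The strength values are hypothetical placeholders, not the paper's measurements.

```python
# Generic quadratic least-squares fit of peak strength vs. burial depth.
# The strength values are hypothetical placeholders, not the paper's data.
import numpy as np

depth = np.array([800.0, 1000.0, 1200.0, 1400.0, 1600.0])  # test depths, m
strength = np.array([90.0, 110.0, 135.0, 165.0, 200.0])    # peak strength, MPa (hypothetical)

a, b, c = np.polyfit(depth, strength, deg=2)  # strength ~ a*H^2 + b*H + c
print(f"peak strength ~ {a:.3e}*H^2 + {b:.3e}*H + {c:.2f} MPa")
```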

32 pages, 1435 KB  
Review
Smart Safety Helmets with Integrated Vision Systems for Industrial Infrastructure Inspection: A Comprehensive Review of VSLAM-Enabled Technologies
by Emmanuel A. Merchán-Cruz, Samuel Moveh, Oleksandr Pasha, Reinis Tocelovskis, Alexander Grakovski, Alexander Krainyukov, Nikita Ostrovenecs, Ivans Gercevs and Vladimirs Petrovs
Sensors 2025, 25(15), 4834; https://doi.org/10.3390/s25154834 - 6 Aug 2025
Viewed by 2798
Abstract
Smart safety helmets equipped with vision systems are emerging as powerful tools for industrial infrastructure inspection. This paper presents a comprehensive state-of-the-art review of such VSLAM-enabled (Visual Simultaneous Localization and Mapping) helmets. We survey the evolution from basic helmet cameras to intelligent, sensor-fused inspection platforms, highlighting how modern helmets leverage real-time visual SLAM algorithms to map environments and assist inspectors. A systematic literature search was conducted targeting high-impact journals, patents, and industry reports. We classify helmet-integrated camera systems into monocular, stereo, and omnidirectional types and compare their capabilities for infrastructure inspection. We examine core VSLAM algorithms (feature-based, direct, hybrid, and deep-learning-enhanced) and discuss their adaptation to wearable platforms. Multi-sensor fusion approaches integrating inertial, LiDAR, and GNSS data are reviewed, along with edge/cloud processing architectures enabling real-time performance. This paper compiles numerous industrial use cases, from bridges and tunnels to plants and power facilities, demonstrating significant improvements in inspection efficiency, data quality, and worker safety. Key challenges are analyzed, including technical hurdles (battery life, processing limits, and harsh environments), human factors (ergonomics, training, and cognitive load), and regulatory issues (safety certification and data privacy). We also identify emerging trends, such as semantic SLAM, AI-driven defect recognition, hardware miniaturization, and collaborative multi-helmet systems. This review finds that VSLAM-equipped smart helmets offer a transformative approach to infrastructure inspection, enabling real-time mapping, augmented awareness, and safer workflows. We conclude by highlighting current research gaps, notably in standardizing systems and integrating with asset management, and provide recommendations for industry adoption and future research directions.
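
For context on the feature-based VSLAM category discussed in the review, the sketch below shows a typical front-end step: detecting and matching ORB features between consecutive helmet-camera frames with OpenCV. It is a generic illustration, not code from any surveyed system.

```python
# Generic feature-based VSLAM front end: ORB detection + matching between frames.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_frames(prev_gray, curr_gray):
    """Return matched keypoint pairs between two consecutive grayscale frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []                                  # texture-poor frame: nothing to match
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # The pairs feed pose estimation (e.g., essential-matrix RANSAC) in a full pipeline.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
```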

13 pages, 2559 KB  
Article
An AI Approach to Markerless Augmented Reality in Surgical Robots
by Abhishek Shankar, Luay Jawad and Abhilash Pandya
Robotics 2025, 14(7), 99; https://doi.org/10.3390/robotics14070099 - 19 Jul 2025
Viewed by 1115
Abstract
This paper examines the integration of markerless augmented reality (AR) within the da Vinci Surgical Robot, utilizing artificial intelligence (AI) for improved precision. The main challenge in creating AR for these systems is the small size (5 mm diameter) of the cameras used. Traditional camera-calibration approaches produce significant errors when applied to miniature cameras, and the use of external markers can be obstructive and inaccurate in dynamic surgical environments. The study focuses on overcoming these limitations of traditional AR methods by employing advanced neural networks for camera calibration and real-time image processing. We demonstrate the use of a dense neural network to reduce the total projection error by directly learning the mapping of a 3D point to a 2D image plane. The results show a median error of 7 pixels (1.4 mm) when using the neural network, compared to an error of 50 pixels (10 mm) with a more traditional approach involving camera calibration and robot kinematics. This approach not only enhances the accuracy of AR for surgical procedures but also offers more seamless integration with existing robotic platforms. These findings underscore the potential of AI to revolutionize AR applications in medical robotics and other teleoperated systems, promising more efficient and safer interventions.
(This article belongs to the Section Medical Robotics and Service Robotics)
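
The core idea, replacing calibrated projection with a learned mapping from 3-D points to 2-D pixel coordinates, can be sketched as a small dense network trained on observed point/pixel pairs. The layer sizes and training setup below are assumptions for illustration, not the authors' network.

```python
# Sketch: a dense network that learns the 3D-point -> 2D-pixel mapping directly.
import torch
import torch.nn as nn

class ProjectionNet(nn.Module):
    def __init__(self, hidden=128):               # hidden width is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                  # (u, v) pixel coordinates
        )

    def forward(self, xyz):                        # xyz: (N, 3) points in the robot frame
        return self.net(xyz)

model = ProjectionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                             # mean squared reprojection error

def train_step(xyz, uv):
    """One step against observed (3D point, 2D pixel) training pairs."""
    opt.zero_grad()
    loss = loss_fn(model(xyz), uv)
    loss.backward()
    opt.step()
    return loss.item()
```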

20 pages, 7588 KB  
Article
Dual-Purpose Star Tracker and Space Debris Detector: Miniature Instrument for Small Satellites
by Beltran N. Arribas, João G. Maia, João P. Castanheira, Joel Filho, Rui Melicio, Hugo Onderwater, Paulo Gordo, R. Policarpo Duarte and André R. R. Silva
J. Sens. Actuator Netw. 2025, 14(4), 75; https://doi.org/10.3390/jsan14040075 - 16 Jul 2025
Viewed by 1490
Abstract
This paper presents the conception, design, and implementation of a real miniature dual-purpose sensor for small satellites that can act as a star tracker and space debris detector. In previous work, the authors conceived, designed, and implemented a breadboard consisting of a laptop computer, a camera interface and camera controller, an image sensor, an optics system, a temperature sensor, and a temperature controller, which showed that the instrument was feasible. In this paper, a new miniature star tracker instrument is designed, physically realized, and tested. The implementation follows a New Space approach: it is built from Commercial Off-the-Shelf (COTS) components with space heritage. The instrument's development, implementation, and testing are presented.

29 pages, 10932 KB  
Article
On-Orbit Performance and Hyperspectral Data Processing of the TIRSAT CubeSat Mission
by Yoshihide Aoyanagi, Tomofumi Doi, Hajime Arai, Yoshihisa Shimada, Masakazu Yasuda, Takahiro Yamazaki and Hiroshi Sawazaki
Remote Sens. 2025, 17(11), 1903; https://doi.org/10.3390/rs17111903 - 30 May 2025
Cited by 1 | Viewed by 1358
Abstract
A miniaturized hyperspectral camera, developed by integrating a linear variable band-pass filter (LVBPF) with an image sensor, was installed on the TIRSAT 3U CubeSat, launched on 17 February 2024 by Japan's H3 launch vehicle. The satellite and its onboard hyperspectral camera conducted on-orbit experiments and successfully acquired hyperspectral data from multiple locations. The required attitude control for the hyperspectral mission was also achieved. CubeSat-based hyperspectral missions often face challenges in image alignment due to factors such as parallax, distortion, and limited attitude stability. This study presents solutions to these issues, supported by actual observational hyperspectral data. To verify the consistency of the hyperspectral data acquired by TIRSAT and processed using the proposed method, a validation analysis was conducted.
(This article belongs to the Special Issue Advances in CubeSats for Earth Observation)
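
For readers unfamiliar with LVBPF imaging: the filter's passband center varies roughly linearly across the sensor, so each image row samples a different wavelength, and platform motion sweeps every ground point through all bands. The sketch below shows that bookkeeping in its simplest form, assuming ideal, already co-registered frames; real processing must correct the parallax, distortion, and attitude effects the paper addresses.

```python
# Simplified LVBPF cube assembly: each sensor row sees one wavelength band, and
# successive (ideally co-registered) frames sweep the scene through all bands.
import numpy as np

def band_wavelengths(n_rows, lam_min_nm, lam_max_nm):
    # Passband center assumed to vary linearly across the sensor rows.
    return np.linspace(lam_min_nm, lam_max_nm, n_rows)

def assemble_cube(frames, shift_rows_per_frame=1):
    """frames: list of (H, W) images taken as the scene drifts by
    shift_rows_per_frame rows between exposures (idealized motion)."""
    H, W = frames[0].shape
    cube = np.zeros((H, W, H), dtype=np.float32)   # (ground row, column, band = sensor row)
    hits = np.zeros_like(cube)
    for k, frame in enumerate(frames):
        for row in range(H):
            y = row - k * shift_rows_per_frame     # ground row this sensor row imaged
            if 0 <= y < H:
                cube[y, :, row] += frame[row, :]
                hits[y, :, row] += 1
    return cube / np.maximum(hits, 1)              # average repeated observations
```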

19 pages, 6054 KB  
Article
Advancements in Aircraft Engine Inspection: A MEMS-Based 3D Measuring Borescope
by Jonathan Gail, Felix Kruse, Shanshan Gu-Stoppel, Ole Schmedemann, Günther Leder, Wolfgang Reinert, Lena Wysocki, Nils Burmeister, Lars Ratzmann, Thorsten Giese, Patrick Schütt, Gundula Piechotta and Thorsten Schüppstuhl
Aerospace 2025, 12(5), 419; https://doi.org/10.3390/aerospace12050419 - 8 May 2025
Viewed by 971
Abstract
Aircraft engines are regularly inspected with borescopes to detect faults at an early stage and maintain airworthiness. A critical part of this inspection process is accurately measuring any detected damage to determine whether it exceeds allowable limits. Current state-of-the-art borescope measurement techniques—primarily stereo camera systems and pattern projection—face significant challenges when engines lack sufficient surface features or when illumination is inadequate for reliable stereo matching. MEMS-based 3D scanners address these issues by focusing laser light onto a small spot, reducing dependency on surface texture and improving illumination. However, miniaturized MEMS-based scanner borescopes that can pass through standard engine inspection ports are not yet available. This work examines the essential steps to downsize MEMS 3D scanners for direct integration into borescope inspections, thereby enhancing the accuracy and reliability of aircraft engine fault detection.
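
As background on how a scanned-laser-spot 3-D measurement works in principle, the sketch below applies the classic active-triangulation relation: with a known baseline between the scanner and the camera and the spot's observed pixel position, depth follows from geometry. The formula and values are textbook triangulation assumptions, not the specifics of this MEMS borescope.

```python
# Textbook active-triangulation depth for a scanned laser spot (illustrative only).
import math

def spot_depth(baseline_mm, laser_angle_rad, pixel_x, focal_px, cx):
    """Depth of a laser spot seen by a camera offset from the scanner by baseline_mm.
    laser_angle_rad: MEMS mirror steering angle from the baseline normal.
    pixel_x, focal_px, cx: spot column, focal length, and principal point in pixels."""
    camera_angle = math.atan2(pixel_x - cx, focal_px)      # ray angle at the camera
    denom = math.tan(laser_angle_rad) + math.tan(camera_angle)
    # The laser ray and the camera ray intersect at the measured surface point.
    return baseline_mm / denom if abs(denom) > 1e-9 else float("inf")

print(f"{spot_depth(5.0, math.radians(20), 340.0, 600.0, 320.0):.2f} mm")
```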

14 pages, 4647 KB  
Article
Rotary Panoramic and Full-Depth-of-Field Imaging System for Pipeline Inspection
by Qiang Xing, Xueqin Zhao, Kun Song, Jiawen Jiang, Xinhao Wang, Yuanyuan Huang and Haodong Wei
Sensors 2025, 25(9), 2860; https://doi.org/10.3390/s25092860 - 30 Apr 2025
Viewed by 726
Abstract
To address the limited adaptability and insufficient imaging quality of conventional in-pipe imaging techniques for irregular pipelines and unstructured scenes, this study proposes a novel radial rotating full-depth-of-field focusing imaging system designed to adapt to the structural complexities of irregular pipelines, which can effectively acquire fine details at depths of 300–960 mm inside the pipeline. Firstly, a fast full-depth-of-field imaging method driven by depth features is proposed. Secondly, a full-depth rotating imaging apparatus is developed, incorporating a zoom camera, a miniature servo rotation mechanism, and a control system, enabling 360° multi-view, full-depth-of-field focusing imaging. Finally, full-depth-of-field focusing imaging experiments are carried out on pipelines with depth-varying characteristics. The results demonstrate that the imaging device can acquire depth data of the pipeline interior and rapidly obtain high-definition characterization sequence images of the inner pipeline wall. In depth-of-field segmentation with multiple view angles, the clarity of the fused image is improved by 75.3% relative to a single frame, and the SNR and PSNR reach 6.9 dB and 26.3 dB, respectively. Compared to existing pipeline closed-circuit television (CCTV) and other in-pipe imaging techniques, the developed rotating imaging system exhibits high integration, faster imaging, and adaptive capacity. This system provides an adaptive imaging solution for detecting defects on the inner surfaces of irregular pipelines, offering significant potential for practical applications in pipeline inspection and maintenance.
(This article belongs to the Special Issue Sensors in 2025)
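
Full-depth-of-field fusion of a focal stack is commonly done by selecting, per pixel, the frame with the highest local sharpness. The sketch below implements that generic focus-stacking rule with a Laplacian sharpness measure; it illustrates the idea only and is not the paper's depth-feature-driven method.

```python
# Generic focus stacking: per-pixel selection of the sharpest frame in a focal stack.
import cv2
import numpy as np

def fuse_focal_stack(frames):
    """frames: list of aligned BGR images focused at different depths."""
    sharpness = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray, cv2.CV_64F)
        sharpness.append(cv2.GaussianBlur(lap * lap, (9, 9), 0))  # local focus measure
    best = np.argmax(np.stack(sharpness), axis=0)   # sharpest frame index per pixel
    stack = np.stack(frames)                        # (N, H, W, 3)
    h, w = best.shape
    yy, xx = np.mgrid[0:h, 0:w]
    return stack[best, yy, xx]                      # fused all-in-focus image
```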

11 pages, 3253 KB  
Article
Development of a Smartphone-Linked Immunosensing System for Oxytocin Determination
by Miku Sarubo, Yoka Suzuki, Yuka Numazaki and Hiroyuki Kudo
Biosensors 2025, 15(4), 261; https://doi.org/10.3390/bios15040261 - 18 Apr 2025
Viewed by 732
Abstract
We report an optical immunosensing system for oxytocin (OXT) based on image analysis of color reactions in an enzyme-linked immunosorbent assay (ELISA). We employed a miniaturized optical immunosensing unit that was functionally connected to an LED and a smartphone camera. Our system measures OXT levels using a metric called the RGBscore, which is derived from the red, green, and blue (RGB) information in the captured images. Because the RGBscore is calibrated regressively using a brute-force search, the approach can be applied to smartphones with various CMOS image sensors and firmware. The lower detection limit was determined to be 5.26 pg/mL, and the measurement results showed a high correlation (r = 0.972) with those obtained from conventional ELISA. These results suggest the system's potential for application in simplified personal health management.
(This article belongs to the Special Issue Biosensors Based on Microfluidic Devices—2nd Edition)
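
A brute-force calibration of an RGB-derived score can be sketched as a grid search over channel weights that maximizes correlation with known concentrations. The weighted-sum form of the score and the weight grid below are assumptions for illustration, not the paper's exact definition of the RGBscore.

```python
# Hypothetical RGB-score calibration: grid-search channel weights against standards.
import itertools
import numpy as np

def brute_force_weights(mean_rgbs, concentrations, steps=21):
    """mean_rgbs: (N, 3) mean RGB of assay images; concentrations: (N,) known OXT levels.
    Returns the weight triple whose weighted RGB sum best correlates with concentration."""
    grid = np.linspace(-1.0, 1.0, steps)
    best_w, best_r = None, -1.0
    for w in itertools.product(grid, repeat=3):
        scores = mean_rgbs @ np.asarray(w)       # assumed weighted-sum score
        if np.std(scores) == 0.0:
            continue
        r = abs(np.corrcoef(scores, concentrations)[0, 1])
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r
```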

26 pages, 22584 KB  
Article
Expansion of Output Spatial Extent in the Wavenumber Domain Algorithms for Near-Field 3-D MIMO Radar Imaging
by Yifan Gong, Limin Zhai, Yan Jia, Yongqing Liu and Xiangkun Zhang
Remote Sens. 2025, 17(7), 1287; https://doi.org/10.3390/rs17071287 - 4 Apr 2025
Viewed by 691
Abstract
A microwave camera provides 3-D high-resolution radar images at video frame rates, enabling the capture of dynamic target features. A multiple-input–multiple-output (MIMO) array-based 3-D radar imaging system requires fewer antennas, which effectively reduces hardware costs. Because the computational resources of a miniaturized MIMO microwave camera are limited, real-time processing of the large volume of 3-D echo data requires an imaging algorithm that offers both real-time performance and a large output spatial extent. This paper demonstrates, through theoretical derivation and simulation, the limited output spatial extent and the spatial aliasing of existing MIMO wavenumber domain algorithms. To suppress aliasing while expanding the output spatial extent, an optimization approach for the wavenumber domain algorithms is proposed. The improved wavenumber domain algorithms divide the target area into multiple sub-blocks; a broader range of imaging results is obtained through independent imaging of the sub-blocks and a spatial aliasing suppression filter. Simulation results show that the improved wavenumber domain algorithms effectively suppress the aliasing energy of each sub-block while maintaining the advantage of low time complexity, achieving expansion of the output spatial extent of existing MIMO wavenumber domain algorithms.
(This article belongs to the Special Issue Array and Signal Processing for Radar)
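
As a point of reference for what wavenumber-domain algorithms accelerate, the sketch below implements brute-force matched-filter backprojection for a near-field MIMO geometry, evaluated independently over one sub-block of the output region in the spirit of the paper's sub-block decomposition. The geometry and data layout are illustrative assumptions.

```python
# Brute-force near-field MIMO backprojection over one output sub-block (illustrative).
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def backproject_block(echo, tx_pos, rx_pos, freqs, voxels):
    """echo: (n_tx, n_rx, n_freq) complex samples; tx_pos/rx_pos: (n, 3) antenna positions;
    voxels: (n_vox, 3) points of ONE output sub-block. Returns the (n_vox,) complex image."""
    img = np.zeros(len(voxels), dtype=complex)
    for i, p in enumerate(voxels):
        d_tx = np.linalg.norm(tx_pos - p, axis=1)        # (n_tx,) transmit path lengths
        d_rx = np.linalg.norm(rx_pos - p, axis=1)        # (n_rx,) receive path lengths
        rt = d_tx[:, None, None] + d_rx[None, :, None]   # round-trip distance, (n_tx, n_rx, 1)
        phase = np.exp(1j * 2 * np.pi * freqs[None, None, :] * rt / C)
        img[i] = np.sum(echo * phase)                    # matched-filter focusing
    return img
```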

9 pages, 2881 KB  
Article
Compact Near-Infrared Imaging Device Based on a Large-Aperture All-Si Metalens
by Zhixi Li, Wei Liu, Yubing Zhang, Feng Tang, Liming Yang and Xin Ye
Nanomaterials 2025, 15(6), 453; https://doi.org/10.3390/nano15060453 - 17 Mar 2025
Cited by 1 | Viewed by 1065
Abstract
Near-infrared imaging devices are extensively used in medical diagnosis, night vision, and security monitoring. However, traditional imaging devices rely on stacks of refractive lenses, resulting in large, bulky imaging systems that restrict their broader utility. The emergence of flat meta-optics offers a potential solution to these limitations, but existing research on compact integrated devices based on near-infrared meta-optics is insufficient. In this study, we propose an integrated NIR imaging camera that utilizes a large-aperture all-silicon metalens with high transmission efficiency. Imaging tests on targets and on animal and plant tissue samples verified the device's ability to capture biological structures and its imaging performance. Through further integration, the device significantly reduces the size and weight of the system, and its optimized aperture achieves excellent image brightness and contrast. Additionally, venous imaging of human skin shows the potential of the device for biomedical applications. This research plays an important role in promoting the miniaturization and weight reduction of near-infrared optical imaging devices, which are expected to be applied in medical testing and night vision imaging.
(This article belongs to the Special Issue The Interaction of Electron Phenomena on the Mesoscopic Scale)
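
A metalens focuses by imposing a radially varying phase, and the standard hyperbolic profile phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2)) is the usual design target. The sketch below evaluates it across the aperture; the wavelength and focal length are placeholders, not this lens's design parameters.

```python
# Standard hyperbolic metalens phase profile (design values are placeholders).
import numpy as np

lam = 940e-9   # design wavelength, m (hypothetical NIR value)
f = 10e-3      # focal length, m (hypothetical)
r = np.linspace(0.0, 5e-3, 1001)                        # radius across the aperture, m

phi = (2 * np.pi / lam) * (f - np.sqrt(r**2 + f**2))    # required phase at radius r
phi_wrapped = np.mod(phi, 2 * np.pi)                    # phase each nanopillar must impart
```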

25 pages, 9497 KB  
Article
Concealed Weapon Detection Using Thermal Cameras
by Juan D. Muñoz, Jesus Ruiz-Santaquiteria, Oscar Deniz and Gloria Bueno
J. Imaging 2025, 11(3), 72; https://doi.org/10.3390/jimaging11030072 - 26 Feb 2025
Cited by 4 | Viewed by 4864
Abstract
In an era where security concerns are ever-increasing, the need for advanced technology to detect visible and concealed weapons has become critical. This paper introduces a novel two-stage method for concealed handgun detection, leveraging thermal imaging and deep learning, offering a potential real-world solution for law enforcement and surveillance applications. The approach first detects potential firearms at the frame level and subsequently verifies their association with a detected person, significantly reducing false positives and false negatives. Alarms are triggered only under specific conditions to ensure accurate and reliable detection, with precautionary alerts raised if a firearm is identified but no person is detected. Key contributions include a lightweight algorithm optimized for low-end embedded devices, making it suitable for wearable and mobile applications, and the creation of a tailored thermal dataset for controlled concealment scenarios. The system is implemented on a chest-worn Android smartphone with a miniature thermal camera, enabling hands-free operation. Experimental results validate the method's effectiveness, achieving an mAP@50-95 of 64.52% on our dataset and improving on state-of-the-art methods. By reducing false negatives and improving reliability, this study offers a scalable, practical solution for security applications.
(This article belongs to the Special Issue Object Detection in Video Surveillance Systems)
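
The two-stage decision outlined above, detect firearms per frame and then verify their association with a detected person before alarming, can be sketched as simple post-processing over detector outputs. The detection format and the proximity test below are assumptions for illustration, not the paper's exact rule.

```python
# Sketch of the two-stage alarm logic over per-frame detections (assumed format).
from dataclasses import dataclass

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float
    label: str                         # "handgun" or "person"

def near(gun: Box, person: Box, margin: float = 20.0) -> bool:
    """True when the gun box overlaps or sits within `margin` pixels of the person box."""
    return not (gun.x2 < person.x1 - margin or gun.x1 > person.x2 + margin or
                gun.y2 < person.y1 - margin or gun.y1 > person.y2 + margin)

def decide(detections):
    guns = [d for d in detections if d.label == "handgun"]
    people = [d for d in detections if d.label == "person"]
    if not guns:
        return "no_alarm"
    if any(near(g, p) for g in guns for p in people):
        return "alarm"                 # firearm associated with a detected person
    return "precautionary_alert"       # firearm seen but no person detected
```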

29 pages, 4988 KB  
Article
Interaction Glove for 3-D Virtual Environments Based on an RGB-D Camera and Magnetic, Angular Rate, and Gravity Micro-Electromechanical System Sensors
by Pontakorn Sonchan, Neeranut Ratchatanantakit, Nonnarit O-Larnnithipong, Malek Adjouadi and Armando Barreto
Information 2025, 16(2), 127; https://doi.org/10.3390/info16020127 - 9 Feb 2025
Cited by 1 | Viewed by 3694
Abstract
This paper presents the theoretical foundation, practical implementation, and empirical evaluation of a glove for interaction with 3-D virtual environments. At the dawn of the "Spatial Computing Era", where users continuously interact with 3-D Virtual and Augmented Reality environments, the need for a practical and intuitive interaction system that can efficiently engage 3-D elements is becoming pressing. Over the last few decades, there have been attempts to provide such an interaction mechanism using a glove. However, glove systems are currently not in widespread use due to their high cost and, we propose, due to their inability to sustain high levels of performance in certain situations. Performance deterioration has been observed due to the distortion of the local magnetic field caused by ordinary ferromagnetic objects near the glove's operating space. Reliable hand-tracking gloves could provide a next generation of improved solutions in several areas, such as American Sign Language training and automatic translation to text, or training and evaluation for activities that require fine motor skills of the hands (e.g., playing some musical instruments, training of surgeons, etc.). While the use of a hand-tracking glove toward these goals seems intuitive, some currently available glove systems may not meet the accuracy and reliability levels required for those use cases. This paper describes our concept of an interaction glove instrumented with miniature magnetic, angular rate, and gravity (MARG) sensors and aided by a single off-the-shelf red, green, and blue–depth (RGB-D) camera. We describe a proof-of-concept implementation of the system using our custom "GMVDK" orientation estimation algorithm, and report the glove's empirical evaluation with human-subject performance tests. The results show that the prototype glove, using the GMVDK algorithm, is able to operate without performance losses, even in magnetically distorted environments.
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)
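
While GMVDK itself is custom, the general problem it addresses, fusing gyroscope, accelerometer, and magnetometer data while rejecting magnetic distortion, can be illustrated with a basic complementary filter that ignores the magnetometer whenever the measured field magnitude deviates from the expected local value. Gains, thresholds, and the small-angle treatment of gyro rates below are illustrative simplifications, not the paper's algorithm.

```python
# Basic MARG complementary filter with magnetometer gating (illustrative, not GMVDK).
import numpy as np

EXPECTED_FIELD_UT = 50.0   # expected local geomagnetic magnitude, uT (assumed)
FIELD_TOL_UT = 10.0        # ignore the magnetometer beyond this deviation
ALPHA = 0.98               # gyro-vs-reference blending gain (assumed)

def step(euler, gyro_rps, accel_g, mag_ut, dt):
    """euler: np.array([roll, pitch, yaw]) in rad; gyro in rad/s; returns the update.
    Body rates are treated as Euler rates, a small-angle simplification."""
    roll_g, pitch_g, yaw_g = euler + gyro_rps * dt            # gyro propagation
    ax, ay, az = accel_g
    roll_a = np.arctan2(ay, az)                               # gravity reference
    pitch_a = np.arctan2(-ax, np.hypot(ay, az))
    roll = ALPHA * roll_g + (1 - ALPHA) * roll_a
    pitch = ALPHA * pitch_g + (1 - ALPHA) * pitch_a
    yaw = yaw_g
    if abs(np.linalg.norm(mag_ut) - EXPECTED_FIELD_UT) < FIELD_TOL_UT:
        yaw_m = np.arctan2(-mag_ut[1], mag_ut[0])             # flat-case heading (no tilt comp.)
        yaw = ALPHA * yaw_g + (1 - ALPHA) * yaw_m             # trust mag only when undistorted
    return np.array([roll, pitch, yaw])
```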

25 pages, 9045 KB  
Article
Deep Learning-Enhanced Portable Chemiluminescence Biosensor: 3D-Printed, Smartphone-Integrated Platform for Glucose Detection
by Chirag M. Singhal, Vani Kaushik, Abhijeet Awasthi, Jitendra B. Zalke, Sangeeta Palekar, Prakash Rewatkar, Sanjeet Kumar Srivastava, Madhusudan B. Kulkarni and Manish L. Bhaiyya
Bioengineering 2025, 12(2), 119; https://doi.org/10.3390/bioengineering12020119 - 27 Jan 2025
Cited by 15 | Viewed by 2775
Abstract
A novel, portable chemiluminescence (CL) sensing platform powered by deep learning and smartphone integration has been developed for cost-effective and selective glucose detection. This platform features low-cost, wax-printed micro-pads (WPµ-pads) on paper-based substrates used to construct a miniaturized CL sensor. A 3D-printed black box serves as a compact WPµ-pad sensing chamber, replacing traditional bulky equipment such as charge-coupled device (CCD) cameras and optical sensors. Smartphone integration enables a seamless and user-friendly diagnostic experience, making the platform highly suitable for point-of-care (PoC) applications. Machine learning models significantly enhance the platform's performance, offering superior accuracy and efficiency in CL image analysis. A dataset of 600 experimental CL images was used, with 80% for model training and 20% reserved for testing. A comparative analysis of multiple machine learning and deep learning models, including Random Forest, the Support Vector Machine (SVM), InceptionV3, VGG16, and ResNet-50, was conducted to identify the optimal architecture for accurate glucose detection. The CL sensor demonstrates a linear detection range of 10–1000 µM, with a low detection limit of 8.68 µM. Extensive evaluations confirmed its stability, repeatability, and reliability under real-world conditions. This platform not only improves the accuracy of analyte detection but also democratizes access to advanced diagnostics through cost-effective and portable technology, paving the way for next-generation biosensing with transformative potential in healthcare and other domains requiring rapid and reliable analyte detection.
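
A typical way to compare pretrained backbones on a small image set like this is transfer learning with a swapped final layer. The sketch below sets that up for ResNet-50 with an 80/20 split as described; the single-output regression head and data pipeline are assumptions, not the paper's training code.

```python
# Transfer-learning sketch for CL image analysis (assumed regression head).
import torch.nn as nn
from torch.utils.data import random_split
from torchvision import models

backbone = models.resnet50(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False                             # freeze pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 1)     # predict glucose level (assumed)

def split_dataset(dataset):
    """80/20 train/test split of the 600-image set, as described in the abstract.
    `dataset` is assumed to yield (image_tensor, concentration) pairs."""
    n_train = int(0.8 * len(dataset))
    return random_split(dataset, [n_train, len(dataset) - n_train])
```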

24 pages, 8881 KB  
Article
Research on Multimodal Control Method for Prosthetic Hands Based on Visuo-Tactile and Arm Motion Measurement
by Jianwei Cui and Bingyan Yan
Biomimetics 2024, 9(12), 775; https://doi.org/10.3390/biomimetics9120775 - 19 Dec 2024
Cited by 1 | Viewed by 1569
Abstract
The realization of hand function reengineering using a manipulator is a research hotspot in the field of robotics. In this paper, we propose a multimodal perception and control method for a robotic hand to assist the disabled. The movement of the human hand can be divided into two parts: the coordination of finger posture, and the coordination of the timing of grasping and releasing objects. We therefore first used a pinhole camera to construct a visual device suitable for finger mounting and preclassified object shape based on YOLOv8. Then, a pipeline was proposed that filters multi-frame synthesized point cloud data from a miniature 2D lidar, clusters objects with the DBSCAN algorithm, and matches shapes with the DTW algorithm to further identify the cross-sectional shape and size of the grasped part of the object and realize control of the robot's grasping gesture. Finally, a multimodal perception and control method for prosthetic hands was proposed: to control the grasping attitude, a fusion algorithm based on upper-limb motion state, hand position, and lesser-toe haptic information realizes human-in-the-loop control of the robotic grasping process. The device designed in this paper does not contact the human skin and causes no discomfort, and the completion rate in the grasping experiments reached 91.63%, which indicates that the proposed control method is feasible and applicable.
(This article belongs to the Special Issue Bionic Technology—Robotic Exoskeletons and Prostheses: 2nd Edition)
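
The point-cloud stage described above, DBSCAN clustering of accumulated 2D lidar returns ahead of DTW shape matching, can be sketched with scikit-learn. The parameters and the axis-aligned size estimate are illustrative assumptions.

```python
# Sketch: cluster accumulated 2D lidar points with DBSCAN, then size each cluster.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_cross_sections(points_xy, eps_m=0.01, min_samples=8):
    """points_xy: (N, 2) multi-frame 2D lidar returns in meters (assumed units)."""
    labels = DBSCAN(eps=eps_m, min_samples=min_samples).fit_predict(points_xy)
    clusters = []
    for k in set(labels) - {-1}:                 # label -1 is DBSCAN noise
        pts = points_xy[labels == k]
        width, height = pts.max(axis=0) - pts.min(axis=0)
        clusters.append({"points": pts, "extent_m": (float(width), float(height))})
    return clusters                              # candidates for DTW shape matching
```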