Search Results (1,016)

Search Parameters:
Keywords = digital camera images

21 pages, 10439 KiB  
Article
Camera-Based Vital Sign Estimation Techniques and Mobile App Development
by Tae Wuk Bae, Young Choon Kim, In Ho Sohng and Kee Koo Kwon
Appl. Sci. 2025, 15(15), 8509; https://doi.org/10.3390/app15158509 (registering DOI) - 31 Jul 2025
Viewed by 38
Abstract
In this paper, we propose noncontact heart rate (HR), oxygen saturation (SpO2), and respiratory rate (RR) detection methods using a smartphone camera. The remote PPG (rPPG) signal is obtained using color-difference signal amplification and the plane-orthogonal-to-skin method, and the HR frequency is detected by filtering its power spectral density (PSD). Additionally, SpO2 is detected using the HR frequency and the absorption ratio of the G and B color channels, based on oxyhemoglobin absorption and reflectance theory. After this, the respiratory frequency is detected from the PSD of the rPPG signal through respiratory-frequency-band filtering. For image sequences recorded under various imaging conditions, the proposed method demonstrated superior HR detection accuracy compared to existing methods. The confidence intervals for HR and SpO2 detection were analyzed using Bland–Altman plots. Furthermore, the proposed RR detection method was also verified to be reliable.
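
As a rough illustration of the PSD-based step this abstract describes, the sketch below band-pass filters an rPPG trace and picks the dominant spectral peak to obtain a heart rate; the `rppg` signal, sampling rate, and filter band are placeholder assumptions, not the authors' implementation.

```python
# Minimal sketch: estimate heart rate from an rPPG trace by band-pass
# filtering and picking the peak of the power spectral density (PSD).
# `rppg` (1-D signal) and `fs` (sampling rate, Hz) are assumed inputs.
import numpy as np
from scipy.signal import butter, filtfilt, welch

def estimate_hr_bpm(rppg, fs, lo=0.7, hi=4.0):
    """Return heart rate in beats per minute from an rPPG signal."""
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, rppg)                    # keep the 42-240 bpm band
    freqs, psd = welch(filtered, fs=fs, nperseg=min(len(filtered), 256))
    band = (freqs >= lo) & (freqs <= hi)
    hr_freq = freqs[band][np.argmax(psd[band])]        # dominant frequency
    return hr_freq * 60.0

# Example with a synthetic 72-bpm pulse sampled at 30 fps
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(round(estimate_hr_bpm(rppg, fs), 1))             # close to 72.0
```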

17 pages, 13125 KiB  
Article
Evaluating the Accuracy and Repeatability of Mobile 3D Imaging Applications for Breast Phantom Reconstruction
by Elena Botti, Bart Jansen, Felipe Ballen-Moreno, Ayush Kapila and Redona Brahimetaj
Sensors 2025, 25(15), 4596; https://doi.org/10.3390/s25154596 - 24 Jul 2025
Viewed by 403
Abstract
Three-dimensional imaging technologies are increasingly used in breast reconstructive and plastic surgery due to their potential for efficient and accurate preoperative assessment and planning. This study systematically evaluates the accuracy and consistency of six commercially available 3D scanning applications (apps)—Structure Sensor, 3D Scanner App, Heges, Polycam, SureScan, and Kiri—in reconstructing the female torso. To avoid variability introduced by human subjects, a silicone breast mannequin model was scanned, with fiducial markers placed at known anatomical landmarks. Manual distance measurements were obtained using calipers by two independent evaluators and compared to digital measurements extracted from 3D reconstructions in Blender software. Each scan was repeated six times per application to ensure reliability. SureScan demonstrated the lowest mean error (2.9 mm), followed by Structure Sensor (3.0 mm), Heges (3.6 mm), 3D Scanner App (4.4 mm), Kiri (5.0 mm), and Polycam (21.4 mm), which showed the highest error and variability. Even the app using an external depth sensor (Structure Sensor) showed no statistically significant accuracy advantage over those using only the iPad’s built-in camera (except for Polycam), underscoring that software is the primary driver of performance, not hardware (alone). This work provides practical insights for selecting mobile 3D scanning tools in clinical workflows and highlights key limitations, such as scaling errors and alignment artifacts. Future work should include patient-based validation and explore deep learning to enhance reconstruction quality. Ultimately, this study lays the foundation for more accessible and cost-effective 3D imaging in surgical practice, showing that smartphone-based tools can produce clinically useful scans.
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)
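
The evaluation described above reduces to comparing caliper distances with mesh-derived distances per app; a minimal sketch with placeholder app names and values (not the study's data) follows.

```python
# Minimal sketch of the error metric implied above: mean absolute difference
# between caliper (reference) distances and mesh-derived distances per app.
# App names and values below are placeholders, not the study's data.
import numpy as np

caliper_mm = np.array([52.1, 48.7, 60.3, 75.4])          # manual reference
scans_mm = {
    "AppA": np.array([52.9, 49.5, 61.0, 76.1]),
    "AppB": np.array([54.0, 50.2, 62.8, 78.9]),
}
for app, measured in scans_mm.items():
    mean_err = np.mean(np.abs(measured - caliper_mm))     # mm
    print(f"{app}: mean error = {mean_err:.1f} mm")
```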

14 pages, 2822 KiB  
Article
Accuracy and Reliability of Smartphone Versus Mirrorless Camera Images-Assisted Digital Shade Guides: An In Vitro Study
by Soo Teng Chew, Suet Yeo Soo, Mohd Zulkifli Kassim, Khai Yin Lim and In Meei Tew
Appl. Sci. 2025, 15(14), 8070; https://doi.org/10.3390/app15148070 - 20 Jul 2025
Viewed by 327
Abstract
Image-assisted digital shade guides are increasingly popular for shade matching; however, research on their accuracy remains limited. This study aimed to compare the accuracy and reliability of color coordination in image-assisted digital shade guides constructed using calibrated images of their shade tabs captured by a mirrorless camera (Canon, Tokyo, Japan) (MC-DSG) and a smartphone camera (Samsung, Seoul, Korea) (SC-DSG), using a spectrophotometer as the reference standard. Twenty-nine VITA Linearguide 3D-Master shade tabs were photographed under controlled settings with both cameras equipped with cross-polarizing filters. Images were calibrated using Adobe Photoshop (Adobe Inc., San Jose, CA, USA). The L* (lightness), a* (red–green chromaticity), and b* (yellow–blue chromaticity) values, which represent the color attributes in the CIELAB color space, were computed at the middle third of each shade tab using Adobe Photoshop. Specifically, L* indicates the brightness of a color (ranging from black [0] to white [100]), a* denotes the position between red (+a*) and green (−a*), and b* represents the position between yellow (+b*) and blue (−b*). These values were used to quantify tooth shade and compare them to reference measurements obtained from a spectrophotometer (VITA Easyshade V, VITA Zahnfabrik, Bad Säckingen, Germany). Mean color differences (∆E00) between MC-DSG and SC-DSG, relative to the spectrophotometer, were compared using an independent t-test. The ∆E00 values were also evaluated against perceptibility (PT = 0.8) and acceptability (AT = 1.8) thresholds. Reliability was evaluated using intraclass correlation coefficients (ICC), and group differences were analyzed via one-way ANOVA and Bonferroni post hoc tests (α = 0.05). SC-DSG showed significantly lower ΔE00 deviations than MC-DSG (p < 0.001), falling within the clinical acceptability threshold. The L* values from MC-DSG were significantly higher than those from SC-DSG (p = 0.024). All methods showed excellent reliability (ICC > 0.9). The findings support the potential of smartphone image-assisted digital shade guides for accurate and reliable tooth shade assessment.
(This article belongs to the Special Issue Advances in Dental Materials, Instruments, and Their New Applications)
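
For readers unfamiliar with the ΔE00 comparison described above, the sketch below computes CIEDE2000 differences against a spectrophotometer reference and checks them against the PT = 0.8 and AT = 1.8 thresholds; the Lab values are placeholders, and scikit-image's `deltaE_ciede2000` stands in for whatever implementation the authors used.

```python
# Minimal sketch, assuming CIELAB values are already extracted from the
# calibrated images: compute CIEDE2000 differences against the
# spectrophotometer reference and compare them with the PT/AT thresholds.
import numpy as np
from skimage.color import deltaE_ciede2000

PT, AT = 0.8, 1.8                                  # perceptibility / acceptability

lab_reference = np.array([[72.3, 1.4, 18.2]])      # spectrophotometer (placeholder)
lab_camera    = np.array([[71.6, 1.9, 19.0]])      # image-derived shade tab (placeholder)

de00 = deltaE_ciede2000(lab_reference, lab_camera)[0]
verdict = "below PT" if de00 <= PT else "within AT" if de00 <= AT else "above AT"
print(f"dE00 = {de00:.2f} ({verdict})")
```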

17 pages, 2840 KiB  
Article
A Digital Twin System for the Sitting-to-Standing Motion of the Knee Joint
by Tian Liu, Liangzheng Sun, Chaoyue Sun, Zhijie Chen, Jian Li and Peng Su
Electronics 2025, 14(14), 2867; https://doi.org/10.3390/electronics14142867 - 18 Jul 2025
Viewed by 228
Abstract
(1) Background: A severe decline in knee joint function significantly affects the mobility of the elderly, making it a key concern in the field of geriatric health. To alleviate the pressure on the knee joints of the elderly during daily movements such as sitting and standing, effective biomechanical solutions are required. (2) Methods: In this study, a biomechanical framework was established based on mechanical analysis to derive the transfer relationship between the ground reaction force and the knee joint moment. Experiments were designed to collect knee joint data on the elderly during the sit-to-stand process. Meanwhile, magnetic resonance imaging (MRI) images were processed through a medical imaging control system to construct a detailed digital 3D knee joint model. A finite element analysis was used to verify the model to ensure the accuracy of its structure and mechanical properties. An improved radial basis function was used to fit the pressure during the entire sit-to-stand conversion process to reduce the computational workload, with an error of less than 5%. In addition, a small-target human key point recognition network was developed to analyze the image sequences captured by the camera. The knee joint angle and the knee joint pressure distribution during the sit-to-stand conversion process were mapped to a three-dimensional interactive platform to form a digital twin system. (3) Results: The system can effectively capture the biomechanical behavior of the knee joint during movement and shows high accuracy in joint angle tracking and structure simulation. (4) Conclusions: This study provides an accurate and comprehensive method for analyzing the biomechanical characteristics of the knee joint during the movement of the elderly, laying a solid foundation for clinical rehabilitation research and the design of assistive devices in the field of rehabilitation medicine.
(This article belongs to the Section Artificial Intelligence)
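
As a sketch of the radial-basis-function fitting idea mentioned above (not the authors' improved RBF), the snippet below fits an RBF surrogate to a handful of placeholder finite-element pressure samples so intermediate configurations can be queried without rerunning the simulation.

```python
# Minimal sketch of RBF fitting: interpolate sparse finite-element pressure
# samples so intermediate knee-flexion configurations can be queried cheaply.
# Sample points and pressures are placeholders, not the study's data.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Training data: (flexion angle in degrees, normalized body-weight fraction)
samples = np.array([[10, 0.2], [30, 0.5], [60, 0.8], [90, 1.0]], dtype=float)
pressure_mpa = np.array([1.1, 2.4, 4.0, 5.3])           # peak contact pressure

rbf = RBFInterpolator(samples, pressure_mpa, kernel="thin_plate_spline")

query = np.array([[45.0, 0.65]])                         # unseen configuration
print(f"predicted peak pressure: {rbf(query)[0]:.2f} MPa")
```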

18 pages, 8486 KiB  
Article
An Efficient Downwelling Light Sensor Data Correction Model for UAV Multi-Spectral Image DOM Generation
by Siyao Wu, Yanan Lu, Wei Fan, Shengmao Zhang, Zuli Wu and Fei Wang
Drones 2025, 9(7), 491; https://doi.org/10.3390/drones9070491 - 11 Jul 2025
Viewed by 210
Abstract
The downwelling light sensor (DLS) is the industry-standard solution for generating UAV-based digital orthophoto maps (DOMs). Current mainstream DLS correction methods primarily rely on angle compensation. However, due to the temporal mismatch between the DLS sampling intervals and the exposure times of multispectral cameras, as well as external disturbances such as strong wind gusts and abrupt changes in flight attitude, DLS data often become unreliable, particularly at UAV turning points. Building upon traditional angle compensation methods, this study proposes an improved correction approach—FIM-DC (Fitting and Interpolation Model-based Data Correction)—specifically designed for data collection under clear-sky conditions and stable atmospheric illumination, with the goal of significantly enhancing the accuracy of reflectance retrieval. The method addresses three key issues: (1) field tests conducted in the Qingpu region show that FIM-DC markedly reduces the standard deviation of reflectance at tie points across multiple spectral bands and flight sessions, with the most substantial reduction from 15.07% to 0.58%; (2) it effectively mitigates inconsistencies in reflectance within image mosaics caused by anomalous DLS readings, thereby improving the uniformity of DOMs; and (3) FIM-DC accurately corrects the spectral curves of six land cover types in anomalous images, making them consistent with those from non-anomalous images. In summary, this study demonstrates that integrating FIM-DC into DLS data correction workflows for UAV-based multispectral imagery significantly enhances reflectance calculation accuracy and provides a robust solution for improving image quality under stable illumination conditions.
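
In the spirit of a fit-and-interpolate correction (this is not the paper's FIM-DC implementation), the sketch below fits a smooth trend to DLS irradiance over time, flags readings that deviate strongly, as can happen at turning points, and re-samples the trend at image exposure times; all values are synthetic placeholders.

```python
# Minimal sketch: fit a smooth trend to DLS irradiance over time, flag
# anomalous samples, and interpolate corrected values at camera exposure
# timestamps. All values are synthetic placeholders.
import numpy as np

t_dls = np.linspace(0, 100, 51)                          # DLS timestamps (s)
irr = 950 + 0.4 * t_dls + np.random.normal(0, 2, t_dls.size)
irr[25] += 80                                            # simulated turning-point spike

coeffs = np.polyfit(t_dls, irr, deg=2)                   # smooth illumination trend
trend = np.polyval(coeffs, t_dls)
outliers = np.abs(irr - trend) > 3 * np.std(irr - trend) # flag anomalous readings

irr_clean = np.where(outliers, trend, irr)               # replace flagged samples
t_exposure = np.array([12.3, 47.8, 83.1])                # camera trigger times (s)
irr_at_exposure = np.interp(t_exposure, t_dls, irr_clean)
print(np.round(irr_at_exposure, 1))
```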

19 pages, 4423 KiB  
Review
Laser Active Optical Systems (LAOSs) for Material Processing
by Vladimir Chvykov
Micromachines 2025, 16(7), 792; https://doi.org/10.3390/mi16070792 - 2 Jul 2025
Viewed by 457
Abstract
The output energy of Laser Active Optical Systems (LAOSs), in which image brightness is amplified within the laser-active medium, is always higher than the input energy. This contrasts with conventional optical systems (OSs). As a result, a LAOS enables the creation of laser beams with tailored energy distribution across the aperture, making them ideal for material processing applications. This concept was first successfully implemented using metal vapor lasers as the gain medium. In these systems, material processing was achieved by using a laser beam that either carried the required energy profile or the image of the object itself. Later, other laser media were utilized for LAOSs, including barium vapor, strontium vapor, excimer XeCl lasers, and solid-state media. Additionally, during the development of these systems, several modifications were introduced. For example, Space-Time Light Modulators (STLMs) and CCD cameras were incorporated, along with the use of multipass amplifiers, disk-shaped or thin-disk (TD) solid-state laser amplifiers, and other advancements. These techniques have significantly expanded the range of power, energy, pulse durations, and operating wavelengths. Currently, TD laser amplifiers and STLMs based on Digital Light Processor (DLP) technology or Digital Micromirror Devices (DMDs) enhance the potential to develop LAOS devices for Subtractive and Additive Technologies (ST, AT), applicable in both macromachining (cutting, welding, drilling) and micro-nano processing. This review presents comparable characteristics and requirements for these various LAOS applications.
(This article belongs to the Special Issue Optical and Laser Material Processing, 2nd Edition)

16 pages, 1012 KiB  
Article
Digital Dentistry and Imaging: Comparing the Performance of Smartphone and Professional Cameras for Clinical Use
by Omar Hasbini, Louis Hardan, Naji Kharouf, Carlos Enrique Cuevas-Suárez, Khalil Kharma, Carol Moussa, Nicolas Nassar, Aly Osman, Monika Lukomska-Szymanska, Youssef Haikel and Rim Bourgi
Prosthesis 2025, 7(4), 77; https://doi.org/10.3390/prosthesis7040077 - 2 Jul 2025
Viewed by 377
Abstract
Background: Digital dental photography is increasingly essential for documentation and smile design. This study aimed to compare the linear measurement accuracy of various smartphones and a Digital Single-Lens Reflex (DSLR) camera against digital models obtained by intraoral and desktop scanners. Methods: Tooth height [...] Read more.
Background: Digital dental photography is increasingly essential for documentation and smile design. This study aimed to compare the linear measurement accuracy of various smartphones and a Digital Single-Lens Reflex (DSLR) camera against digital models obtained by intraoral and desktop scanners. Methods: Tooth height and width from six different casts were measured and compared using images acquired with a Canon EOS 250D DSLR, six smartphone models (iPhone 13, iPhone 15, Samsung Galaxy S22 Ultra, Samsung Galaxy S23 Ultra, Samsung Galaxy S24, and Vivo T2), and digital scans obtained from the Helios 500 intraoral scanner and the Ceramill Map 600 desktop scanner. All image measurements were performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA), and statistical analysis was conducted using one-way analysis of variance (ANOVA) with Tukey’s post hoc test (α = 0.05). Results: The results showed no significant differences in measurements across most imaging methods (p > 0.05), except for the Vivo T2, which showed a significant deviation (p < 0.05). The other smartphones produced measurements comparable to those of the DSLR, even at distances as close as 16 cm. Conclusions: These findings preliminary support the clinical use of smartphones for accurate dental documentation and two-dimensional smile design, including the posterior areas, and challenge the previously recommended 24 cm minimum distance for mobile dental photography (MDP). This provides clinicians with a simplified and accessible alternative for high-accuracy dental imaging, advancing the everyday use of MDP in clinical practice. Full article
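
The statistical comparison described above can be sketched with SciPy's one-way ANOVA; the per-device measurement deviations below are placeholders, not the study's data.

```python
# Minimal sketch of the one-way ANOVA comparison across imaging devices.
# The measurement deviations (mm) are placeholders, not the study's data.
from scipy.stats import f_oneway

dslr      = [0.10, 0.12, 0.08, 0.11, 0.09]
iphone_13 = [0.11, 0.13, 0.10, 0.12, 0.10]
vivo_t2   = [0.25, 0.30, 0.27, 0.26, 0.29]

f_stat, p_value = f_oneway(dslr, iphone_13, vivo_t2)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> at least one device differs
```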

15 pages, 1887 KiB  
Article
Multispectral Reconstruction in Open Environments Based on Image Color Correction
by Jinxing Liang, Xin Hu, Yifan Li and Kaida Xiao
Electronics 2025, 14(13), 2632; https://doi.org/10.3390/electronics14132632 - 29 Jun 2025
Viewed by 200
Abstract
Spectral reconstruction based on digital imaging has become an important way to obtain spectral images with high spatial resolution. The current research has made great strides in the laboratory; however, dealing with rapidly changing light sources, illumination, and imaging parameters in an open environment presents significant challenges for spectral reconstruction. This is because a spectral reconstruction model established under one set of imaging conditions is not suitable for use under different imaging conditions. In this study, considering the principle of multispectral reconstruction, we proposed a method of multispectral reconstruction in open environments based on image color correction. In the proposed method, a whiteboard is used as a medium to calculate the color correction matrices from an open environment and transfer them to the laboratory. After the digital image is corrected, its multispectral image can be reconstructed using the pre-established multispectral reconstruction model in the laboratory. The proposed method was tested in simulations and practical experiments using different datasets and illuminations. The results show that the root-mean-square error of the color chart is below 2.6% in the simulation experiment and below 6.0% in the practical experiment, which illustrates the efficiency of the proposed method.
(This article belongs to the Special Issue Image Fusion and Image Processing)
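
A minimal sketch of the two-step idea, with placeholder data: estimate a least-squares color-correction matrix that maps open-environment RGB to laboratory RGB, then apply a pre-established linear (pseudoinverse-style) reconstruction model to the corrected values. Neither the matrices nor the dimensions are taken from the paper.

```python
# Minimal sketch (placeholder data):
# (1) estimate a 3x3 color-correction matrix from chart/whiteboard RGBs so
#     open-environment images match the laboratory imaging condition, and
# (2) reconstruct spectra with a pre-trained linear (pseudoinverse) model.
import numpy as np

rgb_open = np.random.rand(24, 3)                    # chart RGBs in the open environment
M_true = np.array([[1.10, 0.02, 0.00],
                   [0.00, 0.95, 0.05],
                   [0.03, 0.00, 1.05]])
rgb_lab = rgb_open @ M_true.T                       # same patches under lab conditions

# Least-squares matrix mapping open-environment RGB -> lab RGB
M, *_ = np.linalg.lstsq(rgb_open, rgb_lab, rcond=None)

# Pre-trained reconstruction matrix W (3 channels -> 31 spectral bands), placeholder here
W = np.random.rand(31, 3)
rgb_corrected = rgb_open @ M
spectra = rgb_corrected @ W.T                       # reconstructed reflectance spectra
print(spectra.shape)                                # (24, 31)
```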

18 pages, 6678 KiB  
Article
HIEN: A Hybrid Interaction Enhanced Network for Horse Iris Super-Resolution
by Ao Zhang, Bin Guo, Xing Liu and Wei Liu
Appl. Sci. 2025, 15(13), 7191; https://doi.org/10.3390/app15137191 - 26 Jun 2025
Viewed by 256
Abstract
Horse iris recognition is a non-invasive identification method with great potential for precise management in intelligent horse farms. However, horses’ natural vigilance often leads to stress and resistance when exposed to close-range infrared cameras. This behavior makes it challenging to capture clear iris images, thereby reducing recognition performance. This paper addresses the challenge of generating high-resolution iris images from existing low-resolution counterparts. To this end, we propose a novel hybrid-architecture image super-resolution (SR) network. Central to our approach is the design of the Paired Asymmetric Transformer Block (PATB), which incorporates a Contextual Query Generator (CQG) to efficiently capture contextual information and model global feature interactions. Furthermore, we introduce an Efficient Residual Dense Block (ERDB), specifically engineered to effectively extract the finer-grained local features inherent in the image data. By integrating PATB and ERDB, our network achieves superior fusion of global contextual awareness and local detail information, thereby significantly enhancing the reconstruction quality of horse iris images. Experimental evaluations on our self-constructed dataset of horse irises demonstrate the effectiveness of the proposed method. In terms of standard image quality metrics, it achieves a PSNR of 30.5988 dB and an SSIM of 0.8552. Moreover, in terms of identity-recognition performance, the method achieves a Precision, Recall, and F1-Score of 81.48%, 74.38%, and 77.77%, respectively. This study provides a useful contribution to digital horse farm management and supports the ongoing development of smart animal husbandry.
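
The reported image-quality metrics can be reproduced on any image pair with scikit-image, as sketched below; the arrays are synthetic placeholders rather than iris images.

```python
# Minimal sketch of the reported metrics: PSNR and SSIM between a
# super-resolved image and its ground-truth counterpart (placeholder arrays).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128))
super_resolved = np.clip(ground_truth + rng.normal(0, 0.02, (128, 128)), 0, 1)

psnr = peak_signal_noise_ratio(ground_truth, super_resolved, data_range=1.0)
ssim = structural_similarity(ground_truth, super_resolved, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```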

16 pages, 10517 KiB  
Article
Beyond the Light Meter: A Case-Study on HDR-Derived Illuminance Calculations Using a Proxy-Lambertian Surface
by Jackson Hanus, Arpan Guha and Abdourahim Barry
Buildings 2025, 15(12), 2131; https://doi.org/10.3390/buildings15122131 - 19 Jun 2025
Viewed by 381
Abstract
Accurate illuminance measurements are critical in assessing lighting quality during post-occupancy evaluations, yet traditional methods are labor-intensive and time-consuming. This pilot study demonstrates an alternative that combines high dynamic range (HDR) imaging with a low-cost proxy-Lambertian surface to transform image luminance into spatial illuminance. Seven readily available materials were screened for luminance uniformity; the specimen with minimal deviation from Lambertian behavior (≈2%) was adopted as the pseudo-Lambertian surface. Calibrated HDR images of a fluorescent-lit university classroom were acquired with a digital single-lens reflex (DSLR) camera and processed in Photosphere, after which pixel luminance was converted to illuminance via the Lambertian approximation. Predicted illuminance values were benchmarked against spectral illuminance meter readings at 42 locations on horizontal work planes, vertical presentation surfaces, and the circulation floor. The average errors were 5.20% for desks and 6.40% for the whiteboard—well below the 10% acceptance threshold for design validation—while the projector-screen and floor measurements exhibited slightly higher discrepancies of 9.90% and 14.40%, respectively. The proposed workflow significantly reduces the cost, complexity, and duration of lighting assessments, presenting a promising tool for streamlined, accurate post-occupancy evaluations. Future work may focus on refining this approach for diverse lighting conditions and complex material interactions.
(This article belongs to the Special Issue Lighting in Buildings—2nd Edition)
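
The core conversion implied above is the Lambertian relation E = πL/ρ between surface luminance and incident illuminance; the sketch below applies it with placeholder luminance, reflectance, and meter values.

```python
# Minimal sketch of the Lambertian conversion: for an ideal diffuse surface
# with reflectance rho, illuminance E (lux) relates to luminance L (cd/m^2)
# by E = pi * L / rho. All numeric values below are placeholders.
import math

def illuminance_from_luminance(luminance_cd_m2, reflectance):
    """Convert HDR-derived luminance on a Lambertian target to illuminance."""
    return math.pi * luminance_cd_m2 / reflectance

rho = 0.85                       # measured reflectance of the proxy surface
L_desk = 135.0                   # luminance read from the calibrated HDR image
E_pred = illuminance_from_luminance(L_desk, rho)

E_meter = 475.0                  # hypothetical light-meter reading (lux)
error_pct = abs(E_pred - E_meter) / E_meter * 100
print(f"predicted {E_pred:.0f} lx, error {error_pct:.1f}%")
```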

28 pages, 1707 KiB  
Review
Video Stabilization: A Comprehensive Survey from Classical Mechanics to Deep Learning Paradigms
by Qian Xu, Qian Huang, Chuanxu Jiang, Xin Li and Yiming Wang
Modelling 2025, 6(2), 49; https://doi.org/10.3390/modelling6020049 - 17 Jun 2025
Viewed by 912
Abstract
Video stabilization is a critical technology for enhancing video quality by eliminating or reducing image instability caused by camera shake, thereby improving the visual viewing experience. It has been deeply integrated into diverse applications—including handheld recording, UAV aerial photography, and vehicle-mounted surveillance. Propelled by advances in deep learning, data-driven stabilization methods have emerged as prominent solutions, demonstrating superior efficacy in handling jitter while achieving enhanced processing efficiency. This review systematically examines the field of video stabilization. First, this paper delineates the paradigm shift from classical to deep learning-based approaches. Subsequently, it elucidates conventional digital stabilization frameworks and their deep learning counterparts, along with establishing standardized assessment metrics and benchmark datasets for comparative analysis. Finally, this review addresses critical challenges such as robustness limitations in complex motion scenarios and latency constraints in real-time processing. By integrating interdisciplinary perspectives, this work provides scholars with academically rigorous and practically relevant insights to advance video stabilization research.
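
As a generic illustration of the conventional digital-stabilization framework such surveys cover (feature tracking, trajectory smoothing, re-warping), and not any specific method reviewed here, a compact OpenCV sketch follows; it assumes `frames` is a list of BGR images with enough trackable texture.

```python
# Minimal sketch of a classical stabilization pipeline: track features,
# accumulate the camera trajectory, smooth it, and re-warp each frame.
import cv2
import numpy as np

def stabilize(frames, radius=5):
    gray_prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    transforms = []                                       # per-frame (dx, dy, dtheta)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(gray_prev, 200, 0.01, 30)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray, pts, None)
        good = status.ravel() == 1
        m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
        transforms.append([m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])])
        gray_prev = gray

    trajectory = np.cumsum(transforms, axis=0)            # raw camera path
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smoothed = np.array([np.convolve(trajectory[:, i], kernel, mode="same")
                         for i in range(3)]).T
    corrections = np.array(transforms) + (smoothed - trajectory)

    h, w = frames[0].shape[:2]
    out = [frames[0]]
    for frame, (dx, dy, da) in zip(frames[1:], corrections):
        m = np.array([[np.cos(da), -np.sin(da), dx],
                      [np.sin(da),  np.cos(da), dy]], dtype=np.float32)
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out
```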

24 pages, 4250 KiB  
Article
Joint Exploitation of Physical-Layer and Artificial Features for Privacy-Preserving Distributed Source Camera Identification
by Hui Tian, Haibao Chen, Yuyan Zhao and Jiawei Zhang
Future Internet 2025, 17(6), 260; https://doi.org/10.3390/fi17060260 - 13 Jun 2025
Cited by 1 | Viewed by 331
Abstract
Identifying the source camera of a digital image is a critical task for ensuring image authenticity. In this paper, we propose a novel privacy-preserving distributed source camera identification scheme that jointly exploits both physical-layer fingerprint features and a carefully designed artificial tag. Specifically, we build a hybrid fingerprint model by combining sensor-level hardware fingerprints with artificial tag features to characterize the unique identity of the camera in a digital image. To address privacy concerns, the proposed scheme incorporates a privacy-preserving strategy that encrypts not only the hybrid fingerprint parameters but also the image content itself. Furthermore, within the distributed framework, the identification task performed by a single secondary user is formulated as a binary hypothesis testing problem. Experimental results demonstrated the effectiveness of the proposed scheme in accurately identifying source cameras, particularly under complex conditions such as those involving images processed by social media platforms. Notably, for social media platform identification, our method achieved average accuracy improvements of 7.19% on the Vision dataset and 8.87% on the Forchheim dataset compared to a representative baseline.
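
A heavily simplified stand-in for sensor-fingerprint identification (not the paper's hybrid, privacy-preserving scheme) is sketched below: correlate an image's wavelet-denoising noise residual with a reference fingerprint and threshold the correlation as a binary hypothesis test; the fingerprint, scene, and threshold are all placeholders.

```python
# Minimal sketch of a sensor-fingerprint style check: compare an image's
# noise residual against a reference fingerprint via normalized correlation.
# Placeholder data; real use needs a fingerprint estimated from many images.
import numpy as np
from skimage.restoration import denoise_wavelet

def noise_residual(image):
    """High-frequency residual left after wavelet denoising."""
    return image - denoise_wavelet(image, rescale_sigma=True)

def matches_camera(image, fingerprint, threshold=0.01):
    residual = noise_residual(image)
    r = residual - residual.mean()
    f = fingerprint - fingerprint.mean()
    ncc = np.sum(r * f) / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12)
    return ncc, ncc > threshold                      # H1: image came from this camera

rng = np.random.default_rng(1)
fingerprint = rng.normal(0, 0.01, (64, 64))          # camera PRNU estimate (placeholder)
x = np.linspace(0, 1, 64)
scene = np.outer(x, x)                               # smooth placeholder scene
image = np.clip(scene * (1 + fingerprint), 0, 1)     # sensor noise acts multiplicatively
ncc, decision = matches_camera(image, fingerprint)
print(f"correlation {ncc:.3f}, same camera: {decision}")
```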

6 pages, 2735 KiB  
Proceeding Paper
Digital Imaging Inspection System for Aluminum Case Grinding Quality Control of Solid-State Drive
by Chun-Jen Chen and Cheng-Feng Tsai
Eng. Proc. 2025, 92(1), 96; https://doi.org/10.3390/engproc2025092096 - 11 Jun 2025
Viewed by 322
Abstract
Enterprises and data centers do not use M.2 SATA drives because of cooling problems. Therefore, SSDs employ metal cases similar to those of traditional 2.5” or 3.5” hard disks. The metal case is made of aluminum, which must be ground after the metal plate forming process. Conventionally, the ground quality of aluminum cases is checked manually; this method is not accurate, and the data are difficult to digitize. To improve the speed and efficiency of quality control, we established a digital imaging-based inspection system for aluminum case grinding quality control. The inspection system consists of a digital industrial camera, a closed-circuit TV lens, a light-emitting diode (LED) light source, and a personal computer. Excluding loading and unloading time, the test time is less than five seconds per case. When the tested case is loaded into the inspection system, the camera captures images and sends them to the computer, where they are processed to evaluate the grinding quality and record the results. The tested case is then classified by a robot or an operator.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)

22 pages, 18370 KiB  
Article
Digital Domain TDI-CMOS Imaging Based on Minimum Search Domain Alignment
by Han Liu, Shuping Tao, Qinping Feng and Zongxuan Li
Sensors 2025, 25(11), 3490; https://doi.org/10.3390/s25113490 - 31 May 2025
Viewed by 559
Abstract
In this study, we propose a digital domain TDI-CMOS dynamic imaging method based on minimum search domain alignment, which consists of five steps: image-motion vector computation, image jitter estimation, feature pair matching, global displacement estimation, and TDI accumulation. To solve the challenge of matching feature point pairs in dark and low-contrast images, our method first optimizes the size and position of the search box using an image motion compensation mathematical model and a satellite platform jitter model. Then, the feature point pairs that best match the extracted feature points of the reference frame are identified within the search box of the target frame. After that, a kernel density estimation algorithm is proposed for calculating the displacement probability density of each feature point pair to fit the actual displacement between two frames. Finally, we align and superimpose all the frames in the digital domain to generate a delayed integral image. Experimental results show that this method greatly improves the alignment speed and accuracy of dark and low-contrast images during dynamic imaging. It effectively mitigates the effects of image motion and jitter from the space camera: the fitted global image-motion error is kept below 0.01 pixels, and after compensation the MTF coefficient of the image-motion and jitter link improves to 0.68, thus improving the TDI imaging quality.
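
The kernel-density step described above can be sketched as follows: fit a KDE to the per-feature-pair displacements and take its mode as the global shift between frames; the displacement values below are synthetic placeholders.

```python
# Minimal sketch of the kernel-density step: given noisy per-feature-pair
# displacements between two frames, fit a KDE and take its mode as the
# global displacement estimate. Placeholder values only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
true_shift = 3.62                                        # pixels, along-track
displacements = true_shift + rng.normal(0, 0.15, 80)     # matched feature pairs
displacements[:8] += rng.uniform(-2, 2, 8)               # a few mismatches

kde = gaussian_kde(displacements)
grid = np.linspace(displacements.min(), displacements.max(), 1000)
global_shift = grid[np.argmax(kde(grid))]                # density peak = best estimate
print(f"estimated shift: {global_shift:.3f} px")
```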

14 pages, 5528 KiB  
Article
From Google Earth Studio to Hologram: A Pipeline for Architectural Visualization
by Philippe Gentet, Tam Le Phuc Do, Jumamurod Farhod Ugli Aralov, Oybek Mirzaevich Narzulloev, Leehwan Hwang and Seunghyun Lee
Appl. Sci. 2025, 15(11), 6179; https://doi.org/10.3390/app15116179 - 30 May 2025
Viewed by 574
Abstract
High-resolution holographic visualization of built environments remains largely inaccessible due to the complexity and technical demands of traditional 3D data acquisition processes. This study proposes a workflow for producing high-quality full-color digital holographic stereograms of architectural landmarks using Google Earth Studio. By leveraging photogrammetrically reconstructed three-dimensional (3D) city models and a controlled camera path, we generated perspective image sequences of two iconic monuments, that is, the Basílica de la Sagrada Família (Barcelona, Spain) and the Arc de Triomphe (Paris, France). A custom pipeline was implemented to compute keyframe coordinates, extract cinematic image sequences, and convert them into histogram data suitable for CHIMERA holographic printing. The holograms were recorded on Ultimate U04 silver halide plates and illuminated with RGB light-emitting diodes, yielding visually immersive reconstructions with strong parallax effects and color fidelity. This method circumvented the requirement for physical 3D scanning, thereby enabling scalable and cost-effective holography using publicly available 3D datasets. In conclusion, the findings indicate the potential of combining Earth Studio with digital holography for urban visualization, cultural heritage preservation, and educational displays.
(This article belongs to the Topic 3D Documentation of Natural and Cultural Heritage)
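
As a sketch of the keyframe-computation step (assumed geometry, not the authors' pipeline), the snippet below generates evenly spaced camera positions orbiting a landmark at a fixed radius and altitude, using a simple equirectangular approximation for the metre-to-degree conversion.

```python
# Minimal sketch: evenly spaced orbit keyframes around a landmark, as
# coordinates that could seed an Earth Studio camera track. The landmark
# coordinates are illustrative; the flat-earth offset approximation is
# adequate only for small orbit radii (a few hundred metres).
import math

def orbit_keyframes(lat, lon, radius_m=300.0, altitude_m=120.0, n=60):
    keyframes = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        dlat = (radius_m * math.cos(theta)) / 111_320.0            # metres -> degrees
        dlon = (radius_m * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
        keyframes.append((lat + dlat, lon + dlon, altitude_m))
    return keyframes

# Example: the first few keyframes of a 60-frame orbit around the Arc de Triomphe
for lat, lon, alt in orbit_keyframes(48.8738, 2.2950)[:3]:
    print(f"{lat:.6f}, {lon:.6f}, {alt:.1f} m")
```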
