Search Results (51)

Search Parameters:
Keywords = luminance reconstruction

17 pages, 3856 KiB  
Article
Wavelet Fusion with Sobel-Based Weighting for Enhanced Clarity in Underwater Hydraulic Infrastructure Inspection
by Minghui Zhang, Jingkui Zhang, Jugang Luo, Jiakun Hu, Xiaoping Zhang and Juncai Xu
Appl. Sci. 2025, 15(14), 8037; https://doi.org/10.3390/app15148037 - 18 Jul 2025
Viewed by 301
Abstract
Underwater inspection images of hydraulic structures often suffer from haze, severe color distortion, low contrast, and blurred textures, impairing the accuracy of automated crack, spalling, and corrosion detection. However, many existing enhancement methods fail to preserve structural details and suppress noise in turbid environments. To address these limitations, we propose a compact image enhancement framework called Wavelet Fusion with Sobel-based Weighting (WWSF). This method first corrects global color and luminance distributions using multiscale Retinex and gamma mapping, followed by local contrast enhancement via CLAHE in the L channel of the CIELAB color space. Two preliminarily corrected images are decomposed using discrete wavelet transform (DWT); low-frequency bands are fused based on maximum energy, while high-frequency bands are adaptively weighted by Sobel edge energy to highlight structural features and suppress background noise. The enhanced image is reconstructed via inverse DWT. Experiments on real-world sluice gate datasets demonstrate that WWSF outperforms six state-of-the-art methods, achieving the highest scores on UIQM and AG while remaining competitive on entropy (EN). Moreover, the method retains strong robustness under high turbidity conditions (T ≥ 35 NTU), producing sharper edges, more faithful color representation, and improved texture clarity. These results indicate that WWSF is an effective preprocessing tool for downstream tasks such as segmentation, defect classification, and condition assessment of hydraulic infrastructure in complex underwater environments.
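For readers who want to prototype the fusion stage sketched in this abstract, the following minimal Python example (assuming OpenCV and PyWavelets; all function names, wavelet choices, and parameters are illustrative, not the authors' implementation) shows CLAHE applied to the CIELAB L channel and a DWT fusion in which low-frequency bands are selected by maximum energy and high-frequency bands are blended with Sobel-based weights.

```python
# Hypothetical sketch of a WWSF-style fusion stage; grayscale inputs assumed.
import cv2
import numpy as np
import pywt

def clahe_on_l_channel(bgr):
    """Local contrast enhancement in the L channel of CIELAB (one of the two
    pre-corrected inputs described in the abstract)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def sobel_energy(band):
    """Edge energy of a wavelet sub-band, used as a fusion weight."""
    gx = cv2.Sobel(band, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(band, cv2.CV_64F, 0, 1, ksize=3)
    return gx ** 2 + gy ** 2

def fuse_gray(img1, img2, wavelet="db2"):
    """Fuse two pre-corrected grayscale images in the wavelet domain."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1.astype(np.float64), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2.astype(np.float64), wavelet)
    cA = np.where(cA1 ** 2 >= cA2 ** 2, cA1, cA2)   # max-energy low-frequency
    high = []
    for b1, b2 in zip((cH1, cV1, cD1), (cH2, cV2, cD2)):
        w1, w2 = sobel_energy(b1), sobel_energy(b2)
        w = w1 / (w1 + w2 + 1e-12)                   # adaptive Sobel weighting
        high.append(w * b1 + (1 - w) * b2)
    return pywt.idwt2((cA, tuple(high)), wavelet)    # inverse DWT
```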

23 pages, 7532 KiB  
Article
Real-Time Aerial Multispectral Object Detection with Dynamic Modality-Balanced Pixel-Level Fusion
by Zhe Wang and Qingling Zhang
Sensors 2025, 25(10), 3039; https://doi.org/10.3390/s25103039 - 12 May 2025
Viewed by 736
Abstract
Aerial object detection plays a critical role in numerous fields, utilizing the flexibility of airborne platforms to achieve real-time tasks. Combining visible and infrared sensors can overcome limitations under low-light conditions, enabling full-time tasks. While feature-level fusion methods exhibit comparable performances in visible–infrared multispectral object detection, they suffer from large model sizes, inadequate inference speed, and a bias toward the visible modality caused by inherent modality imbalance, limiting their deployment on airborne platforms. To address these challenges, this paper proposes a YOLO-based real-time multispectral fusion framework combining pixel-level fusion with dynamic modality-balanced augmentation, called the Full-time Multispectral Pixel-wise Fusion Network (FMPFNet). First, we introduce the Multispectral Luminance Weighted Fusion (MLWF) module, consisting of attention-based modality reconstruction and feature fusion. By leveraging YUV color space transformation, this module efficiently fuses RGB and IR modalities while minimizing computational overhead. We also propose the Dynamic Modality Dropout and Threshold Masking (DMDTM) strategy, which balances modality attention and improves detection performance in low-light scenarios. Additionally, we refine our model to enhance the detection of small rotated objects, a requirement commonly encountered in aerial detection applications. Experimental results on the DroneVehicle dataset demonstrate that our FMPFNet achieves 76.80% mAP50 and 132 FPS, outperforming state-of-the-art feature-level fusion methods in both accuracy and inference speed.
(This article belongs to the Section Remote Sensors)
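As a rough illustration of the pixel-level YUV fusion idea behind the MLWF module, here is a hedged Python/OpenCV sketch; the per-pixel weight is a simple hand-crafted rule standing in for the paper's attention-based modality reconstruction, and nothing below is taken from FMPFNet itself.

```python
# Illustrative RGB-IR fusion in YUV space; the weighting rule is an assumption.
import cv2
import numpy as np

def yuv_luminance_fusion(bgr, ir, alpha=None):
    """Fuse an RGB frame with an aligned single-channel IR frame.

    Only the luminance (Y) plane is mixed; chrominance (U, V) is kept from the
    visible image, which keeps the computational overhead low.
    """
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    ir = ir.astype(np.float32)
    if alpha is None:
        # Trust IR more where visible luminance is low (e.g., night scenes);
        # a learned attention map would replace this heuristic.
        alpha = 1.0 - yuv[..., 0] / 255.0
    yuv[..., 0] = (1.0 - alpha) * yuv[..., 0] + alpha * ir
    return cv2.cvtColor(np.clip(yuv, 0, 255).astype(np.uint8), cv2.COLOR_YUV2BGR)
```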

14 pages, 1662 KiB  
Article
Morphometry of Intracranial Carotid Artery Calcifications in Patients with Recent Cerebral Ischemia
by Bernhard P. Berghout, Federica Fontana, Fennika Huijben, Suze-Anne Korteland, M. Eline Kooi, Paul J. Nederkoorn, Pim A. de Jong, Frank J. Gijsen, Selene Pirola, M. Kamran Ikram, Daniel Bos and Ali C. Akyildiz
J. Clin. Med. 2025, 14(10), 3274; https://doi.org/10.3390/jcm14103274 - 8 May 2025
Viewed by 753
Abstract
Background: Intracranial artery calcification detected on CT imaging is a recognized risk factor for ischemic cerebrovascular diseases, but the underlying etiology of this association remains unclear. Differences in objective morphometric characteristics of these calcifications may partially explain this association, yet such measurements are largely absent from the literature. We investigated intracranial artery calcification morphometry in patients with recent anterior ischemic stroke or TIA, assessing potential differences between calcifications in the intracranial carotid arteries (ICAs) located ipsilateral and contralateral to the cerebral ischemia. Methods: Among 100 patients (mean age 69.6 (SD 8.8) years) presenting to academic neurology departments, 3D reconstructions of both ICAs were based on clinical CT-angiography images. On these reconstructions, a luminal centerline and cross-sections perpendicular to this centerline were created, facilitating the assessment of calcification morphometry, spatial orientation, and stenosis severity. Differences in calcification characteristics between ICAs were assessed using two-sided Wilcoxon signed-rank and χ² tests. Results: Among 200 arteries, a median of four (IQR 2–6) individual calcifications were counted, with a mean area of 1.8 (IQR 1.2–2.7) mm², a mean arc width of 43.5 (IQR 32.3–53.2) degrees, and a median longitudinal extent of 15.4 (IQR 5.9–27.0) mm. Calcifications were most often present in the anatomical C4 section (56.0%) and predominantly had a posterosuperior orientation (38.5%); 42.0% had a local stenosis severity > 70%. None of these aspects differed significantly between ICAs, and this remained the case after restricting the analyses to patients with an undetermined stroke etiology. Conclusions: We found no differences in the morphometric or spatial characteristics of calcifications between ICAs ipsilateral and contralateral to the cerebral ischemia.
(This article belongs to the Special Issue New Insights into Brain Calcification)
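The paired comparison the authors describe can be reproduced in outline with SciPy; the sketch below uses simulated, hypothetical per-patient values and a made-up orientation table purely to show the two-sided Wilcoxon signed-rank and χ² calls, not the study's data.

```python
# Hypothetical data; only the statistical calls mirror the abstract's methods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated per-patient mean calcification area (mm^2), ipsi vs. contralateral.
ipsilateral = rng.gamma(shape=2.0, scale=1.0, size=100)
contralateral = rng.gamma(shape=2.0, scale=1.0, size=100)
stat, p_area = stats.wilcoxon(ipsilateral, contralateral, alternative="two-sided")
print(f"Wilcoxon signed-rank p = {p_area:.3f}")

# Simulated 2x4 table of dominant calcification orientation (ipsi vs. contra).
table = np.array([[38, 25, 20, 17],
                  [36, 27, 22, 15]])
chi2, p_orient, dof, _ = stats.chi2_contingency(table)
print(f"Chi-square p = {p_orient:.3f}")
```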

20 pages, 49431 KiB  
Article
Generative Adversarial Network-Based Lightweight High-Dynamic-Range Image Reconstruction Model
by Gustavo de Souza Ferreti, Thuanne Paixão and Ana Beatriz Alvarez
Appl. Sci. 2025, 15(9), 4801; https://doi.org/10.3390/app15094801 - 25 Apr 2025
Cited by 1 | Viewed by 661
Abstract
The generation of High-Dynamic-Range (HDR) images is essential for capturing details at various brightness levels, but current reconstruction methods based on deep learning often require significant computational resources, limiting their applicability on devices with moderate resources. In this context, this paper presents a lightweight architecture for reconstructing HDR images from three Low-Dynamic-Range inputs. The proposed model is based on Generative Adversarial Networks and replaces traditional convolutions with depthwise separable convolutions, reducing the number of parameters while maintaining high visual quality and minimizing luminance artifacts. The proposal is evaluated through quantitative, qualitative, and computational cost analyses, the latter based on the number of parameters and FLOPs. For the qualitative analysis, the models were compared on samples that present reconstruction challenges. The proposed model achieves a PSNR-μ of 43.51 dB and an SSIM-μ of 0.9917, quality comparable to HDR-GAN, while reducing the computational cost by 6× in FLOPs and 7× in the number of parameters and using approximately half the GPU memory, demonstrating an effective balance between visual fidelity and efficiency.
(This article belongs to the Special Issue Advances in Image Recognition and Processing Technologies)
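The parameter saving from swapping standard convolutions for depthwise separable ones, which the abstract credits for the model's light weight, can be checked with a few lines of PyTorch; the layer sizes below are arbitrary examples, not the paper's architecture.

```python
# Depthwise-separable vs. standard convolution: quick parameter-count check.
import torch.nn as nn

def standard_conv(c_in, c_out, k=3):
    return nn.Conv2d(c_in, c_out, k, padding=k // 2)

def depthwise_separable_conv(c_in, c_out, k=3):
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in),  # depthwise
        nn.Conv2d(c_in, c_out, 1),                               # pointwise
    )

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print(n_params(standard_conv(64, 64)))             # 36,928 parameters
print(n_params(depthwise_separable_conv(64, 64)))  # 4,800 parameters
```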

29 pages, 6510 KiB  
Article
Energy-Efficient Design of Immigrant Resettlement Housing in Qinghai: Solar Energy Utilization, Sunspace Temperature Control, and Envelope Optimization
by Bo Liu, Yu Liu, Qianlong Xin, Xiaomei Kou and Jie Song
Buildings 2025, 15(9), 1434; https://doi.org/10.3390/buildings15091434 - 24 Apr 2025
Cited by 1 | Viewed by 459
Abstract
Qinghai Province urgently requires the development of adaptive, energy-efficient rural housing to address resettlement needs arising from hydropower projects, given the region's characteristic combination of high solar irradiance and severe cold climate conditions. This research establishes localized retrofit strategies through systematic field investigations and Rhinoceros modeling simulations of five representative rural residences across four villages. The key findings reveal that comprehensive building envelope retrofits achieve an 80% reduction in energy consumption. South-facing sunspaces demonstrate effective thermal buffering capacity, though their spatial depth exhibits negligible correlation with heating energy requirements. An optimized hybrid shading system combining roof overhangs and vertical louvers proves highly effective in mitigating summer overheating, with the vertical louvers providing more precise thermal and luminous regulation. Architectural orientation analysis identifies an optimal alignment within ±10° of true south, emphasizing the functional zoning principle of positioning primary living spaces in south-oriented ground-floor areas while locating auxiliary functions in northeastern/northwestern zones. The integrated design framework synergizes three core components: passive solar optimization, climate-responsive shading mechanisms, and performance-enhanced envelope systems, achieving simultaneous improvements in energy efficiency and thermal comfort within resettlement housing constraints. This methodology establishes a replicable paradigm for climate-resilient rural architecture in high-altitude, solar-intensive cold regions, effectively reconciling community reconstruction needs with low-carbon development imperatives through context-specific technical solutions.

12 pages, 3195 KiB  
Article
Subtraction CT Angiography for the Evaluation of Lower Extremity Artery Disease with Severe Arterial Calcification
by Ryoichi Tanaka and Kunihiro Yoshioka
J. Cardiovasc. Dev. Dis. 2025, 12(4), 131; https://doi.org/10.3390/jcdd12040131 - 2 Apr 2025
Cited by 1 | Viewed by 857
Abstract
(1) Background: Peripheral arterial CT angiography (CTA) is an alternative to conventional angiography for diagnosing lower extremity artery disease (LEAD). However, severe arterial calcifications often hinder accurate assessment of arterial stenosis. This study evaluated the diagnostic performance of subtraction CTA with volume position matching compared to conventional CTA, using invasive digital subtraction angiography (DSA) as the gold standard. (2) Methods: Thirty-two patients with LEAD (mean age: 69.6 ± 10.8 years; M/F = 28:4) underwent subtraction CTA and DSA. The arterial tree was divided into 20 segments per patient, excluding segments with a history of bypass surgery. Subtraction was performed separately for each limb using volume position matching. Maximum intensity projections were reconstructed from both conventional and subtraction CTA data. Percent stenosis per arterial segment was measured using calipers and compared with DSA. Segments were classified as stenotic (>50% luminal narrowing) or not, with heavily calcified or stented segments assigned as incorrect. (3) Results: Of 640 segments, 636 were analyzed. Subtraction CTA and conventional CTA left 13 (2.0%) and 160 (25.2%) segments uninterpretable, respectively. Diagnostic accuracies (accuracy, precision, recall, macro F1 score) for subtraction CTA were 0.885, 0.884, 0.936, and 0.909, compared to 0.657, 0.744, 0.675, and 0.708 for conventional CTA. (4) Conclusions: Subtraction CTA with volume position matching is feasible and achieves high diagnostic accuracy in patients with severe calcific sclerosis.
(This article belongs to the Special Issue Clinical Applications of Cardiovascular Computed Tomography (CT))
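The per-segment figures of merit quoted above (accuracy, precision, recall, macro F1) follow directly from a confusion matrix against the DSA reference; the toy labels below are hypothetical and only illustrate the metric calls.

```python
# Hypothetical stenotic (1) / non-stenotic (0) labels per arterial segment.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_dsa = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]   # DSA reference standard
y_cta = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1]   # CTA reading

print("accuracy :", accuracy_score(y_dsa, y_cta))
print("precision:", precision_score(y_dsa, y_cta))
print("recall   :", recall_score(y_dsa, y_cta))
print("macro F1 :", f1_score(y_dsa, y_cta, average="macro"))
```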

22 pages, 5386 KiB  
Article
A Novel Multi-Sensor Nonlinear Tightly-Coupled Framework for Composite Robot Localization and Mapping
by Lu Chen, Amir Hussain, Yu Liu, Jie Tan, Yang Li, Yuhao Yang, Haoyuan Ma, Shenbing Fu and Gun Li
Sensors 2024, 24(22), 7381; https://doi.org/10.3390/s24227381 - 19 Nov 2024
Cited by 3 | Viewed by 1332
Abstract
Composite robots often encounter difficulties due to changes in illumination, external disturbances, reflective surface effects, and cumulative errors. These challenges significantly hinder their capabilities in environmental perception and the accuracy and reliability of pose estimation. To overcome these issues, we propose a nonlinear optimization approach and develop an integrated localization and navigation framework, IIVL-LM (IMU, Infrared, Vision, and LiDAR Fusion for Localization and Mapping). This framework achieves tightly coupled integration at the data level using inputs from an IMU (Inertial Measurement Unit), an infrared camera, an RGB (Red, Green and Blue) camera, and LiDAR. We propose a real-time luminance calculation model and verify its conversion accuracy. Additionally, we design a fast approximation method for the nonlinear weighted fusion of features from infrared and RGB frames based on luminance values. Finally, we optimize the VIO (Visual-Inertial Odometry) module in the R3LIVE++ (Robust, Real-time, Radiance Reconstruction with LiDAR-Inertial-Visual state Estimation) framework based on the infrared camera's capability to acquire depth information. In a controlled study using a simulated indoor rescue scenario dataset, the IIVL-LM system demonstrated significant performance gains under challenging luminance conditions, particularly in low-light environments. Specifically, the average RMSE of the absolute trajectory error (ATE) improved by 23% to 39%, with absolute reductions ranging from 0.006 to 0.013. We also conducted comparative experiments on the publicly available TUM-VI (Technical University of Munich Visual-Inertial) dataset without the infrared image input; in that setting the system no longer achieved leading results, which confirms the importance of infrared image fusion. By maintaining the active engagement of at least three sensors at all times, the IIVL-LM system significantly boosts its robustness in both unknown and expansive environments while ensuring high precision. This enhancement is particularly critical for applications in complex environments, such as indoor rescue operations.
(This article belongs to the Special Issue New Trends in Optical Imaging and Sensing Technologies)
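The paper's real-time luminance model is not spelled out in the abstract, so the sketch below substitutes the standard Rec. 601 luma and a sigmoid gate as stand-ins to show how a scene-luminance value could drive a nonlinear RGB/infrared feature weighting; treat every constant as an assumption.

```python
# Stand-in luminance estimate and luminance-driven fusion weight (assumptions).
import numpy as np

def mean_luminance(rgb):
    """Scene luminance estimate from an RGB frame with values in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))  # Rec. 601 luma

def fuse_features(feat_rgb, feat_ir, luminance, l_mid=60.0, k=0.1):
    """Down-weight RGB features as the scene gets darker (sigmoid gate)."""
    w_rgb = 1.0 / (1.0 + np.exp(-k * (luminance - l_mid)))
    return w_rgb * feat_rgb + (1.0 - w_rgb) * feat_ir
```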

15 pages, 7263 KiB  
Article
Reconstructing High Dynamic Range Image from a Single Low Dynamic Range Image Using Histogram Learning
by Huei-Yung Lin, Yi-Rung Lin, Wen-Chieh Lin and Chin-Chen Chang
Appl. Sci. 2024, 14(21), 9847; https://doi.org/10.3390/app14219847 - 28 Oct 2024
Viewed by 1774
Abstract
High dynamic range imaging is an important field in computer vision. Compared with general low dynamic range (LDR) images, high dynamic range (HDR) images represent a larger luminance range, making them closer to the real scene. In this paper, we propose an approach for HDR image reconstruction from a single LDR image based on histogram learning. First, the dynamic range of the LDR image is expanded to an extended dynamic range (EDR) image. Then, histogram learning is used to predict the intensity distribution of the corresponding HDR image from the EDR image. Next, we use histogram matching to reallocate pixel intensities. The final HDR image is generated through regional adjustment using reinforcement learning. By decomposing low-frequency and high-frequency information, the proposed network can predict the lost high-frequency details while expanding the intensity ranges. We conduct experiments on the HDR-Real and HDR-EYE datasets. The quantitative and qualitative evaluations demonstrate the effectiveness of the proposed approach compared to previous methods.
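The histogram-matching step that reallocates pixel intensities can be written as classical CDF matching; the sketch below assumes an 8-bit source image and a 256-bin target histogram (which, in the paper, would come from the learned histogram prediction) and is not the authors' code.

```python
# CDF-based histogram matching toward a predicted target histogram.
import numpy as np

def match_histogram(src, target_hist):
    """Remap a uint8 image so its histogram follows target_hist (256 bins)."""
    src_hist, _ = np.histogram(src.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / src.size
    tgt_cdf = np.cumsum(target_hist) / np.sum(target_hist)
    # For each source level, pick the target level with the closest CDF value.
    lut = np.clip(np.searchsorted(tgt_cdf, src_cdf), 0, 255).astype(np.uint8)
    return lut[src]
```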

21 pages, 56384 KiB  
Article
Underwater Image Enhancement Based on Luminance Reconstruction by Multi-Resolution Fusion of RGB Channels
by Yi Wang, Zhihua Chen, Guoxu Yan, Jiarui Zhang and Bo Hu
Sensors 2024, 24(17), 5776; https://doi.org/10.3390/s24175776 - 5 Sep 2024
Cited by 1 | Viewed by 1592
Abstract
Underwater image enhancement technology is crucial for the human exploration and exploitation of marine resources. The visibility of underwater images is degraded by visible light attenuation. This paper proposes an image reconstruction method based on the decomposition–fusion of multi-channel luminance data to enhance the visibility of underwater images. The proposed method is a single-image approach, chosen because paired underwater images are difficult to obtain. The original image is first divided into its three RGB channels. To reduce artifacts and inconsistencies in the fused images, a multi-resolution fusion process based on the Laplace–Gaussian pyramid guided by a weight map is employed. Image saliency analysis and mask sharpening methods are also introduced to color-correct the fused images. The results indicate that the method presented in this paper effectively enhances the visibility of dark regions in the original image and globally improves its color, contrast, and sharpness compared to current state-of-the-art methods. Our method can enhance underwater images in engineering practice, laying the foundation for in-depth research on underwater images.
(This article belongs to the Special Issue Underwater Vision Sensing System)
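A weight-guided Laplacian-Gaussian pyramid blend of the kind the abstract describes can be prototyped with OpenCV as follows; inputs are assumed to be single-channel luminance maps of identical size, and all parameters are illustrative rather than the authors' settings.

```python
# Weight-map-guided multi-resolution (Laplacian-Gaussian pyramid) fusion sketch.
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float32)]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
          for i in range(levels)]
    lp.append(gp[-1])
    return lp

def fuse(images, weights, levels=4):
    """Blend single-channel images with per-pixel weight maps at every level."""
    w_sum = np.sum(weights, axis=0) + 1e-12
    weights = [w / w_sum for w in weights]
    fused = None
    for img, w in zip(images, weights):
        blended = [l * g for l, g in zip(laplacian_pyramid(img, levels),
                                         gaussian_pyramid(w, levels))]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]                      # collapse the fused pyramid
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
    return np.clip(out, 0, 255).astype(np.uint8)
```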

17 pages, 1003 KiB  
Article
Autoencoder-Based Unsupervised Surface Defect Detection Using Two-Stage Training
by Tesfaye Getachew Shiferaw and Li Yao
J. Imaging 2024, 10(5), 111; https://doi.org/10.3390/jimaging10050111 - 5 May 2024
Cited by 7 | Viewed by 4620
Abstract
Accurately detecting defects while reconstructing a high-quality normal background in surface defect detection using unsupervised methods remains a significant challenge. This study proposes an unsupervised method that effectively addresses this challenge by achieving both accurate defect detection and a high-quality, noise-free reconstruction of the normal background. We propose an adaptive weighted structural similarity (AW-SSIM) loss for focused feature learning. AW-SSIM improves the structural similarity (SSIM) loss by assigning different weights to its sub-functions of luminance, contrast, and structure based on their relative importance for a specific training sample. Moreover, it dynamically adjusts the Gaussian window's standard deviation (σ) during loss calculation to balance noise reduction and detail preservation. An artificial defect generation algorithm (ADGA) is proposed to generate artificial defects that closely resemble real ones. We use a two-stage training strategy. In the first stage, the model trains only on normal samples using the AW-SSIM loss, allowing it to learn robust representations of normal features. In the second stage, the weights obtained from the first stage are used to train the model on both normal and artificially defective training samples. Additionally, the second stage employs a combined Learned Perceptual Image Patch Similarity (LPIPS) and AW-SSIM loss. The combined loss helps the model achieve a high-quality normal background reconstruction while maintaining accurate defect detection. Extensive experimental results demonstrate that the proposed method achieves state-of-the-art defect detection accuracy, with an average area under the receiver operating characteristic curve (AuROC) of 97.69% on six samples from the MVTec anomaly detection dataset.
(This article belongs to the Section Computer Vision and Pattern Recognition)
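To make the re-weighting idea concrete, the snippet below computes the three SSIM sub-functions (luminance, contrast, structure) from global statistics and exposes their exponents as weights; real SSIM and the paper's AW-SSIM operate over a sliding Gaussian window and choose the weights adaptively, so this is only an illustration.

```python
# Global-statistics illustration of SSIM's luminance/contrast/structure terms.
import numpy as np

def ssim_components(x, y, L=255.0):
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    c3 = c2 / 2.0
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    luminance = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
    contrast = (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)
    structure = (sxy + c3) / (sx * sy + c3)
    return luminance, contrast, structure

def weighted_ssim(x, y, alpha=1.0, beta=1.0, gamma=1.0):
    """Exponents act as per-term weights, as in an AW-SSIM-style loss."""
    l, c, s = ssim_components(x.astype(np.float64), y.astype(np.float64))
    return (l ** alpha) * (c ** beta) * (s ** gamma)
```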

23 pages, 5602 KiB  
Article
Traditional Clinicopathological Biomarkers Still Determine Disease-Free and Overall Survival in Invasive Breast Cancer Patients: A Pilot Study
by Katarzyna Wrzeszcz, Katarzyna Kwiatkowska, Piotr Rhone, Dorota Formanowicz, Stefan Kruszewski and Barbara Ruszkowska-Ciastek
J. Clin. Med. 2024, 13(7), 2021; https://doi.org/10.3390/jcm13072021 - 30 Mar 2024
Cited by 2 | Viewed by 1620
Abstract
Background: Molecular classification, tumor diameter, Ki67 expression, and brachytherapy administration still act as the most potent potential predictors of breast cancer recurrence and overall survival. Methods: Over a period of 23 months, we enrolled 92 invasive breast cancer (IBrC) patients initially diagnosed at the Clinical Ward of Breast Cancer and Reconstructive Surgery, Oncology Center in Bydgoszcz, Poland. The probabilities of disease-free survival (DFS) and overall survival (OS) in relation to potential prognostic factors were determined using Kaplan–Meier analysis, and univariate and multivariate Cox regression analyses evaluated predictive factors in IBrC patients. The accuracy of the potential prognostic model was analyzed using the ROC curve. Results: Patients with a tumor size < 2 cm, Ki67 expression < 20%, luminal-A molecular subtype, and administration of an extra-dose brachytherapy boost displayed the most favorable prognosis in terms of disease-free and overall survival. The estimated 5-year probabilities of DFS and OS in women with a tumor diameter < 2 cm were 89% and 90%, respectively; for tumor diameters > 2 cm, they were 73% and 76%. Interestingly, a tumor diameter of 1.6 cm, with a specificity of 60.5% and a sensitivity of 75%, emerged as the best threshold for differentiating patients with cancer recurrence from those without progression. Conclusions: Our study provides essential information on the clinicopathological profile and future outcomes of early-stage IBrC patients. Furthermore, the 1.6 cm tumor diameter cut-off for discriminating between patients with and without disease recurrence represents an innovative direction for further research.
(This article belongs to the Special Issue Breast Cancer: Clinical Diagnosis and Personalized Therapy)
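Selecting a diameter cut-off such as the 1.6 cm threshold reported above is typically done from the ROC curve (for example via Youden's J); the sketch below runs on simulated diameters and recurrence labels and is not based on the study's data.

```python
# ROC-based cut-off selection (Youden's J) on simulated, hypothetical data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(42)
diameter = np.concatenate([rng.normal(1.4, 0.5, 70),    # no recurrence
                           rng.normal(2.2, 0.7, 22)])   # recurrence
recurred = np.concatenate([np.zeros(70), np.ones(22)])

fpr, tpr, thresholds = roc_curve(recurred, diameter)
best = np.argmax(tpr - fpr)                              # Youden's J
print(f"optimal cut-off ~ {thresholds[best]:.2f} cm, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```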

16 pages, 5236 KiB  
Article
Hash Encoding and Brightness Correction in 3D Industrial and Environmental Reconstruction of Tidal Flat Neural Radiation
by Huilin Ge, Biao Wang, Zhiyu Zhu, Jin Zhu and Nan Zhou
Sensors 2024, 24(5), 1451; https://doi.org/10.3390/s24051451 - 23 Feb 2024
Cited by 1 | Viewed by 1498
Abstract
We present an innovative approach to mitigating brightness variations in the unmanned aerial vehicle (UAV)-based 3D reconstruction of tidal flat environments, emphasizing industrial applications. Our work focuses on enhancing the accuracy and efficiency of neural radiance fields (NeRF) for 3D scene synthesis. We introduce a novel luminance correction technique to address challenging illumination conditions, employing a convolutional neural network (CNN) for image enhancement in cases of overexposure and underexposure. Additionally, we propose a hash encoding method to optimize the spatial position encoding efficiency of NeRF. The efficacy of our method is validated using diverse datasets, including a custom tidal flat dataset and the Mip-NeRF 360 dataset, demonstrating superior performance across various lighting scenarios.
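The abstract does not detail its hash encoding, so the sketch below shows a generic multi-resolution spatial hash of 3D positions in the spirit of Instant-NGP-style encodings; the table sizes, resolutions, prime-based hash, and nearest-vertex lookup are all assumptions, not the paper's scheme.

```python
# Generic multi-resolution hash encoding of 3D points (assumed design).
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_grid(coords, table_size):
    """Spatial hash of integer 3D grid coordinates into [0, table_size)."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

def encode(xyz, tables, base_res=16, growth=1.5):
    """Concatenate per-level features looked up at the nearest grid vertex."""
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        idx = hash_grid(np.floor(xyz * res).astype(np.int64), len(table))
        feats.append(table[idx])          # nearest-vertex lookup (no trilerp)
    return np.concatenate(feats, axis=-1)

# Usage: 4 levels of 2^14-entry tables holding 2-D learnable features.
tables = [np.random.randn(2 ** 14, 2).astype(np.float32) for _ in range(4)]
features = encode(np.random.rand(1024, 3), tables)      # shape (1024, 8)
```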

16 pages, 9434 KiB  
Article
Omnidirectional-Sensor-System-Based Texture Noise Correction in Large-Scale 3D Reconstruction
by Wenya Xie and Xiaoping Hong
Sensors 2024, 24(1), 78; https://doi.org/10.3390/s24010078 - 22 Dec 2023
Viewed by 1457
Abstract
The evolution of cameras and LiDAR has propelled the techniques and applications of three-dimensional (3D) reconstruction. However, due to inherent sensor limitations and environmental interference, the reconstruction process often suffers from significant texture noise, such as specular highlights, color inconsistency, and object occlusion. Traditional methodologies struggle to mitigate such noise, particularly in large-scale scenes, due to the voluminous data produced by imaging sensors. In response, this paper introduces an omnidirectional-sensor-system-based texture noise correction framework for large-scale scenes, which consists of three parts. First, we organize LiDAR points and RGB images into a colored point cloud with per-point luminance values. Next, we apply a voxel hashing algorithm during geometry reconstruction to accelerate computation and reduce memory usage. Finally, we propose the key innovation of our paper, the frame-voting rendering and neighbor-aided rendering mechanisms, which effectively eliminate the aforementioned texture noise. The experimental results show a processing rate of one million points per second, demonstrating real-time applicability, and the texture-optimized outputs exhibit a significant reduction in texture noise. These results indicate that our framework delivers advanced performance in correcting multiple types of texture noise in large-scale 3D reconstruction.
(This article belongs to the Special Issue Sensing and Processing for 3D Computer Vision: 2nd Edition)
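The voxel-hashing idea mentioned above amounts to bucketing points by quantized coordinates so that only occupied voxels consume memory; the minimal sketch below (voxel size and averaging rule are assumptions) illustrates that bookkeeping, not the paper's implementation.

```python
# Bucket a colored point cloud into a sparse voxel hash map (assumed details).
import numpy as np
from collections import defaultdict

def voxel_hash(points, colors, voxel_size=0.05):
    """points: (N, 3) float, colors: (N, 3) uint8 -> {voxel key: mean color}."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    buckets = defaultdict(list)
    for key, color in zip(map(tuple, keys), colors):
        buckets[key].append(color)
    # One averaged color per occupied voxel; only occupied voxels are stored.
    return {k: np.mean(v, axis=0) for k, v in buckets.items()}
```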

12 pages, 2457 KiB  
Article
An Intra-Individual Comparison of Low-keV Photon-Counting CT versus Energy-Integrating-Detector CT Angiography of the Aorta
by Jan-Lucca Hennes, Henner Huflage, Jan-Peter Grunz, Viktor Hartung, Anne Marie Augustin, Theresa Sophie Patzer, Pauline Pannenbecker, Bernhard Petritsch, Thorsten Alexander Bley and Philipp Gruschwitz
Diagnostics 2023, 13(24), 3645; https://doi.org/10.3390/diagnostics13243645 - 12 Dec 2023
Cited by 5 | Viewed by 1380
Abstract
This retrospective study aims to provide an intra-individual comparison of aortic CT angiographies (CTAs) using first-generation photon-counting-detector CT (PCD-CT) and third-generation energy-integrating-detector CT (EID-CT). High-pitch CTAs were performed with both scanners using identical contrast-agent protocols. EID-CT employed automatic tube voltage selection (90/100 kVp) with a reference tube current of 434/350 mAs, whereas multi-energy PCD-CT scans were acquired with a fixed tube voltage (120 kVp) and an image quality level of 64 and reconstructed as 55 keV monoenergetic images. For image quality assessment, contrast-to-noise ratios (CNRs) were calculated, and subjective evaluation (overall quality, luminal contrast, vessel sharpness, blooming, and beam hardening) was performed independently by three radiologists. Fifty-seven patients (12 women, 45 men) were included, with a median interval between examinations of 12.7 months (interquartile range 11.1 months). Using the manufacturer-recommended scan protocols resulted in a substantially lower radiation dose for PCD-CT (size-specific dose estimate: 4.88 ± 0.48 versus 6.28 ± 0.50 mGy, p < 0.001), while the CNR was approximately 50% higher (41.11 ± 8.68 versus 27.05 ± 6.73, p < 0.001). Overall image quality and luminal contrast were deemed superior in PCD-CT (p < 0.001). Notably, EID-CT allowed for comparable vessel sharpness (p = 0.439) and less pronounced blooming and beam hardening (p < 0.001). Inter-rater agreement was good to excellent (0.58–0.87). In conclusion, aortic PCD-CTA provides increased image quality at a significantly lower radiation dose than EID-CTA.
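The contrast-to-noise ratio reported here is commonly computed from region-of-interest statistics as the difference between mean lumen and mean background attenuation divided by the background noise; the HU values below are simulated placeholders, and the exact ROI definition used in the study may differ.

```python
# CNR from region-of-interest statistics (simulated HU values).
import numpy as np

def cnr(lumen_roi, background_roi):
    """CNR = (mean_lumen - mean_background) / SD of the background ROI."""
    return (np.mean(lumen_roi) - np.mean(background_roi)) / np.std(background_roi)

rng = np.random.default_rng(1)
lumen = rng.normal(400, 20, 500)    # contrast-enhanced aortic lumen, HU
muscle = rng.normal(50, 12, 500)    # paraspinal muscle background, HU
print(f"CNR ~ {cnr(lumen, muscle):.1f}")
```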

13 pages, 4137 KiB  
Article
Numerical Simulation on Corneal Surface Behavior Applying Luminous Beam Levels
by Fernando Guevara-Leon, Mario Alberto Grave-Capistrán, Juan Alejandro Flores-Campos, Jose Luis Torres-Ariza, Elliot Alonso Alcántara-Arreola and Christopher René Torres-SanMiguel
Appl. Sci. 2023, 13(22), 12132; https://doi.org/10.3390/app132212132 - 8 Nov 2023
Cited by 1 | Viewed by 1656
Abstract
According to the World Health Organization (WHO), approximately 1.3 billion people experience visual impairments. Daily exposure to various levels of luminous beams directly impacts the front layer of the visible structure of the eye, leading to corneal injuries. To comprehensively understand this, we reconstructed a three-dimensional model utilizing the PENTACAM® system. This enabled us to accurately determine the 50th percentile dimensions of the fibrous layer of the eyeball. Using the Ogden mathematical model, we developed a 3D cornea model, treating it as a soft tissue with predictable behavior and considering mechanical properties such as viscoelasticity, anisotropy, and nonlinearity. Employing the Finite Element Method (FEM), we analyzed five distinct test scenarios to explore the structural response of the cornea. Luminous beam properties were instrumental in establishing varying mechanical loads, leading to structural deformations of the corneal surface. Our findings reveal that when a smartphone's screen emits light at a frequency of 651.72 THz from 200 mm, displacements in the corneal layer can reach up to 9.07 µm. The total load, computed from the number of photons, amounts to 7172.637 Pa.
(This article belongs to the Special Issue Recent Advances in Pathogenesis and Management of Eye Diseases)
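For orientation only, the per-photon energy and momentum at the quoted 651.72 THz follow from E = hν and p = hν/c; the photon rate and illuminated area below are placeholders, and this sketch does not reproduce the paper's photon-count-based load of 7172.637 Pa.

```python
# Photon energy/momentum at 651.72 THz; rate and area are hypothetical.
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
nu = 651.72e12          # emission frequency, Hz

E_photon = h * nu        # energy per photon, J
p_photon = h * nu / c    # momentum per photon, kg*m/s

photon_rate = 1e18       # placeholder: photons per second reaching the cornea
area = 1.0e-4            # placeholder: illuminated area, m^2

pressure = photon_rate * p_photon / area   # Pa, assuming full absorption
print(E_photon, p_photon, pressure)
```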
