Search Results (19)

Search Parameters:
Keywords = dual illumination estimation

26 pages, 62819 KB  
Article
Low-Light Image Dehazing and Enhancement via Multi-Feature Domain Fusion
by Jiaxin Wu, Han Ai, Ping Zhou, Hao Wang, Haifeng Zhang, Gaopeng Zhang and Weining Chen
Remote Sens. 2025, 17(17), 2944; https://doi.org/10.3390/rs17172944 - 25 Aug 2025
Viewed by 176
Abstract
The acquisition of nighttime remote-sensing visible-light images is often accompanied by low-illumination effects and haze interference, resulting in significant image quality degradation and greatly affecting subsequent applications. Existing low-light enhancement and dehazing algorithms can handle each problem individually, but their simple cascade cannot effectively address unknown real-world degradations. Therefore, we design a joint processing framework, WFDiff, which fully exploits the advantages of Fourier–wavelet dual-domain features and innovatively integrates the reverse diffusion process through differentiable operators to construct a multi-scale degradation collaborative correction system. Specifically, in the reverse diffusion process, a dual-domain feature interaction module is designed, and the joint probability distribution of the generated image and real data is constrained through differentiable operators: on the one hand, a global frequency-domain prior is established by jointly constraining Fourier amplitude and phase, effectively maintaining the radiometric consistency of the image; on the other hand, wavelets are used to capture high-frequency details and edge structures in the spatial domain to improve the prediction process. On this basis, a cross-overlapping-block adaptive smoothing estimation algorithm is proposed, which achieves dynamic fusion of multi-scale features through a differentiable weighting strategy, effectively solving the problem of restoring images of different sizes and avoiding local inconsistencies. In view of the current lack of remote-sensing data for low-light haze scenarios, we constructed the Hazy-Dark dataset. Physical and ablation experiments show that the proposed method outperforms existing single-task or simple cascade methods in terms of image fidelity, detail recovery capability, and visual naturalness, providing a new paradigm for remote-sensing image processing under coupled degradations.
(This article belongs to the Section AI Remote Sensing)
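The joint amplitude/phase constraint lends itself to a compact illustration. Below is a minimal sketch (not the authors' code; the function and weight names are ours) of a Fourier-domain consistency loss of the kind the abstract describes, written in PyTorch:

```python
import torch

def fourier_consistency_loss(pred: torch.Tensor, target: torch.Tensor,
                             w_amp: float = 1.0, w_pha: float = 1.0) -> torch.Tensor:
    """Jointly constrain Fourier amplitude (radiometric consistency) and
    phase (structural consistency) of a generated image against a reference.
    pred, target: (B, C, H, W) tensors in [0, 1]."""
    fp = torch.fft.fft2(pred, norm="ortho")
    ft = torch.fft.fft2(target, norm="ortho")
    amp_loss = (fp.abs() - ft.abs()).abs().mean()
    # naive phase L1; a production loss would handle 2*pi wrap-around
    pha_loss = (torch.angle(fp) - torch.angle(ft)).abs().mean()
    return w_amp * amp_loss + w_pha * pha_loss
```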

22 pages, 4524 KB  
Article
RAEM-SLAM: A Robust Adaptive End-to-End Monocular SLAM Framework for AUVs in Underwater Environments
by Yekai Wu, Yongjie Li, Wenda Luo and Xin Ding
Drones 2025, 9(8), 579; https://doi.org/10.3390/drones9080579 - 15 Aug 2025
Viewed by 507
Abstract
Autonomous Underwater Vehicles (AUVs) play a critical role in ocean exploration. However, due to the inherent limitations of most sensors in underwater environments, achieving accurate navigation and localization in complex underwater scenarios remains a significant challenge. While vision-based Simultaneous Localization and Mapping (SLAM) provides a cost-effective alternative for AUV navigation, existing methods are primarily designed for terrestrial applications and struggle to address underwater-specific issues, such as poor illumination, dynamic interference, and sparse features. To tackle these challenges, we propose RAEM-SLAM, a robust adaptive end-to-end monocular SLAM framework for AUVs in underwater environments. Specifically, we propose a Physics-guided Underwater Adaptive Augmentation (PUAA) method that dynamically converts terrestrial scene datasets into physically realistic pseudo-underwater images for the augmentation training of RAEM-SLAM, improving the system’s generalization and adaptability in complex underwater scenes. We also introduce a Residual Semantic–Spatial Attention Module (RSSA), which utilizes a dual-branch attention mechanism to effectively fuse semantic and spatial information. This design enables adaptive enhancement of key feature regions and suppression of noise interference, resulting in more discriminative feature representations. Furthermore, we incorporate a Local–Global Perception Block (LGP), which integrates multi-scale local details with global contextual dependencies to significantly improve AUV pose estimation accuracy in dynamic underwater scenes. Experimental results on real-world underwater datasets demonstrate that RAEM-SLAM outperforms state-of-the-art SLAM approaches in enabling precise and robust navigation for AUVs.
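As a rough illustration of what a dual-branch semantic–spatial attention block can look like (the paper's RSSA is not reproduced here, so the layer sizes and structure below are guesses):

```python
import torch
import torch.nn as nn

class DualBranchAttention(nn.Module):
    """Guessed analogue of an RSSA-style block: a channel ("semantic") gate
    and a spatial gate applied in sequence, with a residual connection."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x * self.channel_gate(x)      # emphasize informative channels
        out = out * self.spatial_gate(out)  # emphasize informative locations
        return x + out                      # residual: keep the raw features
```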

23 pages, 436 KB  
Article
Carbon Reduction Impact of the Digital Economy: Infrastructure Thresholds, Dual Objectives Constraint, and Mechanism Optimization Pathways
by Shan Yan, Wen Zhong and Zhiqing Yan
Sustainability 2025, 17(16), 7277; https://doi.org/10.3390/su17167277 - 12 Aug 2025
Viewed by 247
Abstract
The synergistic advancement of “Digital China” and “Beautiful China” represents a pivotal national strategy for achieving high-quality economic development and a low-carbon transition. To illuminate the intrinsic mechanisms linking the digital economy (DE) to urban carbon emission performance (CEP), this study develops a novel two-sector theoretical framework. Leveraging panel data from 278 Chinese prefecture-level cities (2011–2023), we employ a comprehensive evaluation method to gauge DE development and utilize calibrated nighttime light data with downscaling inversion techniques to estimate city-level CEP. Our empirical analysis integrates static panel fixed effects, panel threshold, and moderating effects models. Key findings reveal that the digital economy demonstrably enhances urban carbon emission performance, although this positive effect exhibits a threshold characteristic linked to the maturity of digital infrastructure; beyond a specific developmental stage, the marginal benefits diminish. Crucially, this enhancement operates primarily through the twin engines of fostering technological innovation and driving industrial structure upgrading, with the former playing a dominant role. The impact of DE on CEP displays significant heterogeneity, proving stronger in northern cities, resource-dependent cities, and those characterized by higher levels of inclusive finance or lower fiscal expenditure intensities. Furthermore, the effectiveness of DE in reducing carbon emissions is dynamically moderated by policy environments: flexible economic growth targets amplify its carbon reduction efficacy, while environmental target constraints, particularly direct binding mandates, exert a more pronounced moderating influence. This research provides crucial theoretical insights and actionable policy pathways for harmonizing the “Dual Carbon” goals with the overarching Digital China strategy.
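The threshold finding is consistent with a Hansen-style single-threshold panel specification; in our notation (an assumption, not the authors'), with digital infrastructure $q_{it}$ as the threshold variable:

$$
\mathrm{CEP}_{it} = \alpha + \beta_1\,\mathrm{DE}_{it}\,\mathbf{1}(q_{it} \le \gamma) + \beta_2\,\mathrm{DE}_{it}\,\mathbf{1}(q_{it} > \gamma) + \theta' X_{it} + \mu_i + \lambda_t + \varepsilon_{it}
$$

with city fixed effects $\mu_i$, year effects $\lambda_t$, and controls $X_{it}$; diminishing marginal benefits beyond the threshold correspond to $\beta_2 < \beta_1$.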

18 pages, 7213 KB  
Article
DFCNet: Dual-Stage Frequency-Domain Calibration Network for Low-Light Image Enhancement
by Hui Zhou, Jun Li, Yaming Mao, Lu Liu and Yiyang Lu
J. Imaging 2025, 11(8), 253; https://doi.org/10.3390/jimaging11080253 - 28 Jul 2025
Viewed by 350
Abstract
Imaging technologies are widely used in surveillance, medical diagnostics, and other critical applications. However, under low-light conditions, captured images often suffer from insufficient brightness, blurred details, and excessive noise, degrading quality and hindering downstream tasks. Conventional low-light image enhancement (LLIE) methods not only require annotated data but also often involve heavy models with high computational costs, making them unsuitable for real-time processing. To tackle these challenges, a lightweight and unsupervised LLIE method utilizing a dual-stage frequency-domain calibration network (DFCNet) is proposed. In the first stage, the input image undergoes the preliminary feature modulation (PFM) module to guide the illumination estimation (IE) module in generating a more accurate illumination map. The final enhanced image is obtained by dividing the input by the estimated illumination map. The second stage is used only during training. It applies a frequency-domain residual calibration (FRC) module to the first-stage output, generating a calibration term that is added to the original input to darken dark regions and brighten bright areas. This updated input is then fed back to the PFM and IE modules for parameter optimization. Extensive experiments on benchmark datasets demonstrate that DFCNet achieves superior performance across multiple image quality metrics while delivering visually clearer and more natural results.
(This article belongs to the Section Image and Video Processing)
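The enhancement step itself is stated plainly enough to write down directly; here is a minimal NumPy sketch (the eps guard against division by zero is our addition):

```python
import numpy as np

def enhance(image: np.ndarray, illumination: np.ndarray,
            eps: float = 1e-4) -> np.ndarray:
    """Retinex-style enhancement as described: output = input / illumination.
    Both arrays are float images in [0, 1] with matching shapes."""
    return np.clip(image / np.maximum(illumination, eps), 0.0, 1.0)
```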

19 pages, 2812 KB  
Article
Component Generation Network-Based Image Enhancement Method for External Inspection of Electrical Equipment
by Xiong Liu, Juan Zhang, Qiushi Cui, Yingyue Zhou, Qian Wang, Zining Zhao and Yong Li
Electronics 2025, 14(12), 2419; https://doi.org/10.3390/electronics14122419 - 13 Jun 2025
Viewed by 351
Abstract
For external inspection of electrical equipment, poor lighting conditions often lead to problems such as uneven illumination, insufficient brightness, and detail loss, which directly affect subsequent analysis. To solve this problem, a Retinex image enhancement method based on a Component Generation Network (CGNet) is proposed in this paper. It employs CGNet to accurately estimate and generate the illumination and reflection components of the target image. CGNet, based on UNet, integrates Residual Branch Dual-convolution blocks (RBDConv) and a Channel Attention Mechanism (CAM) to improve the feature-learning capability. By setting different numbers of network layers, the optimal estimation of the illumination and reflection components is achieved. To obtain the ideal enhancement results, gamma correction is applied to adjust the estimated illumination component, while an HSV transformation model preserves color information. Finally, the effectiveness of the proposed method is verified on a dataset of poorly illuminated images from external inspection of electrical equipment. The results show that this method not only requires no external datasets for training but also improves the detail clarity and color richness of the target image, effectively addressing the poor lighting of images in external inspection of electrical equipment.
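A hedged sketch of the re-lighting step the abstract describes: gamma-correct the estimated illumination component and recombine it with the reflectance, leaving hue and saturation untouched (the restriction to the HSV value channel and the gamma value here are our assumptions):

```python
import numpy as np

def relight_value_channel(v: np.ndarray, illum: np.ndarray,
                          gamma: float = 2.2, eps: float = 1e-4) -> np.ndarray:
    """v: HSV value channel in [0, 1]; illum: estimated illumination map.
    Reflectance is recovered by division, then re-lit with a gamma-corrected
    illumination so dark regions brighten without distorting color."""
    reflectance = v / np.maximum(illum, eps)
    illum_adj = np.power(np.maximum(illum, eps), 1.0 / gamma)
    return np.clip(reflectance * illum_adj, 0.0, 1.0)
```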

21 pages, 18640 KB  
Article
High-Precision Pose Measurement of Containers on the Transfer Platform of the Dual-Trolley Quayside Container Crane Based on Machine Vision
by Jiaqi Wang, Mengjie He, Yujie Zhang, Zhiwei Zhang, Octavian Postolache and Chao Mi
Sensors 2025, 25(9), 2760; https://doi.org/10.3390/s25092760 - 27 Apr 2025
Viewed by 677
Abstract
To address the high-precision measurement requirements for container pose on dual-trolley quayside crane-transfer platforms, this paper proposes a machine vision-based measurement method that resolves the challenges of multi-scale lockhole detection and precision demands caused by complex illumination and perspective deformation in port operational environments. A hardware system comprising fixed cameras and edge computing modules is established, integrated with an adaptive image-enhancement preprocessing algorithm to enhance feature robustness under complex illumination conditions. A multi-scale adaptive frequency object-detection framework is developed based on YOLO11, achieving improved detection accuracy for multi-scale lockhole keypoints in perspective-distortion scenarios (mAP@0.5 reaches 95.1%, 4.7% higher than baseline models) through dynamic balancing of high–low-frequency features and adaptive convolution kernel adjustments. An enhanced EPnP optimization algorithm incorporating lockhole coplanar constraints is proposed, establishing a 2D–3D coordinate transformation model that reduces pose-estimation errors to millimeter level (planar MAE-P = 0.024 m) and sub-angular level (MAE-θ = 0.11°). Experimental results demonstrate that the proposed method outperforms existing solutions in container pose-deviation-detection accuracy, efficiency, and stability, proving to be a feasible measurement approach.
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)
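The coplanarity of the lockhole keypoints is what makes the 2D–3D pose recovery tractable. A bare-bones OpenCV analogue (not the paper's enhanced EPnP; the coordinates and intrinsics below are hypothetical) looks like this:

```python
import cv2
import numpy as np

# Hypothetical lockhole corner coordinates in the container frame (metres);
# all four points lie in the z = 0 plane, so a planar PnP solver applies.
object_pts = np.array([[0.0, 0.0, 0.0], [2.26, 0.0, 0.0],
                       [2.26, 1.00, 0.0], [0.0, 1.00, 0.0]])
image_pts = np.array([[412.0, 305.0], [980.0, 312.0],
                      [975.0, 561.0], [418.0, 554.0]])  # detected keypoints (px)
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])  # assumed pinhole intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None,
                              flags=cv2.SOLVEPNP_IPPE)  # coplanar-point solver
print(rvec.ravel(), tvec.ravel())  # container rotation (Rodrigues) and translation
```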

30 pages, 40714 KB  
Article
Zero-TCE: Zero Reference Tri-Curve Enhancement for Low-Light Images
by Chengkang Yu, Guangliang Han, Mengyang Pan, Xiaotian Wu and Anping Deng
Appl. Sci. 2025, 15(2), 701; https://doi.org/10.3390/app15020701 - 12 Jan 2025
Cited by 1 | Viewed by 1505
Abstract
Addressing the common issues of low brightness, poor contrast, and blurred details in images captured under conditions such as night, backlight, and adverse weather, we propose a zero-reference dual-path network based on multi-scale depth curve estimation for low-light image enhancement. Utilizing a no-reference loss function, the enhancement of low-light images is converted into depth curve estimation, with three curves fitted to enhance the dark details of the image: a brightness adjustment curve (LE-curve), a contrast enhancement curve (CE-curve), and a multi-scale feature fusion curve (MF-curve). Initially, we introduce the TCE-L and TCE-C modules to improve image brightness and enhance image contrast, respectively. Subsequently, we design a multi-scale feature fusion (MFF) module that integrates the original and enhanced images at multiple scales in the HSV color space based on the brightness distribution characteristics of low-light images, yielding an optimally enhanced image that avoids overexposure and color distortion. We compare our proposed method against ten other advanced algorithms on multiple datasets, including LOL, DICM, MEF, NPE, and ExDark, which encompass complex illumination variations. Experimental results demonstrate that the proposed algorithm adapts better to the characteristics of images captured in low-light environments, producing enhanced images with sharp contrast, rich details, and preserved color authenticity, while effectively mitigating the issue of overexposure.
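For context, zero-reference curve methods of this family typically iterate a per-pixel quadratic adjustment curve; a minimal sketch of such an LE-style curve (the paper's exact LE/CE/MF formulations may differ):

```python
import numpy as np

def le_curve(x: np.ndarray, alpha: np.ndarray, iterations: int = 8) -> np.ndarray:
    """Iterated quadratic brightness-adjustment curve. x: image in [0, 1];
    alpha: per-pixel (or scalar) curve parameter in [-1, 1], normally
    predicted by the network. Each step keeps values inside [0, 1]."""
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return x
```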

19 pages, 15195 KB  
Article
Color and Luminance Separated Enhancement for Low-Light Images with Brightness Guidance
by Feng Zhang, Xinran Liu, Changxin Gao and Nong Sang
Sensors 2024, 24(9), 2711; https://doi.org/10.3390/s24092711 - 24 Apr 2024
Cited by 2 | Viewed by 2412
Abstract
Existing retinex-based low-light image enhancement strategies focus heavily on crafting complex networks for Retinex decomposition but often result in imprecise estimations. To overcome the limitations of previous methods, we introduce a straightforward yet effective strategy for Retinex decomposition, dividing images into colormaps and graymaps as new estimations for reflectance and illumination maps. The enhancement of these maps is separately conducted using a diffusion model for improved restoration. Furthermore, we address the dual challenge of perturbation removal and brightness adjustment in illumination maps by incorporating brightness guidance. This guidance aids in precisely adjusting the brightness while eliminating disturbances, ensuring a more effective enhancement process. Extensive quantitative and qualitative experimental analyses demonstrate that our proposed method improves the performance by approximately 4.4% on the LOL dataset compared to other state-of-the-art diffusion-based methods, while also validating the model’s generalizability across multiple real-world datasets.
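One plausible reading of the colormap/graymap split (the paper's exact definition may differ) is a channel-maximum graymap as the illumination estimate and a chromatic residual as the reflectance estimate:

```python
import numpy as np

def split_color_luminance(img: np.ndarray, eps: float = 1e-4):
    """img: (H, W, 3) float RGB in [0, 1]. Returns a colormap (reflectance-
    like) and a graymap (illumination-like); both would then be enhanced
    separately, e.g., by a diffusion model as the abstract describes."""
    graymap = img.max(axis=2, keepdims=True)      # (H, W, 1) channel maximum
    colormap = img / np.maximum(graymap, eps)     # (H, W, 3) chromatic residual
    return colormap, graymap
```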

18 pages, 9862 KB  
Article
Investigation of Donor-like State Distributions in Solution-Processed IZO Thin-Film Transistor through Photocurrent Analysis
by Dongwook Kim, Hyeonju Lee, Kadir Ejderha, Youngjun Yun, Jin-Hyuk Bae and Jaehoon Park
Nanomaterials 2023, 13(23), 2986; https://doi.org/10.3390/nano13232986 - 21 Nov 2023
Cited by 1 | Viewed by 1648
Abstract
The density of donor-like state distributions in solution-processed indium–zinc-oxide (IZO) thin-film transistors (TFTs) is thoroughly analyzed using photon energy irradiation. This study focuses on quantitatively calculating the distribution of density of states (DOS) in IZO semiconductors, with a specific emphasis on their variation with indium concentration. Two calculation methods, namely photoexcited charge collection spectroscopy (PECCS) and photocurrent-induced DOS spectroscopy (PIDS), are employed to estimate the density of the donor-like states. This dual approach not only ensures the accuracy of the findings but also provides a comprehensive perspective on the properties of semiconductors. The results reveal a consistent characteristic: the Recombination–Generation (R-G) center energy E_T, a key aspect of the donor-like state, is acquired at approximately 3.26 eV, irrespective of the In concentration. This finding suggests that weak bonds and oxygen vacancies within the Zn-O bonding structure of IZO semiconductors act as the primary source of R-G centers, contributing to the donor-like state distribution. By highlighting this fundamental aspect of IZO semiconductors, this study enhances our understanding of their charge-transport mechanisms. Moreover, it offers valuable insight for addressing stability issues such as negative bias illumination stress, potentially leading to the improved performance and reliability of solution-processed IZO TFTs. The study contributes to the advancement of display technologies by presenting further innovations and applications for evaluating the fundamentals of semiconductors.
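To place the reported R-G center energy on the experimental axis, the corresponding photon wavelength follows from λ = hc/E_T (using hc ≈ 1239.84 eV·nm):

$$
\lambda = \frac{hc}{E_T} \approx \frac{1239.84\ \mathrm{eV\,nm}}{3.26\ \mathrm{eV}} \approx 380\ \mathrm{nm},
$$

i.e., the donor-like states respond to near-ultraviolet illumination.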

17 pages, 2917 KB  
Article
Heart Rate Estimation from Facial Image Sequences of a Dual-Modality RGB-NIR Camera
by Wen-Nung Lie, Dao-Quang Le, Chun-Yu Lai and Yu-Shin Fang
Sensors 2023, 23(13), 6079; https://doi.org/10.3390/s23136079 - 1 Jul 2023
Cited by 9 | Viewed by 4253
Abstract
This paper presents an RGB-NIR (Near Infrared) dual-modality technique to analyze the remote photoplethysmogram (rPPG) signal and hence estimate the heart rate (in beats per minute) from a facial image sequence. Our main innovative contribution is the introduction of several denoising techniques, such as Modified Amplitude Selective Filtering (MASF), Wavelet Decomposition (WD), and Robust Principal Component Analysis (RPCA), which take advantage of RGB and NIR band characteristics to uncover the rPPG signals effectively within an Independent Component Analysis (ICA)-based algorithm. Two datasets, of which one is the public PURE dataset and the other is the CCUHR dataset built with a popular Intel RealSense D435 RGB-D camera, are adopted in our experiments. Facial video sequences in the two datasets are diverse in nature, with normal brightness, under-illumination (i.e., dark), and facial motion. Experimental results show that the proposed method reaches accuracies competitive with state-of-the-art methods even at shorter video lengths. For example, our method achieves MAE = 4.45 bpm (beats per minute) and RMSE = 6.18 bpm for RGB-NIR videos of 10 and 20 s in the CCUHR dataset, and MAE = 3.24 bpm and RMSE = 4.1 bpm for 60-s RGB videos in the PURE dataset. Our system has the advantages of accessible and affordable hardware, simple and fast computations, and wide realistic applications.
(This article belongs to the Special Issue Intelligent Health Monitoring Systems Based on Sensor Processing)
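After denoising and ICA, the heart rate is ultimately a spectral read-out of the recovered pulse trace; a toy version of that final step (the paper's full pipeline is far more involved):

```python
import numpy as np

def estimate_bpm(rppg: np.ndarray, fs: float) -> float:
    """Pick the dominant frequency of an rPPG trace in the physiological
    band 0.7-4.0 Hz (42-240 bpm). rppg: 1-D signal sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(rppg - rppg.mean()))
    freqs = np.fft.rfftfreq(rppg.size, d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```

For a 10-s clip at 30 fps, rppg holds 300 samples and the raw frequency resolution is 0.1 Hz, i.e., 6 bpm — one reason short-window accuracy is a meaningful benchmark.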

12 pages, 5299 KB  
Article
Detecting Human Falls in Poor Lighting: Object Detection and Tracking Approach for Indoor Safety
by Xing Zi, Kunal Chaturvedi, Ali Braytee, Jun Li and Mukesh Prasad
Electronics 2023, 12(5), 1259; https://doi.org/10.3390/electronics12051259 - 6 Mar 2023
Cited by 21 | Viewed by 5592
Abstract
Falls are one of the leading causes of accidental death for all people, but the elderly are at particularly high risk. Falls are a severe issue in the care of elderly people who live alone and have limited access to health aides and skilled nursing care. Conventional vision-based systems for fall detection are prone to failure in conditions with low illumination. Therefore, an automated system that detects falls in low-light conditions has become an urgent need for protecting vulnerable people. This paper proposes a novel vision-based fall detection system that uses object tracking and image enhancement techniques. The proposed approach is divided into two parts. First, the captured frames are optimized using a dual illumination estimation algorithm. Next, a deep-learning-based tracking framework that includes detection by YOLOv7 and tracking by the Deep SORT algorithm is proposed to perform fall detection. On the Le2i fall and UR fall detection (URFD) datasets, we evaluate the proposed method and demonstrate the effectiveness of fall detection in dark night environments with obstacles.
(This article belongs to the Special Issue Deep Learning in Image Processing and Pattern Recognition)
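The dual illumination estimation step referenced here follows the exposure-correction idea of estimating illumination twice — once on the image and once on its inverse. A much-simplified sketch (a real implementation refines each illumination map with an edge-aware filter, and the 0.5/0.5 fusion below is our placeholder):

```python
import numpy as np

def dual_illumination_enhance(img: np.ndarray, gamma: float = 0.6,
                              eps: float = 1e-4) -> np.ndarray:
    """img: (H, W, 3) float RGB in [0, 1]. Correct under-exposure on the
    image and over-exposure on its inverse, then fuse the two results."""
    def correct(x):
        illum = np.maximum(x.max(axis=2, keepdims=True), eps)  # coarse illumination
        return x / illum * np.power(illum, gamma)              # re-light with gamma
    forward = correct(img)               # recovers under-exposed regions
    inverse = 1.0 - correct(1.0 - img)   # recovers over-exposed regions
    return np.clip(0.5 * (forward + inverse), 0.0, 1.0)
```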

20 pages, 4483 KB  
Article
Adsorption Efficiency and Photocatalytic Activity of Silver Sulfide Nanoparticles Deposited on Carbon Nanotubes
by Gururaj M. Neelgund, Sanjuana Fabiola Aguilar, Erica A. Jimenez and Ram L. Ray
Catalysts 2023, 13(3), 476; https://doi.org/10.3390/catal13030476 - 26 Feb 2023
Cited by 10 | Viewed by 2521
Abstract
A multimode, dual-functional nanomaterial, CNTs-Ag₂S, comprised of carbon nanotubes (CNTs) and silver sulfide (Ag₂S) nanoparticles, was prepared through a facile hydrothermal process. Before the deposition of Ag₂S nanoparticles, hydrophobic CNTs were modified to become hydrophilic through refluxing with a mixture of concentrated nitric and sulfuric acids. The oxidized CNTs were employed to deposit the Ag₂S nanoparticles for their efficient immobilization and homogeneous distribution. The CNTs-Ag₂S could adsorb toxic Cd(II) and completely degrade the hazardous alizarin yellow R present in water. The adsorption efficiency of CNTs-Ag₂S was evaluated by estimating the Cd(II) adsorption at different concentrations and contact times. The CNTs-Ag₂S adsorbed Cd(II) entirely within 80 min of contact time, whereas neither CNTs nor Ag₂S alone could. The Cd(II) adsorption followed pseudo-second-order kinetics, with chemisorption as the rate-determining step. The Weber–Morris intraparticle pore diffusion model revealed that intraparticle diffusion was not the sole rate-controlling step in the Cd(II) adsorption; the boundary layer effect also contributed. In addition, CNTs-Ag₂S could completely degrade alizarin yellow R in water under the illumination of natural sunlight. The Langmuir–Hinshelwood (L-H) model showed that the degradation of alizarin yellow R proceeded with pseudo-first-order kinetics. Overall, CNTs-Ag₂S performed as an efficient adsorbent and a competent photocatalyst.
(This article belongs to the Special Issue Catalytic Processes for Water and Wastewater Treatment)
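For reference, the standard forms of the two kinetic models the abstract invokes (notation is the conventional one: $q_t$ and $q_e$ are uptake at time $t$ and at equilibrium; $C_0$ and $C_t$ are dye concentrations):

$$
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}
\quad\text{(pseudo-second-order)},
\qquad
\ln\frac{C_0}{C_t} = k_{\mathrm{app}}\,t
\quad\text{(L-H pseudo-first-order)}.
$$

A linear fit of $t/q_t$ against $t$ (or of $\ln(C_0/C_t)$ against $t$) is the usual test that the respective model holds.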

20 pages, 1001 KB  
Article
An Unsupervised Transfer Learning Framework for Visible-Thermal Pedestrian Detection
by Chengjin Lyu, Patrick Heyer, Bart Goossens and Wilfried Philips
Sensors 2022, 22(12), 4416; https://doi.org/10.3390/s22124416 - 10 Jun 2022
Cited by 7 | Viewed by 3217
Abstract
Dual cameras with visible-thermal multispectral pairs provide both visual and thermal appearance, thereby enabling detecting pedestrians around the clock in various conditions and applications, including autonomous driving and intelligent transportation systems. However, due to the greatly varying real-world scenarios, the performance of a detector trained on a source dataset might change dramatically when evaluated on another dataset. A large amount of training data is often necessary to guarantee the detection performance in a new scenario. Typically, human annotators need to conduct the data labeling work, which is time-consuming, labor-intensive and unscalable. To overcome the problem, we propose a novel unsupervised transfer learning framework for multispectral pedestrian detection, which adapts a multispectral pedestrian detector to the target domain based on pseudo training labels. In particular, auxiliary detectors are utilized and different label fusion strategies are introduced according to the estimated environmental illumination level. Intermediate domain images are generated by translating the source images to mimic the target ones, acting as a better starting point for the parameter update of the pedestrian detector. The experimental results on the KAIST and FLIR ADAS datasets demonstrate that the proposed method achieves new state-of-the-art performance without any manual training annotations on the target data.
(This article belongs to the Topic Methods for Data Labelling for Intelligent Systems)
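At its simplest, illumination-dependent label fusion reduces to gating the two detectors' pseudo-labels by an estimated brightness level; a deliberately minimal sketch (the paper's strategies are richer than this linear gate):

```python
def fuse_scores(score_rgb: float, score_thermal: float,
                illumination: float) -> float:
    """Trust the visible-band detector in bright scenes and the thermal
    detector in dark ones. illumination: estimated level in [0, 1]."""
    return illumination * score_rgb + (1.0 - illumination) * score_thermal
```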

25 pages, 4495 KB  
Article
Dual Use of Public and Private Health Care Services in Brazil
by Bianca Silva, Niel Hens, Gustavo Gusso, Susan Lagaert, James Macinko and Sara Willems
Int. J. Environ. Res. Public Health 2022, 19(3), 1829; https://doi.org/10.3390/ijerph19031829 - 6 Feb 2022
Cited by 16 | Viewed by 3988
Abstract
(1) Background: Brazil has a universal public healthcare system, but individuals can still opt to buy private health insurance and/or pay out-of-pocket for healthcare. Past research suggests that Brazilians make combined use of public and private services, possibly causing double costs. This study aims to describe this dual use and assess its relationship with socioeconomic status (SES). (2) Methods: We calculated survey-weighted population estimates and descriptive statistics, and built a survey-weighted logistic regression model to explore the effect of SES on dual use of healthcare, including demographic characteristics and other variables related to healthcare need and use as additional explanatory variables using data from the 2019 Brazilian National Health Survey. (3) Results: An estimated 39,039,016 (n = 46,914; 18.6%) persons sought care in the two weeks before the survey, of which 5,576,216 were dual users (n = 6484; 14.7%). Dual use happened both in the direction of public to private (n = 4628; 67.3%), and of private to public (n = 1855; 32.7%). Higher income had a significant effect on dual use (p < 0.0001), suggesting a dose–response relationship, even after controlling for confounders. Significant effects were also found for region (p < 0.0001) and usual source of care (USC) (p < 0.0001). (4) Conclusion: A large number of Brazilians are seeking care from a source different than their regular system. Higher SES, region, and USC are associated factors, possibly leading to more health inequity. Due to its high prevalence and important implications, more research is warranted to illuminate the main causes of dual use.
(This article belongs to the Special Issue Equity, Access and Use of Health Care Services)
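A hedged sketch of a survey-weighted logistic regression of the kind described, using statsmodels with freq_weights as a stand-in for full design-based weighting (the variable names and synthetic data are invented; the actual analysis uses the 2019 PNS microdata and its design weights):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "income_quintile": rng.integers(1, 6, n),
    "age": rng.integers(18, 90, n),
    "survey_weight": rng.uniform(0.5, 2.0, n),
})
# synthetic outcome with a dose-response in income, mimicking the finding
logit = -3.0 + 0.4 * df["income_quintile"]
df["dual_use"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = sm.add_constant(df[["income_quintile", "age"]])
fit = sm.GLM(df["dual_use"], X, family=sm.families.Binomial(),
             freq_weights=df["survey_weight"]).fit()
print(np.exp(fit.params))  # odds ratios; income OR > 1 echoes the dose-response
```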

26 pages, 17007 KB  
Article
CellsDeepNet: A Novel Deep Learning-Based Web Application for the Automated Morphometric Analysis of Corneal Endothelial Cells
by Alaa S. Al-Waisy, Abdulrahman Alruban, Shumoos Al-Fahdawi, Rami Qahwaji, Georgios Ponirakis, Rayaz A. Malik, Mazin Abed Mohammed and Seifedine Kadry
Mathematics 2022, 10(3), 320; https://doi.org/10.3390/math10030320 - 20 Jan 2022
Cited by 10 | Viewed by 3163
Abstract
The quantification of corneal endothelial cell (CEC) morphology using manual and semi-automatic software enables an objective assessment of corneal endothelial pathology. However, the procedure is tedious, subjective, and not widely applied in clinical practice. We have developed the CellsDeepNet system to automatically segment and analyse the CEC morphology. The CellsDeepNet system uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to improve the contrast of the CEC images and reduce the effects of non-uniform image illumination, 2D Double-Density Dual-Tree Complex Wavelet Transform (2DDD-TCWT) to reduce noise, a Butterworth bandpass filter to enhance the CEC edges, and a moving average filter to adjust for brightness level. An improved version of U-Net was used to detect the boundaries of the CECs, regardless of the CEC size. CEC morphology was measured as mean cell density (MCD, cells/mm²), mean cell area (MCA, μm²), mean cell perimeter (MCP, μm), polymegathism (coefficient of CEC size variation), and pleomorphism (percentage of hexagonality coefficient). The CellsDeepNet system correlated highly significantly with the manual estimations for MCD (r = 0.94), MCA (r = 0.99), MCP (r = 0.99), polymegathism (r = 0.92), and pleomorphism (r = 0.86), with p < 0.0001 for all the extracted clinical features. The Bland–Altman plots showed excellent agreement. The percentage difference between the manual and automated estimations was superior for the CellsDeepNet system compared to the CEAS system and other state-of-the-art CEC segmentation systems on three large and challenging corneal endothelium image datasets captured using two different ophthalmic devices.
(This article belongs to the Special Issue Computer Graphics, Image Processing and Artificial Intelligence)
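Given per-cell measurements from the segmentation, the five reported morphometric read-outs are simple statistics; a sketch under the definitions stated in the abstract (polymegathism as the coefficient of variation of cell area, pleomorphism as the percentage of six-sided cells):

```python
import numpy as np

def cec_morphometrics(areas_um2: np.ndarray, perimeters_um: np.ndarray,
                      n_sides: np.ndarray, field_area_mm2: float) -> dict:
    """Per-cell inputs: areas (μm²), perimeters (μm), polygon side counts;
    field_area_mm2: analysed field of view in mm²."""
    return {
        "MCD_cells_per_mm2": areas_um2.size / field_area_mm2,
        "MCA_um2": float(np.mean(areas_um2)),
        "MCP_um": float(np.mean(perimeters_um)),
        "polymegathism": float(np.std(areas_um2) / np.mean(areas_um2)),
        "pleomorphism_pct": 100.0 * float(np.mean(n_sides == 6)),
    }
```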