Search Results (1,289)

Search Parameters:
Keywords = image quality comparison

25 pages, 14250 KB  
Article
AI-Based 3D Modeling Strategies for Civil Infrastructure: Quantitative Assessment of NeRF and Photogrammetry
by Edison Atencio, Fabrizzio Duarte, Fidel Lozano-Galant, Rocio Porras and Ye Xia
Sensors 2026, 26(3), 852; https://doi.org/10.3390/s26030852 - 28 Jan 2026
Abstract
Three-dimensional (3D) modeling technologies are increasingly vital in civil engineering, providing precise digital representations of infrastructure for analysis, supervision, and planning. This study presents a comparative assessment of Neural Radiance Fields (NeRFs) and digital photogrammetry using a real-world case study involving a terrace at the Civil Engineering School of the Pontificia Universidad Católica de Valparaíso. The comparison is motivated by the operational complexity of image acquisition campaigns, where large image datasets increase flight time, fieldwork effort, and survey costs. Both techniques were evaluated across varying levels of data availability to analyze reconstruction behavior under progressively constrained image acquisition conditions, rather than to propose new algorithms. NeRF and photogrammetry were compared based on visual quality, point cloud density, geometric accuracy, and processing time. Results indicate that NeRF delivers fast, photorealistic outputs even with reduced image input, enabling efficient coverage with fewer images, while photogrammetry remains superior in metric accuracy and structural completeness. The study concludes by proposing an application-oriented evaluation framework and potential hybrid workflows to guide the selection of 3D modeling technologies based on specific engineering objectives, survey design constraints, and resource availability while also highlighting how AI-based reconstruction methods can support emerging digital workflows in infrastructure monitoring under variable or limited data conditions. Full article
(This article belongs to the Special Issue AI-Enabled Smart Sensors for Industry Monitoring and Fault Diagnosis)
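The abstract reports geometric accuracy and point cloud density for NeRF versus photogrammetry without spelling out the metric. Below is a minimal sketch of one common way to quantify geometric accuracy, cloud-to-cloud nearest-neighbour error against a reference cloud; the arrays, noise level, and statistics are hypothetical and do not reproduce the authors' pipeline.

```python
# Sketch: cloud-to-cloud accuracy comparison between a reconstructed point cloud
# (e.g., from NeRF or photogrammetry) and a reference cloud, as one way to
# quantify geometric accuracy. Point clouds are hypothetical numpy arrays.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(reconstructed: np.ndarray, reference: np.ndarray):
    """Nearest-neighbour distances from each reconstructed point to the reference cloud."""
    tree = cKDTree(reference)
    d, _ = tree.query(reconstructed, k=1)
    return {
        "mean_error_m": float(d.mean()),
        "rmse_m": float(np.sqrt((d ** 2).mean())),
        "p95_m": float(np.percentile(d, 95)),
        "n_points": len(reconstructed),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.uniform(0, 10, size=(50_000, 3))                        # stand-in for a surveyed reference
    reconstructed = reference[:30_000] + rng.normal(0, 0.02, (30_000, 3))   # noisy, sparser reconstruction
    print(cloud_to_cloud_stats(reconstructed, reference))
```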

19 pages, 1724 KB  
Article
Speech Impairment in Early Parkinson’s Disease Is Associated with Nigrostriatal Dopaminergic Dysfunction
by Sotirios Polychronis, Grigorios Nasios, Efthimios Dardiotis, Rayo Akande and Gennaro Pagano
J. Clin. Med. 2026, 15(3), 1006; https://doi.org/10.3390/jcm15031006 - 27 Jan 2026
Abstract
Background/Objectives: Speech difficulties are an early and disabling manifestation of Parkinson’s disease (PD), affecting communication and quality of life. This study aimed to examine demographic, clinical, dopaminergic imaging and cerebrospinal fluid (CSF) correlates of speech difficulties in early PD, comparing treatment-naïve and levodopa-treated patients. Methods: A cross-sectional analysis was conducted using data from the Parkinson’s Progression Markers Initiative (PPMI). The sample included 376 treatment-naïve and 133 levodopa-treated early PD participants. Speech difficulties were defined by Movement Disorder Society—Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) Part III, with Item 3.1 ≥ 1. Group comparisons and binary logistic regression identified predictors among demographic, clinical, dopaminergic and CSF biomarker variables, including [123I]FP-CIT specific binding ratios (SBRs). All analyses were cross-sectional, and findings reflect associative relationships rather than treatment effects or causal mechanisms. Results: Speech difficulties were present in 44% of treatment-naïve and 57% of levodopa-treated participants. In both cohorts, higher MDS-UPDRS Part III ON scores—reflecting greater motor severity—and lower mean putamen SBR values were significant independent predictors of speech impairment. Age was an additional predictor in the treatment-naïve group. No significant differences were found in CSF biomarkers (α-synuclein, amyloid-β, tau, phosphorylated tau). These findings indicate that striatal dopaminergic loss, particularly in the putamen, and motor dysfunction relate to early PD-related speech difficulties, whereas CSF neurodegeneration markers do not differentiate affected patients. Conclusions: Speech difficulties in early PD are primarily linked to dopaminergic and motor dysfunction rather than global neurodegenerative biomarker changes. Longitudinal and multimodal studies integrating acoustic, neuroimaging, and cognitive measures are warranted to elucidate the neural basis of speech decline and inform targeted interventions. Full article
(This article belongs to the Special Issue Innovations in Parkinson’s Disease)
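As a rough illustration of the kind of binary logistic regression described in the Methods (speech impairment against motor severity, putamen SBR, and age), the following sketch fits such a model on synthetic data with statsmodels; the column names and data are hypothetical stand-ins, not PPMI variables.

```python
# Sketch: binary logistic regression of the kind described in the abstract,
# using statsmodels. Data and variable names are hypothetical, not PPMI data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 376
df = pd.DataFrame({
    "updrs3_on": rng.normal(25, 10, n),      # MDS-UPDRS Part III ON score (assumed name)
    "putamen_sbr": rng.normal(0.9, 0.3, n),  # mean putamen [123I]FP-CIT SBR (assumed name)
    "age": rng.normal(62, 9, n),
})
# simulate an outcome loosely consistent with the reported direction of effects
logit_p = 0.08 * df["updrs3_on"] - 2.0 * df["putamen_sbr"] + 0.03 * df["age"] - 2.0
df["speech_impaired"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["updrs3_on", "putamen_sbr", "age"]])
model = sm.Logit(df["speech_impaired"], X).fit(disp=False)
print(model.summary())
print("Odds ratios:\n", np.exp(model.params))
```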

19 pages, 1413 KB  
Article
Interpreting Modulation Transfer Function in Endoscopic Imaging: Spatial-Frequency Conversion Across Imaging Spaces and the Digital Image Domain with Case Studies
by Quanzeng Wang
Sensors 2026, 26(3), 827; https://doi.org/10.3390/s26030827 - 27 Jan 2026
Abstract
Endoscopes are widely used in medicine, making objective evaluation of imaging performance essential for device development and quality assurance. Image resolution is commonly characterized by the modulation transfer function (MTF); however, its interpretation depends critically on how spatial frequency is defined and reported. Because spatial frequency is directly tied to sampling, it can be expressed in different units across the imaging chain, including the object plane, image sensor plane, and digital image domain. Inconsistent conversion between these spaces and domains can mislead comparisons and even alter the apparent ranking of regions of interest (ROIs) or imaging systems. This work presents a systematic analysis of spatial-frequency relationships along the endoscopic imaging chain and provides a practical conversion and interpretation workflow for MTF analysis. The framework accounts for sensor sampling, in-camera processing, resampling or scaling, and geometric distortion. Because geometric distortion introduces position-dependent sampling across the field of view, ROI-specific local-magnification measurements are incorporated to convert measured MTFs to a consistent object space spatial-frequency axis. Two case studies illustrate the implications. First, an off-axis ROI may appear to outperform the image center when MTF is expressed in digital image domain cycles per pixel, but this conclusion reverses after conversion to object space cycles per millimeter using local magnification. Second, resampled image outputs can yield inflated MTF curves unless scaling differences between formats are explicitly incorporated into the spatial-frequency axis. Overall, the proposed conversion and reporting workflow enables consistent and physically meaningful MTF comparison across devices, ROIs, and acquisition configurations when geometric distortion, sampling, or resampling differs, clarifying how optics, sensor characteristics, and image processing jointly determine reported MTF results. Full article
(This article belongs to the Section Biomedical Sensors)
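The core conversion the abstract describes, moving an MTF spatial-frequency axis from the digital image domain to the sensor plane and then to object space, can be sketched as below; the pixel pitch, ROI-specific local magnifications, and frequencies are hypothetical placeholders, not values from the paper.

```python
# Sketch: converting an MTF spatial-frequency axis between the digital image
# domain (cycles/pixel), the sensor plane (cycles/mm), and object space
# (cycles/mm). All numerical values below are hypothetical.

def digital_to_sensor(f_cyc_per_px: float, pixel_pitch_mm: float) -> float:
    """cycles/pixel on the native sensor grid -> cycles/mm at the sensor plane."""
    return f_cyc_per_px / pixel_pitch_mm

def sensor_to_object(f_sensor_cyc_per_mm: float, local_magnification: float) -> float:
    """cycles/mm at the sensor plane -> cycles/mm in object space.

    local_magnification = (feature size on the sensor) / (feature size on the object),
    measured per ROI because distortion makes it position-dependent.
    """
    return f_sensor_cyc_per_mm * local_magnification

if __name__ == "__main__":
    f_digital = 0.25           # quarter of Nyquist, in cycles/pixel (hypothetical)
    pixel_pitch_mm = 0.0014    # 1.4 um pixels (hypothetical)
    m_center, m_offaxis = 0.05, 0.03   # hypothetical ROI-specific local magnifications

    f_sensor = digital_to_sensor(f_digital, pixel_pitch_mm)
    print(f"sensor plane: {f_sensor:.1f} cy/mm")
    print(f"object space (center ROI):   {sensor_to_object(f_sensor, m_center):.1f} cy/mm")
    print(f"object space (off-axis ROI): {sensor_to_object(f_sensor, m_offaxis):.1f} cy/mm")
    # The same digital-domain frequency maps to different object-space frequencies
    # per ROI, which is why ROI rankings can flip after conversion, as the case study notes.
```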

23 pages, 5057 KB  
Article
DropSense: A Novel Imaging Software for the Analysis of Spray Parameters on Water-Sensitive Papers
by Ömer Barış Özlüoymak, Medet İtmeç and Alper Soysal
Appl. Sci. 2026, 16(3), 1197; https://doi.org/10.3390/app16031197 - 23 Jan 2026
Abstract
Measuring spray parameters and providing feedback on spraying quality are critical to ensuring that the spray material reaches the appropriate region. A novel software package entitled DropSense was developed to determine spray parameters more quickly and accurately than DepositScan, ImageJ 1.54d and Image-Pro 10. Water-sensitive papers (WSP) were used to determine spray parameters such as deposit coverage, total deposits counted, DV10, DV50, DV90, density, deposit area and relative span values. Upon execution of the developed software, these parameters were displayed on the computer screen and then saved in an Excel spreadsheet file at the end of the image analysis. A conveyor belt system with three different belt speeds (4, 5 and 6 km h−1) and four nozzle types (AI11002, TXR8002, XR11002, TTJ6011002) were used for carrying out the spray experiments. The novel software was developed in the LabVIEW programming language. The WSP image results for the aforementioned spray parameters were compared and statistically evaluated. The results showed that the DropSense software had superior speed and ease of use in comparison to the other software for the image analysis of WSPs, and mostly similar or more reliable performance compared to the existing software. The core technical innovation of DropSense lay in its integration of advanced morphological operations, which enabled the accurate separation and quantification of overlapping droplet stains on WSPs. In addition, it performed fully automated processing of WSP images and significantly reduced analysis time compared to commonly used WSP image analysis software. Full article
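As a rough illustration of the spray parameters listed above, the sketch below computes DV10/DV50/DV90, relative span, deposit density, and coverage from a set of droplet diameters; the diameters and paper dimensions are hypothetical, and this is not the DropSense implementation.

```python
# Sketch: computing the spray parameters named in the abstract (DV10/DV50/DV90,
# relative span, deposit density, coverage) from droplet diameters recovered
# from a water-sensitive paper image. All inputs are hypothetical.
import numpy as np

def volumetric_percentiles(diameters_um: np.ndarray):
    """Diameters below which 10/50/90 % of the total spray VOLUME is contained."""
    d = np.sort(diameters_um)
    vol = d ** 3                               # droplet volume is proportional to d^3
    cum = np.cumsum(vol) / vol.sum()
    dv10 = d[np.searchsorted(cum, 0.10)]
    dv50 = d[np.searchsorted(cum, 0.50)]
    dv90 = d[np.searchsorted(cum, 0.90)]
    return dv10, dv50, dv90

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    diam = rng.lognormal(mean=5.3, sigma=0.4, size=800)    # droplet diameters in um (synthetic)
    dv10, dv50, dv90 = volumetric_percentiles(diam)
    span = (dv90 - dv10) / dv50                            # relative span
    paper_cm2 = 26 * 76 / 100.0                            # assumed 26 x 76 mm WSP, in cm^2
    stain_areas = np.pi * (diam / 2 / 1e4) ** 2            # stain areas in cm^2
    print(f"DV10={dv10:.0f} um, DV50={dv50:.0f} um, DV90={dv90:.0f} um, span={span:.2f}")
    print(f"deposit density = {len(diam) / paper_cm2:.1f} droplets/cm^2, "
          f"coverage = {100 * stain_areas.sum() / paper_cm2:.1f} %")
```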

14 pages, 2173 KB  
Article
Exploring the Role of Skull Base Anatomy in Surgical Approach Selection and Endocrinological Outcomes in Craniopharyngiomas
by Alessandro Tozzi, Giorgio Fiore, Elisa Sala, Giulio Andrea Bertani, Stefano Borsa, Ilaria Carnicelli, Emanuele Ferrante, Giulia Platania, Giovanna Mantovani and Marco Locatelli
J. Clin. Med. 2026, 15(2), 896; https://doi.org/10.3390/jcm15020896 - 22 Jan 2026
Abstract
Background/Objectives: Craniopharyngiomas (CPs) are rare, generally benign tumors predominantly located in the sellar and suprasellar regions, associated with significant morbidity and complex surgical management. Despite high overall survival rates, patients frequently experience complications including visual impairment, pituitary dysfunction, diabetes insipidus (DI), and hypothalamic syndrome. Among these, hypothalamic obesity (HO) represents one of the most clinically challenging sequelae, often occurring early, lacking standardized medical treatment, and leading to substantial comorbidity and reduced quality of life. This study reports a single-center experience focusing on the relationship between skull base anatomy, surgical approach selection, and endocrinological outcomes. Methods: A retrospective analysis was conducted on patients diagnosed with CPs who underwent surgery by a dedicated team at our Department from January 2014 to January 2024. The approaches used were endoscopic (ER) and transcranial (TR). Preoperative imaging (volumetric MRI and CT scans) was analyzed using 3DSlicer (open-source software) for anatomical modeling of the tumor and skull base. Clinical outcomes were evaluated through follow-up assessments by a team of neuroendocrinologists. Data on BMI changes, DI onset, and hypopituitarism were collected. Statistical analyses consisted of descriptive comparisons and exploratory regression models. Results: Of 18 patients reviewed, 14 met the inclusion criteria. Larger sphenoid sinus volumes were associated with selection of an endoscopic endonasal approach (p = 0.0351; AUC = 0.875). In ER cases, the osteotomy area was directly related to tumor volume, independent of other anatomical parameters. Postoperatively, a significant increase in BMI (22.39 vs. 26.65 kg/m2; p = 0.0049) and in the incidence of DI (three vs. nine cases; p-value 0.0272) was observed. No clear differential association between surgical approach and endocrinological outcomes emerged in this cohort. Conclusions: Quantitative assessment of skull base anatomy using 3D modeling may support surgical approach selection in patients with craniopharyngiomas, particularly in identifying anatomical settings favorable to endoscopic endonasal surgery. Endocrinological outcomes appeared more closely related to tumor characteristics and hypothalamic involvement than to the surgical route itself. These findings support the role of individualized, anatomy-informed surgical planning within a multidisciplinary framework. Full article
(This article belongs to the Section Endocrinology & Metabolism)
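For readers unfamiliar with the reported discrimination statistic (AUC = 0.875 for sphenoid sinus volume vs. approach selection), the sketch below reproduces that kind of analysis on hypothetical data with scikit-learn; the volumes, group sizes, and the Mann-Whitney comparison are illustrative assumptions, not the authors' analysis code.

```python
# Sketch: discrimination of surgical approach by sphenoid sinus volume,
# using ROC AUC on hypothetical data; not the authors' analysis.
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
# hypothetical sphenoid sinus volumes (cm^3): larger in endoscopic (ER) cases
vol_endoscopic = rng.normal(12.0, 3.0, 8)
vol_transcranial = rng.normal(8.0, 3.0, 6)
volumes = np.concatenate([vol_endoscopic, vol_transcranial])
approach = np.concatenate([np.ones(8), np.zeros(6)])   # 1 = endoscopic, 0 = transcranial

auc = roc_auc_score(approach, volumes)
u, p = mannwhitneyu(vol_endoscopic, vol_transcranial, alternative="two-sided")
print(f"AUC = {auc:.3f}, Mann-Whitney p = {p:.4f}")
```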

19 pages, 10545 KB  
Article
Comparative Analysis of Deep Learning Architectures for Automatic Tooth Segmentation in Panoramic Dental Radiographs: Balancing Accuracy and Computational Efficiency
by Alperen Yalım, Emre Aytugar, Fahrettin Kalabalık and İsmail Akdağ
Diagnostics 2026, 16(2), 336; https://doi.org/10.3390/diagnostics16020336 - 20 Jan 2026
Abstract
Background/Objectives: This study provides a systematic benchmark of U-Net–based deep learning models for automatic tooth segmentation in panoramic dental radiographs, with a specific focus on how segmentation accuracy changes as computational cost increases across different encoder backbones. Methods: U-Net models with ResNet, EfficientNet, DenseNet, and MobileNetV3-Small encoder families pretrained on ImageNet were evaluated on the publicly available Tufts Dental Database (1000 panoramic radiographs) using a five-fold cross-validation strategy. Segmentation performance was quantified using the Dice coefficient and Intersection over Union (IoU), while computational efficiency was characterized by parameter count and floating-point operations reported as GFLOPs per image. Statistical comparisons were conducted using the Friedman test followed by Nemenyi-corrected post hoc analyses (p < 0.05). Results: The overall segmentation quality was consistently high, clustering within a narrow range (Dice: 0.9168–0.9259), suggesting diminishing returns as backbone complexity increases. EfficientNet-B7 achieved the highest nominal accuracy (Dice: 0.9259 ± 0.0007; IoU: 0.8621 ± 0.0013); however, the differences in Dice score between EfficientNet-B0, B4 and B7 were not statistically significant (p > 0.05). In contrast, computational demands varied substantially (2.9–67.2 million parameters; 4.93–40.8 GFLOPs). EfficientNet-B0 provided an accurate and efficient operating point (Dice: 0.9244 ± 0.0011) at low computational cost (5.98 GFLOPs), whereas MobileNetV3-Small offered the lowest computational cost (4.93 GFLOPs; 2.9 million parameters) but also the lowest Dice score (0.9168 ± 0.0031). Compared with heavier ResNet and DenseNet variants, EfficientNet-B0 achieved competitive accuracy with a markedly lower computational footprint. Conclusions: The findings show that larger models do not always perform better and that increases in model capacity do not necessarily yield meaningful accuracy gains. These findings are limited to the task of tooth segmentation; different conclusions may hold for other tasks. Among the models evaluated, EfficientNet-B0 stands out as the most practical option, maintaining near-saturated accuracy while keeping model size and computational cost low. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
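The two segmentation scores used throughout the benchmark, the Dice coefficient and IoU, can be computed from binary masks as in the sketch below; the masks are hypothetical and the snippet is not part of the study's code.

```python
# Sketch: Dice coefficient and IoU for binary segmentation masks, with NumPy.
# Masks are synthetic; note that Dice = 2*IoU / (1 + IoU) for binary masks.
import numpy as np

def dice_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    target = rng.random((512, 1024)) > 0.7        # hypothetical ground-truth tooth mask
    pred = target.copy()
    flip = rng.random(target.shape) < 0.02        # corrupt 2 % of pixels to simulate errors
    pred[flip] = ~pred[flip]
    d, i = dice_iou(pred, target)
    print(f"Dice = {d:.4f}, IoU = {i:.4f}")
```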

15 pages, 1045 KB  
Systematic Review
AI at the Bedside of Psychiatry: Comparative Meta-Analysis of Imaging vs. Non-Imaging Models for Bipolar vs. Unipolar Depression
by Andrei Daescu, Ana-Maria Cristina Daescu, Alexandru-Ioan Gaitoane, Ștefan Maxim, Silviu Alexandru Pera and Liana Dehelean
J. Clin. Med. 2026, 15(2), 834; https://doi.org/10.3390/jcm15020834 - 20 Jan 2026
Abstract
Background: Differentiating bipolar disorder (BD) from unipolar major depressive disorder (MDD) at first episode is clinically consequential but challenging. Artificial intelligence/machine learning (AI/ML) may improve early diagnostic accuracy across imaging and non-imaging data sources. Methods: Following PRISMA 2020 and a pre-registered protocol on protocols.io, we searched PubMed, Scopus, Europe PMC, Semantic Scholar, OpenAlex, The Lens, medRxiv, ClinicalTrials.gov, and Web of Science (2014–8 October 2025). Eligible studies developed/evaluated supervised ML classifiers for BD vs. MDD at first episode and reported test-set discrimination. AUCs were meta-analyzed on the logit (GEN) scale using random effects (REML) with Hartung–Knapp adjustment and then back-transformed. Subgroup (imaging vs. non-imaging), leave-one-out (LOO), and quality sensitivity (excluding high risk of leakage) analyses were prespecified. Risk of bias was assessed using QUADAS-2 with PROBAST/AI considerations. Results: Of 158 records, 39 duplicates were removed and 119 records screened; 17 met qualitative criteria; and 6 had sufficient data for meta-analysis. The pooled random-effects AUC was 0.84 (95% CI 0.75–0.90), indicating above-chance discrimination, with substantial heterogeneity (I² = 86.5%). Results were robust to LOO, exclusion of two high-risk-of-leakage studies (pooled AUC 0.83, 95% CI 0.72–0.90), and restriction to higher-rigor validation (AUC 0.83, 95% CI 0.69–0.92). Non-imaging models showed higher point estimates than imaging models (pooled AUC ≈ 0.90–0.92 with I² = 0% vs. 0.79 with I² = 64%; test for subgroup difference Q = 7.27, df = 1, p = 0.007); however, subgroup comparisons were exploratory due to the small number of studies. Funnel plot inspection and Egger/Begg tests were performed, but small-study effects/publication bias could not be reliably assessed given the small number of studies. Conclusions: AI/ML models provide good and robust discrimination of BD vs. MDD at first episode. Non-imaging approaches are promising given their higher point estimates in the available studies and practical scalability, but prospective evaluation is needed and conclusions about modality superiority remain tentative given the small number of non-imaging studies (k = 2). Full article
(This article belongs to the Special Issue How Clinicians See the Use of AI in Psychiatry)
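A minimal sketch of random-effects pooling of AUCs on the logit scale is shown below. For simplicity it uses DerSimonian-Laird heterogeneity estimation and a normal-approximation interval rather than the REML with Hartung-Knapp adjustment reported by the authors, and the study AUCs and standard errors are hypothetical.

```python
# Sketch: random-effects pooling of study AUCs on the logit scale, then
# back-transformation. Uses DerSimonian-Laird tau^2 (a simpler stand-in for
# REML + Hartung-Knapp); all study inputs are hypothetical.
import numpy as np

def pool_logit_auc(aucs, ses_logit):
    """aucs: study AUCs; ses_logit: standard errors already on the logit scale."""
    y = np.log(np.array(aucs) / (1 - np.array(aucs)))    # logit transform
    v = np.array(ses_logit) ** 2
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                   # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                        # DerSimonian-Laird between-study variance
    w_re = 1 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    back = lambda x: 1 / (1 + np.exp(-x))                # inverse logit
    return back(y_re), back(y_re - 1.96 * se_re), back(y_re + 1.96 * se_re), i2

if __name__ == "__main__":
    aucs = [0.78, 0.83, 0.91, 0.72, 0.88, 0.90]          # six hypothetical studies
    ses = [0.25, 0.30, 0.35, 0.20, 0.28, 0.40]           # hypothetical logit-scale SEs
    pooled, lo, hi, i2 = pool_logit_auc(aucs, ses)
    print(f"pooled AUC = {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.1f}%")
```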

20 pages, 8055 KB  
Article
Research on an Underwater Visual Enhancement Method Based on Adaptive Parameter Optimization in a Multi-Operator Framework
by Zhiyong Yang, Shengze Yang, Yuxuan Fu and Hao Jiang
Sensors 2026, 26(2), 668; https://doi.org/10.3390/s26020668 - 19 Jan 2026
Abstract
Underwater images often suffer from luminance attenuation, structural degradation, and color distortion due to light absorption and scattering in water. The variations in illumination and color distribution across different water bodies further increase the uncertainty of these degradations, making traditional enhancement methods that rely on fixed parameters, such as underwater dark channel prior (UDCP) and histogram equalization (HE), unstable in such scenarios. To address these challenges, this paper proposes a multi-operator underwater image enhancement framework with adaptive parameter optimization. To achieve luminance compensation, structural detail enhancement, and color restoration, a collaborative enhancement pipeline was constructed using contrast-limited adaptive histogram equalization (CLAHE) with highlight protection, texture-gated and threshold-constrained unsharp masking (USM), and mild saturation compensation. Building upon this pipeline, an adaptive multi-operator parameter optimization strategy was developed, where a unified scoring function jointly considers feature gains, geometric consistency of feature matches, image quality metrics, and latency constraints to dynamically adjust the CLAHE clip limit, USM gain, and Gaussian scale under varying water conditions. Subjective visual comparisons and quantitative experiments were conducted on several public underwater datasets. Compared with conventional enhancement methods, the proposed approach achieved superior structural clarity and natural color appearance on the EUVP and UIEB datasets, and obtained higher quality metrics on the RUIE dataset (Average Gradient (AG) = 0.5922, Underwater Image Quality Measure (UIQM) = 2.095). On the UVE38K dataset, the proposed adaptive optimization method improved the oriented FAST and rotated BRIEF (ORB) feature counts by 12.5%, inlier matches by 9.3%, and UIQM by 3.9% over the fixed-parameter baseline, while the adjacent-frame matching visualization and stability metrics such as inlier ratio further verified the geometric consistency and temporal stability of the enhanced features. Full article
(This article belongs to the Section Sensing and Imaging)
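The enhancement pipeline described above (CLAHE for luminance, unsharp masking for detail, mild saturation compensation) can be sketched in a fixed-parameter form with OpenCV as below. The clip limit, USM gain, Gaussian scale, and file names are hypothetical stand-ins for the values the paper tunes adaptively, and the highlight-protection and texture-gating steps are omitted.

```python
# Sketch: a fixed-parameter simplification of the CLAHE + unsharp masking +
# saturation compensation pipeline described in the abstract. Parameter values
# and file names are hypothetical placeholders.
import cv2
import numpy as np

def enhance_underwater(bgr: np.ndarray,
                       clip_limit: float = 2.5,
                       usm_gain: float = 0.8,
                       gauss_sigma: float = 2.0,
                       sat_gain: float = 1.1) -> np.ndarray:
    # 1) luminance compensation: CLAHE on the L channel of Lab
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    l = clahe.apply(l)
    out = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)

    # 2) structural detail: unsharp masking, original + gain * (original - blurred)
    blur = cv2.GaussianBlur(out, (0, 0), gauss_sigma)
    out = cv2.addWeighted(out, 1 + usm_gain, blur, -usm_gain, 0)

    # 3) mild saturation compensation in HSV
    hsv = cv2.cvtColor(out, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

if __name__ == "__main__":
    img = cv2.imread("underwater_frame.png")   # hypothetical input image
    if img is not None:
        cv2.imwrite("enhanced.png", enhance_underwater(img))
```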

31 pages, 4972 KB  
Article
Minutiae-Free Fingerprint Recognition via Vision Transformers: An Explainable Approach
by Bilgehan Arslan
Appl. Sci. 2026, 16(2), 1009; https://doi.org/10.3390/app16021009 - 19 Jan 2026
Abstract
Fingerprint recognition systems have relied on fragile workflows based on minutiae extraction, which suffer from significant performance losses under real-world conditions such as sensor diversity and low image quality. This study introduces a fully minutiae-free fingerprint recognition framework based on self-supervised Vision Transformers. A systematic evaluation of multiple DINOv2 model variants is conducted, and the proposed system ultimately adopts the DINOv2-Base Vision Transformer as the primary configuration, as it offers the best generalization performance trade-off under conditions of limited fingerprint data. Larger variants are additionally analyzed to assess scalability and capacity limits. The DINOv2 pretrained network is fine-tuned using self-supervised domain adaptation on 64,801 fingerprint images, eliminating all classical enhancement, binarization, and minutiae extraction steps. Unlike the single-sensor protocols commonly used in the literature, the proposed approach is extensively evaluated in a heterogeneous testbed with a wide range of sensors, qualities, and acquisition methods, including 1631 unique fingers from 12 datasets. The achieved EER of 5.56% under these challenging conditions demonstrates clear cross-sensor superiority over traditional systems such as VeriFinger (26.90%) and SourceAFIS (41.95%) on the same testbed. A systematic comparison of different model capacities shows that moderate-scale ViT models provide optimal generalization under limited-data conditions. Explainability analyses indicate that the attention maps of the model trained without any minutiae information exhibit meaningful overlap with classical structural regions (IoU = 0.41 ± 0.07). Openly sharing the full implementation and evaluation infrastructure makes the study reproducible and provides a standardized benchmark for future research. Full article
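The headline metric, the equal error rate (EER), can be computed from genuine and impostor match-score distributions as in the sketch below; the synthetic scores are not the paper's evaluation data.

```python
# Sketch: equal error rate (EER) from genuine and impostor similarity scores.
# The score distributions are synthetic, not the paper's evaluation harness.
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray, n_thresholds: int = 1001) -> float:
    """EER: the operating point where false accept and false reject rates are equal."""
    scores = np.concatenate([genuine, impostor])
    thresholds = np.linspace(scores.min(), scores.max(), n_thresholds)
    frr = np.array([(genuine < t).mean() for t in thresholds])     # false reject rate
    far = np.array([(impostor >= t).mean() for t in thresholds])   # false accept rate
    idx = np.argmin(np.abs(far - frr))
    return float((far[idx] + frr[idx]) / 2)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    genuine = rng.normal(0.75, 0.12, 5_000)     # similarity scores for matching pairs
    impostor = rng.normal(0.45, 0.12, 50_000)   # similarity scores for non-matching pairs
    print(f"EER = {100 * equal_error_rate(genuine, impostor):.2f} %")
```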

34 pages, 2594 KB  
Article
Variational Deep Alliance: A Generative Auto-Encoding Approach to Longitudinal Data Analysis
by Shan Feng, Wenxian Xie and Yufeng Nie
Entropy 2026, 28(1), 113; https://doi.org/10.3390/e28010113 - 18 Jan 2026
Abstract
Rapid advancements in the field of deep learning have had a profound impact on a wide range of scientific studies. This paper harnesses the power of deep neural networks to learn complex relationships in longitudinal data. A novel generative approach, the Variational Deep Alliance (VaDA), is established, in which an “alliance” is formed across repeated measurements through the strength of the Variational Auto-Encoder. VaDA models the generating process of longitudinal data with a unified and well-structured latent space, allowing outcome prediction, subject clustering and representation learning simultaneously. The integrated model can be inferred efficiently within a stochastic Auto-Encoding Variational Bayes framework, which is scalable to large datasets and can accommodate variables of mixed type. Quantitative comparisons with baseline methods are considered, and VaDA shows high robustness and generalization capability across various synthetic scenarios. Moreover, a longitudinal study based on the well-known CelebFaces Attributes dataset is carried out, where we show its usefulness in detecting meaningful latent clusters and generating high-quality face images. Full article
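For reference, the standard evidence lower bound that Auto-Encoding Variational Bayes maximizes is reproduced below in generic notation; this is the textbook VAE objective, not the paper's specific VaDA formulation.

```latex
% Standard Variational Auto-Encoder evidence lower bound (ELBO); generic
% notation, not the paper's VaDA objective.
\mathcal{L}(\theta,\phi;\mathbf{x})
  = \mathbb{E}_{q_\phi(\mathbf{z}\mid\mathbf{x})}\!\left[\log p_\theta(\mathbf{x}\mid\mathbf{z})\right]
  - \mathrm{KL}\!\left(q_\phi(\mathbf{z}\mid\mathbf{x})\,\|\,p(\mathbf{z})\right)
  \;\le\; \log p_\theta(\mathbf{x})
```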

26 pages, 1167 KB  
Review
A Review of Multimodal Sentiment Analysis in Online Public Opinion Monitoring
by Shuxian Liu and Tianyi Li
Informatics 2026, 13(1), 10; https://doi.org/10.3390/informatics13010010 - 14 Jan 2026
Abstract
With the rapid development of the Internet, online public opinion monitoring has emerged as a crucial task in the information era. Multimodal sentiment analysis, through the integration of multiple modalities such as text, images, and audio, combined with technologies including natural language processing and computer vision, offers novel technical means for online public opinion monitoring. Nevertheless, current research still faces many challenges, such as the scarcity of high-quality datasets, limited model generalization ability, and difficulties with cross-modal feature fusion. This paper reviews the current research progress of multimodal sentiment analysis in online public opinion monitoring, including its development history, key technologies, and application scenarios. Existing problems are analyzed and future research directions are discussed. In particular, we emphasize a fusion-architecture-centric comparison under online public opinion monitoring, and discuss cross-lingual differences that affect multimodal alignment and evaluation. Full article

13 pages, 4563 KB  
Article
Balancing Radiation Dose and Image Quality: Protocol Optimization for Mobile Head CT in Neurointensive Care Unit Patients
by Damian Mialkowskyj, Robert Stahl, Suzette Heck, Konstantinos Dimitriadis, Thomas David Fischer, Thomas Liebig, Christoph G. Trumm, Tim Wesemann and Robert Forbrig
Diagnostics 2026, 16(2), 256; https://doi.org/10.3390/diagnostics16020256 - 13 Jan 2026
Abstract
Objective: Mobile head CT enables bedside neuroimaging in critically ill patients, reducing risks associated with intrahospital transport. Despite increasing clinical use, evidence on dose optimization for mobile CT systems remains limited. This study evaluated whether an optimized CT protocol can reduce radiation exposure without compromising diagnostic image quality in neurointensive care unit patients. Methods: In this retrospective single-center study, twenty-two non-contrast head CT examinations were acquired with a second-generation mobile CT scanner between March and May 2023. Patients underwent either a default (group A, n = 14; volumetric computed tomography dose index (CTDIvol) 44.1 mGy) or low-dose CT protocol (group B, n = 8; CTDIvol 32.1 mGy). Regarding dosimetry analysis, we recorded dose length product (DLP) and effective dose (ED). Quantitative image quality was assessed by manually placing ROIs at the basal ganglia and cerebellar levels to determine signal, noise, signal-to-noise ratio, and contrast-to-noise ratio. Two neuroradiologists independently rated qualitative image quality using a four-point Likert scale. Statistical comparisons were performed using a significance threshold of 0.05. Results: Median DLP and ED were significantly lower for group B (592 mGy·cm, 1.12 mSv) than for group A (826 mGy·cm, 1.57 mSv; each p < 0.0001). Quantitative image quality parameters did not differ significantly between groups (p > 0.05). Qualitative image quality was rated excellent (median score 4). Conclusions: The optimized mobile head CT protocol achieved a 28.7% reduction in radiation exposure while maintaining high diagnostic image quality. These findings support the adoption of low-dose strategies in mobile CT imaging in line with established radiation protection standards. Full article
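Two of the quantities in the abstract can be illustrated directly: the relative dose reduction follows from the reported effective doses, and SNR/CNR are ROI statistics. The sketch below uses the reported doses plus hypothetical ROI values and one common SNR/CNR definition, which may differ in detail from the authors' formulas.

```python
# Sketch: (i) relative dose reduction from the reported median effective doses,
# and (ii) ROI-based SNR/CNR using one common definition. ROI values are
# hypothetical Hounsfield-unit statistics.
import numpy as np

# (i) dose reduction from the reported median effective doses (1.57 vs 1.12 mSv)
ed_default, ed_lowdose = 1.57, 1.12
print(f"effective-dose reduction: {100 * (ed_default - ed_lowdose) / ed_default:.1f} %")

# (ii) signal-to-noise and contrast-to-noise ratios from ROI means and SDs
def snr(mean_hu: float, sd_hu: float) -> float:
    return mean_hu / sd_hu

def cnr(mean_a: float, mean_b: float, noise_sd: float) -> float:
    return abs(mean_a - mean_b) / noise_sd

grey_matter = np.array([38.0, 4.0])    # hypothetical ROI mean HU, SD
white_matter = np.array([30.0, 4.2])   # hypothetical ROI mean HU, SD
print(f"SNR(grey) = {snr(*grey_matter):.1f}, "
      f"CNR(grey vs white) = {cnr(grey_matter[0], white_matter[0], white_matter[1]):.1f}")
```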

16 pages, 8228 KB  
Article
A Detection Method for Seeding Temperature in Czochralski Silicon Crystal Growth Based on Multi-Sensor Data Fusion
by Lei Jiang, Tongda Chang and Ding Liu
Sensors 2026, 26(2), 516; https://doi.org/10.3390/s26020516 - 13 Jan 2026
Abstract
The Czochralski method is the dominant technique for producing power-electronics-grade silicon crystals. At the beginning of the seeding stage, an excessively high (or low) temperature at the solid–liquid interface can cause the time required for the seed to reach the specified length to be too long (or too short). However, this time is strictly controlled in semiconductor crystal growth to ensure that the initial temperature is appropriate, because an inappropriate initial temperature can adversely affect crystal quality and production yield. Accurately evaluating whether the current temperature is appropriate for seeding is therefore essential. However, the temperature at the solid–liquid interface cannot be directly measured, and the current manual evaluation method relies mainly on visual inspection of the meniscus. Previous methods for detecting this temperature only classified image features and lacked a quantitative assessment of the temperature. To address this challenge, this study proposed using the duration of the seeding stage as the target variable for evaluating the temperature and developed an improved multimodal fusion regression network. Temperature signals collected from a central pyrometer and an auxiliary pyrometer were transformed into time–frequency representations via wavelet transform. Features extracted from the time–frequency diagrams, together with meniscus features, were fused through a two-level mechanism with multimodal feature fusion (MFF) and channel attention (CA), followed by masking using spatial attention (SA). The fused features were then input into a random vector functional link network (RVFLN) to predict the seeding duration, thereby establishing an indirect relationship between multi-sensor data and the seeding temperature and achieving a quantification of a temperature that cannot be directly measured. Transfer comparison experiments conducted on our dataset verified the effectiveness of the feature extraction strategy and demonstrated the superior detection performance of the proposed model. Full article
(This article belongs to the Section Physical Sensors)
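The regressor named in the abstract, a random vector functional link network, can be sketched as a random hidden expansion plus direct input links with closed-form ridge least-squares output weights, as below; the fused features and targets are synthetic, and the wavelet/attention fusion front end is not reproduced.

```python
# Sketch: a random vector functional link network (RVFLN) regressor of the kind
# used in the abstract, fitted with closed-form ridge least squares. Inputs and
# targets are synthetic stand-ins for the fused features and seeding durations.
import numpy as np

class RVFLN:
    def __init__(self, n_hidden: int = 128, ridge: float = 1e-3, seed: int = 0):
        self.n_hidden, self.ridge, self.seed = n_hidden, ridge, seed

    def _features(self, X: np.ndarray) -> np.ndarray:
        h = np.tanh(X @ self.W + self.b)                   # random, untrained hidden layer
        return np.hstack([X, h, np.ones((len(X), 1))])     # direct links + hidden + bias

    def fit(self, X: np.ndarray, y: np.ndarray) -> "RVFLN":
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(0, 1, (X.shape[1], self.n_hidden))
        self.b = rng.normal(0, 1, self.n_hidden)
        H = self._features(X)
        # ridge-regularised least squares for the output weights
        self.beta = np.linalg.solve(H.T @ H + self.ridge * np.eye(H.shape[1]), H.T @ y)
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        return self._features(X) @ self.beta

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    X = rng.normal(size=(300, 16))                              # stand-in fused features
    y = X[:, 0] * 3 - X[:, 1] ** 2 + rng.normal(0, 0.1, 300)    # stand-in seeding duration
    model = RVFLN().fit(X[:250], y[:250])
    err = np.abs(model.predict(X[250:]) - y[250:]).mean()
    print(f"mean absolute error on held-out samples: {err:.3f}")
```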

36 pages, 27311 KB  
Article
Multi-Threshold Image Segmentation Based on the Hybrid Strategy Improved Dingo Optimization Algorithm
by Qianqian Zhu, Min Gong, Yijie Wang and Zhengxing Yang
Biomimetics 2026, 11(1), 52; https://doi.org/10.3390/biomimetics11010052 - 8 Jan 2026
Abstract
This study proposes a Hybrid Strategy Improved Dingo Optimization Algorithm (HSIDOA), designed to address the limitations of the standard DOA in complex optimization tasks, including its tendency to fall into local optima, slow convergence speed, and inefficient boundary search. The HSIDOA integrates a quadratic interpolation search strategy, a horizontal crossover search strategy, and a centroid-based opposition learning boundary-handling mechanism. By enhancing local exploitation, global exploration, and out-of-bounds correction, the algorithm forms an optimization framework that excels in convergence accuracy, speed, and stability. On the CEC2017 (30-dimensional) and CEC2022 (10/20-dimensional) benchmark suites, the HSIDOA achieves significantly superior performance in terms of average fitness, standard deviation, convergence rate, and Friedman test rankings, outperforming seven mainstream algorithms including MLPSO, MELGWO, MHWOA, ALA, HO, RIME, and DOA. The results demonstrate strong robustness and scalability across different dimensional settings. Furthermore, HSIDOA is applied to multi-level threshold image segmentation, where Otsu’s maximum between-class variance is used as the objective function, and PSNR, SSIM, and FSIM serve as evaluation metrics. Experimental results show that HSIDOA consistently achieves the best segmentation quality across four threshold levels (4, 6, 8, and 10 levels). Its convergence curves exhibit rapid decline and early stabilization, with stability surpassing all comparison algorithms. In summary, HSIDOA delivers comprehensive improvements in global exploration capability, local exploitation precision, convergence speed, and high-dimensional robustness. It provides an efficient, stable, and versatile optimization method suitable for both complex numerical optimization and image segmentation tasks. Full article
(This article belongs to the Special Issue Bio-Inspired Machine Learning and Evolutionary Computing)
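The segmentation objective named above, Otsu's between-class variance generalized to multiple thresholds, can be evaluated for a candidate threshold vector as in the sketch below. A metaheuristic such as HSIDOA would maximize this quantity over threshold vectors; the search itself and the test images are not reproduced here, and the histogram is synthetic.

```python
# Sketch: Otsu's between-class variance for a candidate set of thresholds,
# i.e., the objective a multi-level thresholding optimizer would maximize.
# The image/histogram and the example thresholds are hypothetical.
import numpy as np

def between_class_variance(hist: np.ndarray, thresholds) -> float:
    """hist: 256-bin grayscale histogram; thresholds: sorted levels in (0, 255)."""
    p = hist / hist.sum()                        # grey-level probabilities
    levels = np.arange(256)
    mu_total = (p * levels).sum()
    edges = [0, *sorted(int(t) for t in thresholds), 256]
    var_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):    # one class per threshold interval
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var_b += w * (mu - mu_total) ** 2
    return var_b

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # hypothetical image: mixture of three grey-level populations
    img = np.concatenate([rng.normal(60, 10, 40_000),
                          rng.normal(130, 12, 40_000),
                          rng.normal(200, 8, 20_000)]).clip(0, 255).astype(np.uint8)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    print("sigma_B^2 at thresholds (100, 170):", round(between_class_variance(hist, (100, 170)), 1))
    print("sigma_B^2 at thresholds (40, 220): ", round(between_class_variance(hist, (40, 220)), 1))
```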

14 pages, 588 KB  
Systematic Review
Application of Transthoracic and Endobronchial Elastography—A Systematic Review
by Christian Kildegaard, Rune W. Nielsen, Christian B. Laursen, Ariella Denize Nielsen, Amanda D. Juul, Tai Joon An, Dinesh Addala and Casper Falster
Cancers 2026, 18(2), 190; https://doi.org/10.3390/cancers18020190 - 7 Jan 2026
Abstract
Introduction: Ultrasound elastography is increasingly used across medical imaging, yet its role in thoracic disease remains poorly defined. While both transthoracic ultrasonography (TUS) and endobronchial ultrasound (EBUS) offer real-time assessment of pleural and pulmonary structures, the diagnostic and clinical value of elastography in this context remains uncertain. Materials and Methods: A systematic search of MEDLINE, EMBASE, and the Cochrane Library was conducted according to PRISMA guidelines (April 2023; updated January 2025). Original studies evaluating transthoracic or endobronchial elastography for pleural or pulmonary conditions were included. Data extraction and quality assessment were performed independently by three reviewers, with QUADAS-2 used to evaluate risk of bias. Results: Thirty studies met the inclusion criteria; twenty-eight evaluated TUS elastography and two examined EBUS. Shear wave elastography was most frequently applied, particularly for differentiating malignant from benign pleural effusion or subpleural lesions. Surface wave elastography demonstrated consistently higher stiffness values in patients with interstitial lung disease compared with healthy controls, correlating with radiological and functional disease severity. Elastography-guided pleural biopsy improved diagnostic yield compared with conventional ultrasound-guided biopsy. Overall, substantial methodological variation existed among scanning techniques, elastography modalities, reporting methods, and diagnostic thresholds, limiting cross-study comparison. Conclusions: Ultrasound elastography shows promise for evaluating pleural effusion and pulmonary lesions, for procedural guidance, and for assessing interstitial lung disease, potentially improving bedside diagnostic work-up and reducing patient exposure to radiation. However, methodological variation and limited high-quality evidence preclude clinical implementation. Standardized acquisition protocols and multicentre validation studies are necessary to define its diagnostic utility in thoracic imaging. Full article
(This article belongs to the Special Issue Application of Ultrasound in Cancer Diagnosis and Treatment)