
Search Results (2,712)

Search Parameters:
Keywords = digital feature model

36 pages, 5884 KB  
Article
Fusing Multi-Source Web Data with an ABC-CNN-GRU-Attention Model for Enhanced Urban Passenger Flow Prediction
by Enqi Luo, Guorui Rao, Shutian Tang, Youxi Luo and Hanfang Li
Appl. Sci. 2026, 16(8), 3730; https://doi.org/10.3390/app16083730 - 10 Apr 2026
Abstract
Against the backdrop of smart cities and digital cultural tourism, the accurate prediction of urban passenger flow is of great significance for public security management and resource allocation. However, existing studies mostly rely on single data sources or perform only a simple concatenation of multi-source features, lacking a systematic indicator-system design. Meanwhile, weekly or monthly data with coarse temporal granularity are commonly used, making it difficult to capture short-term fluctuations and lag effects. To overcome these limitations, this paper collects the daily passenger flow data of Hangzhou from 15 March 2024 to 15 March 2025; integrates multi-dimensional factors such as cross-platform keyword search trends, holidays and major events, and online public opinion; and constructs three daily characteristic indicators: an online search index, a humanistic–meteorological index, and a textual sentiment index. Data denoising, dimensionality reduction, and sentiment quantification are carried out with methods including SSA, PCA, and SnowNLP. On this basis, a hybrid CNN-GRU model integrated with an attention mechanism is proposed. An improved artificial bee colony (ABC) algorithm is adopted for global hyperparameter optimization, and a weighted hybrid loss function (JQHL) is introduced to enhance the model's adaptability to extreme values. The results show that the ABC-CNN-GRU-Attention model, incorporating multi-dimensional indicators, outperforms traditional methods on evaluation metrics including MAE, RMSE, MAPE, R2, and RPD, demonstrating higher prediction accuracy and robustness.
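
A minimal sketch of the hybrid architecture the abstract names, assuming PyTorch and illustrative layer sizes; the paper's ABC hyperparameter search and JQHL loss are not shown:

```python
import torch
import torch.nn as nn

class CNNGRUAttention(nn.Module):
    """Minimal CNN-GRU-attention hybrid for daily passenger-flow sequences.
    Layer sizes are illustrative, not the paper's tuned values."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # 1-D convolution extracts local temporal patterns from the indicators
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # GRU models longer-range dependencies across days
        self.gru = nn.GRU(32, hidden, batch_first=True)
        # additive attention scores each timestep's hidden state
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, seq, 32)
        out, _ = self.gru(h)                              # (batch, seq, hidden)
        w = torch.softmax(self.attn(out), dim=1)          # attention weights
        context = (w * out).sum(dim=1)                    # weighted summary
        return self.head(context).squeeze(-1)             # next-day flow

model = CNNGRUAttention(n_features=3)  # search, humanistic-meteo, sentiment indices
pred = model(torch.randn(8, 30, 3))    # 8 samples of 30-day windows
```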
21 pages, 5426 KB  
Article
Deep Learning-Based Recognition and Classification of Jin Cang Embroidery Stitches
by Ke-Ke Sun, Lu-Fei Yang, Zi-Ning Lan and Lu Gao
Mathematics 2026, 14(8), 1259; https://doi.org/10.3390/math14081259 - 10 Apr 2026
Abstract
Jin Cang embroidery, characterized by elaborate metallic threadwork and intricate textural patterns, is an important form of intangible cultural heritage. The digital preservation of Jin Cang embroidery is hindered by the scarcity of specialized datasets and the lack of object detection models that balance high performance with computational efficiency for edge deployment. To address these challenges, a dedicated dataset comprising 3050 images across eight core stitch categories is introduced as the first dataset of its kind for Jin Cang embroidery. Building upon this foundation, Lite-YOLOv11s, a domain-specific lightweight detection framework, is proposed with MobileNetV4 as its backbone to improve the extraction of high-frequency texture cues associated with metallic threadwork. Experimental results show that Lite-YOLOv11s achieves an mAP@0.5 of 0.951, outperforming the YOLOv11s baseline (0.927) while reducing model parameters by 40% and FLOPs by 46%. EigenCAM visualizations further show that the model can localize discriminative stitch-level features even under complex backgrounds. This work provides an efficient and deployable solution for intelligent embroidery recognition and offers a useful reference for the digital preservation of other fine-grained cultural heritage crafts.
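
A hedged sketch of how a stock YOLO11s baseline could be fine-tuned with Ultralytics; the paper's Lite-YOLOv11s additionally swaps in a MobileNetV4 backbone through a custom model config, which is omitted here, and `jincang.yaml` is a hypothetical dataset file:

```python
# Fine-tuning a standard YOLO11s baseline on a custom stitch dataset with
# Ultralytics. "jincang.yaml" is a hypothetical dataset config listing the
# eight stitch classes; the sample image path is likewise a placeholder.
from ultralytics import YOLO

model = YOLO("yolo11s.pt")                       # pretrained baseline weights
model.train(data="jincang.yaml", epochs=100, imgsz=640)
metrics = model.val()                            # reports mAP@0.5 among others
results = model("embroidery_sample.jpg")         # inference on one image
```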

25 pages, 854 KB  
Systematic Review
Hybrid Machine Learning Architectures for Emergency Triage: A Systematic Review of Predictive Performance and the Complexity Gradient
by Junaid Ullah, R. Kanesaraj Ramasamay and Venushini Rajendran
BioMedInformatics 2026, 6(2), 21; https://doi.org/10.3390/biomedinformatics6020021 - 10 Apr 2026
Abstract
Background: Emergency triage systems using machine learning traditionally rely on structured tabular data (vital signs), creating a “contextual blind spot” that ignores diagnostic information embedded in unstructured clinical narratives. Hybrid AI models that fuse tabular and text data may improve predictive discrimination, but the magnitude and conditions under which fusion adds value remain unclear. Methods: Five databases (PubMed, Scopus, Web of Science, IEEE Xplore, ACM Digital Library) were searched from 1 January 2015 to 15 December 2025. Eligible studies employed Hybrid AI models integrating structured and unstructured emergency department data with quantitative baseline comparisons. Twenty-five studies (N ≈ 4.8 million encounters) met inclusion criteria. We extracted marginal performance gains (ΔAUC), calibration metrics, and demographic reporting. Synthesis followed SWiM principles with subgroup meta-regression testing our novel “Complexity Gradient” hypothesis. Results: Hybrid models demonstrated superior discrimination compared to tabular baselines, with effect magnitude dependent on clinical task complexity. Low-complexity tasks (tachycardia prediction) showed minimal gains (median ΔAUC + 0.036, IQR: 0.02–0.05), while high-complexity tasks (hypoxia, sepsis) demonstrated substantial improvement (median ΔAUC + 0.111, IQR: 0.09–0.13). Meta-regression confirmed complexity significantly moderated effect size (R2 = 0.42, p = 0.003). Only 12% (3/25) of studies reported calibration metrics (Brier scores: 0.089–0.142). Zero studies stratified performance by race/ethnicity; 88% (22/25) failed to report training data demographics. Discussion: The complexity gradient framework explains when multimodal fusion adds predictive value: tasks where diagnostic signal resides in narrative features (temporality, negation) rather than physiological measurements. However, systematic absence of calibration reporting and fairness auditing prevents clinical deployment. Seventy-two percent of studies had high risk of bias in the analysis domain due to retrospective designs without temporal validation. Conclusions: Hybrid triage models show promise for complex diagnostic tasks but require mandatory calibration reporting and demographic performance stratification before clinical implementation. We propose minimum reporting standards including Brier scores, race-stratified metrics, and temporal validation protocols.
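
A toy illustration of the ΔAUC comparison the review aggregates, assuming scikit-learn; the vitals, notes, and outcome below are synthetic stand-ins, not data from any included study:

```python
# Tabular-only baseline versus a hybrid model that fuses vitals with TF-IDF
# features from triage notes, compared by AUC on a held-out split.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

vitals = np.random.rand(500, 4)                      # stand-in structured data
notes = ["short of breath febrile"] * 250 + ["ankle pain after fall"] * 250
y = np.r_[np.ones(250), np.zeros(250)]               # synthetic outcome

X_text = TfidfVectorizer(max_features=200).fit_transform(notes)
X_hybrid = hstack([csr_matrix(vitals), X_text]).tocsr()

for name, X in [("tabular", csr_matrix(vitals)), ("hybrid", X_hybrid)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
    p = LogisticRegression(max_iter=1000).fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    print(name, round(roc_auc_score(yte, p), 3))     # delta of the two = ΔAUC
```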

25 pages, 5394 KB  
Article
Towards the Development of Multiscale Digital Twins for Fiber-Reinforced Composite Materials Using Machine Learning
by Brandon L. Hearley, Evan J. Pineda, Brett A. Bednarcyk, Joseph R. Baker and Laura G. Wilson
Appl. Sci. 2026, 16(8), 3666; https://doi.org/10.3390/app16083666 - 9 Apr 2026
Abstract
Material considerations are often neglected when developing digital twins, particularly at the relevant length scales that drive material and structural performance. For reinforced composite materials, the microscale has the largest impact on nonlinear material behavior and progressive damage, and thus accurately representing the disordered microstructure of a composite due to processing and manufacturing is critical to developing the material digital twin in the multiscale hierarchy. Automating microstructure characterization is typically done by either training convolutional neural network models using a pretrained encoder or using prompt-based segmentation tools. In this work, a toolset for developing segmentation models is presented, combining these two methods to enable rapid annotation, training, and deployment of microscopy segmentation models for automated material digital twin development without user knowledge of machine learning. Additionally, a Bayesian optimization framework is developed for generating statistically equivalent representative volume elements (SRVE) to a segmented microstructure using a random microstructure generator that implements soft body dynamics. Progressive failure analysis of random, statistically equivalent, and ordered microstructures is compared to the segmented microstructure subject to transverse loading to demonstrate the importance of accurately representing the driving material length scale of a composite digital twin. Ordered microstructures over-predicted crack initiation and ultimate strength and strain. Random and optimized RVE microstructures better agreed with the segmented simulation results, with no significant difference observed between the two methodologies. The improvement in predicted macroscale behavior for models that capture disordered microstructures due to manufacturing processes demonstrates the importance of capturing microstructure features in composites modeling and indicates that SRVEs that capture microstructural features of the physical material can be used in material digital twin development. Further, the toolsets provided in this work allow for rapid development of composite material digital twins without user expertise in machine learning. This has enabled the development of an integrated workflow to automatically characterize and idealize composite microstructures and generate representative geometric models for efficient micromechanics analysis.
(This article belongs to the Special Issue Applications of Data Science and Artificial Intelligence, 2nd Edition)
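
As a stand-in for the paper's Bayesian optimization over a soft-body-dynamics generator, this sketch tunes a toy random-placement generator so the RVE matches target fiber statistics, using SciPy's differential evolution instead; all targets and parameters are illustrative:

```python
# Tune two generator parameters (fiber count, positional jitter) so a toy
# perturbed-grid RVE matches a target fiber volume fraction and minimum
# fiber spacing. The paper uses Bayesian optimization and a soft-body
# generator; differential evolution is a plainly named substitute here.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.distance import pdist

TARGET_VF, TARGET_SPACING, R, SIZE = 0.55, 0.7, 0.3, 5.0

def generate(n, jitter, rng):
    g = int(np.sqrt(n))                            # fibers laid on a g x g grid
    xy = np.stack(np.meshgrid(np.linspace(R, SIZE - R, g),
                              np.linspace(R, SIZE - R, g)), -1).reshape(-1, 2)
    return xy + rng.normal(0, jitter, xy.shape)    # perturbed fiber centers

def objective(params):
    n, jitter = params
    rng = np.random.default_rng(0)                 # fixed seed: smooth objective
    xy = generate(n, jitter, rng)
    vf = len(xy) * np.pi * R**2 / SIZE**2          # fiber volume fraction
    spacing = pdist(xy).min()                      # closest fiber pair
    return (vf - TARGET_VF)**2 + (spacing - TARGET_SPACING)**2

best = differential_evolution(objective, bounds=[(10, 60), (0.0, 0.3)], seed=1)
print(best.x, best.fun)                            # matched generator settings
```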

28 pages, 860 KB  
Article
Toward a Universal Framework for Gender Equality Certification
by Silvia Angeloni
Sustainability 2026, 18(8), 3699; https://doi.org/10.3390/su18083699 - 9 Apr 2026
Abstract
This study presents a comparative analysis of five gender equality certification schemes alongside the ISO 53800 standard with the aim of distilling shared conceptual foundations and design principles that can inform progress toward Sustainable Development Goal (SDG) 5 on gender equality. The comparative analysis reveals marked heterogeneity in scope, design architecture, indicators, and transparency. Methodologically, the study draws on the relevant literature, documentary evidence, and semi-structured consultations with five experts in gender equality, diversity management, auditing, and ESG reporting. Building on the most effective and robust features across gender equality schemes, the study proposes a universal framework for gender equality certification. Under this framework, an ideal universal certification model should apply the same core requirements to both public and private organizations, while including simplified procedures tailored to small- and medium-sized enterprises (SMEs). Moreover, the model should rely on a limited set of key performance indicators (KPIs), focusing on the most material dimensions and prioritizing quantitative measures. It should also strengthen employee feedback mechanisms and enhance accountability in corporate governance. The framework should also pay attention to intersectional dimensions, extend responsibility across the value chain, and address the gender-related implications of artificial intelligence (AI). Importantly, an ideal universal gender equality certification should ensure a high level of transparency through the public disclosure of certified organizations, assessment criteria, KPIs, and levels or scores achieved. Furthermore, it should be supported by a free digital self-assessment tool and robust auditing arrangements, underpinned by a sufficiently large pool of accredited certification bodies and gender-balanced audit teams. Finally, it should undergo periodic review and align with Environmental, Social, and Governance (ESG) principles and other related SDGs.

29 pages, 542 KB  
Article
Beyond FinTech Adoption: How AI-Enabled Financial Process Digitalization Shapes Entrepreneurship
by Konstantinos S. Skandalis and Dimitra Skandali
FinTech 2026, 5(2), 31; https://doi.org/10.3390/fintech5020031 - 8 Apr 2026
Abstract
The digital transformation of entrepreneurial finance has progressed beyond basic FinTech adoption toward the deeper digitalization of financial processes and the integration of artificial intelligence (AI). Yet, firms, particularly non-financial SMEs, vary substantially in their ability to convert these technologies into superior entrepreneurial, market, and financial outcomes. This study develops and tests a capability-based model explaining how FinTech-enabled financial process digitalization (FPD) and AI use shape entrepreneurship by influencing entrepreneurial performance outcomes. In line with current developments in digital finance, AI use is conceptualized as an embedded and complementary feature of FinTech-enabled financial process digitalization rather than an independent technological category. Drawing on the resource-based view and behavioral finance, we propose digital financial capability (DFC) as a central mechanism through which FinTech-enabled digitalized finance creates value, while credit fear is conceptualized as a behavioral constraint that limits entrepreneurial outcomes. We further posit customer satisfaction as a market-facing outcome linking financial capabilities to firm performance. Using survey data from 318 non-financial SMEs operating in Greece and applying Partial Least Squares Structural Equation Modeling (PLS-SEM), the findings show that FPD and AI use significantly enhance DFC, which in turn increases customer satisfaction and entrepreneurial performance. In addition, financial process digitalization reduces credit fear, thereby mitigating its negative impact on entrepreneurial performance. By shifting the focus from technology adoption toward AI-supported capability development within digitally enabled financial processes and behavioral mechanisms, this study advances FinTech and entrepreneurship research and offers actionable insights for managers and policymakers seeking to leverage digital finance for sustainable entrepreneurial value creation.
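
A simplified stand-in for the mediation structure described above (FPD influencing performance through DFC), estimated with two OLS regressions and a product-of-coefficients indirect effect rather than the paper's PLS-SEM; the data are synthetic:

```python
# Mediation sketch: FPD -> DFC -> entrepreneurial performance. The paper
# models latent constructs with PLS-SEM; raw OLS on observed scores is a
# deliberately simplified substitute.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 318                                   # matches the paper's sample size
fpd = rng.normal(size=n)                  # financial process digitalization
dfc = 0.6 * fpd + rng.normal(size=n)      # digital financial capability
perf = 0.5 * dfc + 0.1 * fpd + rng.normal(size=n)

m1 = sm.OLS(dfc, sm.add_constant(fpd)).fit()             # a-path: FPD -> DFC
X2 = sm.add_constant(np.column_stack([dfc, fpd]))
m2 = sm.OLS(perf, X2).fit()                              # b-path plus direct effect
indirect = m1.params[1] * m2.params[1]                   # a * b
print(f"a={m1.params[1]:.2f} b={m2.params[1]:.2f} indirect={indirect:.2f}")
```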

41 pages, 84120 KB  
Article
DDS-over-TSN Framework for Time-Critical Applications in Industrial Metaverses
by Taemin Nam, Seongjin Yun and Won-Tae Kim
Appl. Sci. 2026, 16(8), 3641; https://doi.org/10.3390/app16083641 - 8 Apr 2026
Abstract
The industrial metaverse is a digital twin space that integrates the real world with virtual environments through bidirectional synchronization. It supports critical services, such as time-sensitive machine control and large-scale collaboration, which require Time-Sensitive Networking and scalable Data Distribution Services. DDS, developed by the Object Management Group, provides excellent scalability and diverse QoS policies but struggles to guarantee transmission delay and jitter for time-critical applications. TSN, based on IEEE 802.1 standards, addresses these challenges by ensuring time-criticality. However, current research lacks comprehensive integration mechanisms for DDS and TSN, particularly from the viewpoints of semantics and system framework. Additionally, there is no adaptive QoS mapping converting the abstract DDS QoS policies to the sophisticated TSN QoS parameters. This paper presents a novel DDS-over-TSN framework that incorporates three key functions to address these challenges. First, Cross-layer QoS Mapping automates correspondences between DDS and TSN parameters, deriving technical constraints from standard documentation through retrieval-augmented generation. Second, Semantic Priority Estimation extracts substantial priority levels by utilizing language model embedding vectors as high-dimensional feature extractors. Third, Adaptive Resource Allocation performs dynamic bandwidth distribution for each priority level through reinforcement learning. Simulation results reveal over 99% mapping accuracy and 97% consistency in priority extraction. The applied Deep Reinforcement Learning paradigm allocated 99% of required resources to high-priority classes and reduced resource wastage by 15% compared to conventional methods. This methodology meets industrial requirements by ensuring both deterministic real-time performance and efficient resource isolation.
(This article belongs to the Special Issue Digital Twin and IoT, 2nd Edition)
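
A minimal sketch of the Semantic Priority Estimation idea, assuming topic and class-prototype embeddings are already available; the random vectors below stand in for language-model embeddings:

```python
# Score a DDS topic description against priority-class prototype embeddings
# by cosine similarity; the highest-similarity class becomes the TSN priority.
import numpy as np

rng = np.random.default_rng(0)
prototypes = {                       # one embedding per TSN traffic class
    "control": rng.normal(size=384),
    "monitoring": rng.normal(size=384),
    "best_effort": rng.normal(size=384),
}
topic_vec = rng.normal(size=384)     # embedding of e.g. "robot arm halt command"

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {cls: cosine(topic_vec, v) for cls, v in prototypes.items()}
priority = max(scores, key=scores.get)   # highest-similarity class wins
print(scores, "->", priority)
```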

49 pages, 675 KB  
Review
Automated Assembly of Large-Scale Aerospace Components: A Structured Narrative Survey of Emerging Technologies
by Kuai Zhou, Wenmin Chu, Peng Zhao, Xiaoxu Ji and Lulu Huang
Sensors 2026, 26(8), 2294; https://doi.org/10.3390/s26082294 - 8 Apr 2026
Abstract
Large-scale aerospace components (e.g., wings, fuselage sections, wing boxes, and rocket segments) feature large dimensions, low stiffness, complex interfaces, and strict assembly tolerances. Traditional rigid tooling and manual alignment struggle to meet the demands of high precision, efficiency, and flexibility in modern aerospace manufacturing. This paper presents a structured literature review on the automated assembly of large-scale aerospace components, summarizing advances in three core domains: pose adjustment and positioning mechanisms, digital measurement technologies, and trajectory planning and control. Particular emphasis is placed on two cross-cutting themes: measurement uncertainty analysis and flexible assembly, which are critical for high-quality docking. The review classifies pose adjustment mechanisms into four categories (NC positioners, parallel kinematic machines, industrial robots, and novel mechanisms) and digital measurement into five branches (vision metrology, large-scale metrology, measurement field construction, uncertainty analysis, and auxiliary techniques). It also outlines five trajectory planning and control routes, covering traditional methods, multi-sensor fusion, digital twins, flexible assembly, and emerging intelligent approaches. The analysis reveals that current research suffers from fragmentation among mechanism design, metrology, and control, with insufficient integration of uncertainty propagation and flexible deformation modeling. Future systems will rely on heterogeneous equipment collaboration, uncertainty-aware closed-loop control, high-fidelity flexible modeling, and digital twin-driven decision-making. This review provides a unified framework and a technical reference for developing reliable, flexible, and scalable automated assembly systems for next-generation aerospace structures.
(This article belongs to the Section Sensors and Robotics)
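
One building block common to the surveyed pose-adjustment workflows, sketched under synthetic data: estimating the rigid transform between measured and nominal alignment targets via the Kabsch/SVD method. The survey itself covers many alternatives; this is only an illustrative primitive:

```python
# Recover the rotation R and translation t that map nominal target positions
# onto their measured positions on a misaligned component.
import numpy as np

rng = np.random.default_rng(3)
nominal = rng.uniform(-1, 1, (6, 3))             # design positions of targets

theta = np.deg2rad(2.0)                          # small true misalignment
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
measured = nominal @ R_true.T + np.array([0.05, -0.02, 0.01])

# Kabsch: center both sets, SVD of the cross-covariance, recover R and t
Pc, Qc = nominal - nominal.mean(0), measured - measured.mean(0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflection
R = Vt.T @ D @ U.T
t = measured.mean(0) - nominal.mean(0) @ R.T
print(R, t)                                      # pose correction to command
```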

24 pages, 4332 KB  
Article
Depth-Aware Adversarial Domain Adaptation for Cross-Domain Remote Sensing Segmentation
by Lulu Niu, Xiaoxuan Liu, Enze Zhu, Yidan Zhang, Hanru Shi, Xiaohe Li, Hong Wang, Jie Jia and Lei Wang
Remote Sens. 2026, 18(7), 1099; https://doi.org/10.3390/rs18071099 - 7 Apr 2026
Abstract
As a key task in remote sensing analysis, semantic segmentation of remote sensing images (RSI) underpins many practical applications. Despite its importance, obtaining dense pixel-wise annotations remains labor-intensive and time-consuming. Unsupervised domain adaptation (UDA) offers a promising solution by transferring knowledge from labeled source domains to unlabeled target domains, yet its effectiveness is often compromised by significant distribution shifts arising from variations in imaging conditions. To address this challenge, we propose a depth-aware adaptation network (DAAN), a novel two-branch network that explicitly leverages complementary depth information from a digital surface model (DSM) to enhance cross-domain remote sensing segmentation. Unlike conventional UDA methods that primarily focus on semantic features, DAAN incorporates depth data to build a more generalized feature space. The network introduces three key components: an adaptive feature aggregator (AFA) for progressive semantic-depth feature fusion, a gated prediction selection unit (GPSU) that selectively integrates predictions to mitigate the impact of noisy depth measurements, and a misalignment-focused residual refinement (MFRR) module that emphasizes poorly aligned target regions during training. Experiments on the ISPRS and GAMUS datasets demonstrate the effectiveness of the proposed method. In particular, DAAN achieves an mIoU of 50.53% and an F1 score of 65.75% for cross-domain segmentation from ISPRS to GAMUS, outperforming models without depth information by 9.17% and 8.99%, respectively. These results demonstrate the advantage of integrating auxiliary geometric information to improve model generalization on unlabeled remote sensing datasets, contributing to higher mapping accuracy, more reliable automated analysis, and enhanced decision-making support.
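
A minimal PyTorch sketch of the gated semantic/depth blending idea behind the GPSU; channel sizes are illustrative and this is not the paper's exact module:

```python
# A learned sigmoid gate blends the semantic branch with the DSM depth branch
# per pixel, so noisy depth measurements can be down-weighted.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # gate is computed from the concatenated branch features
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, sem, depth):            # both: (B, C, H, W)
        g = self.gate(torch.cat([sem, depth], dim=1))
        return g * sem + (1 - g) * depth      # noisy depth can be gated out

fuse = GatedFusion(64)
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```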

30 pages, 1979 KB  
Article
Design Consistency and Aesthetic Experience in Digital Health Communication: A Mixed-Method Study of Lifestyle Medicine Product Ecosystems
by Yuexing Wang and Xin Ma
Healthcare 2026, 14(7), 964; https://doi.org/10.3390/healthcare14070964 - 7 Apr 2026
Abstract
Background/Objectives: Digital health ecosystems increasingly integrate content, behavioral interventions, and commercial offerings across multiple platforms. While design consistency is established as critical for trust in commercial contexts, its associations with health behavior change and objective health outcomes remain underexplored. This study examined how cross-platform design consistency and aesthetic experience are associated with behavioral adoption through psychological pathways and investigated relationships between design-driven adoption and objective health outcomes. Methods: A convergent mixed-method design comprised five integrated studies: systematic content analysis of short-form videos (N = 200), expert evaluation and user testing (N = 33), a cross-sectional survey (N = 186), semi-structured interviews (N = 15), and a 3-month longitudinal health outcome analysis (N = 143). Structural equation modeling tested pathways from design features through psychological mediators and COM-B components (capability, opportunity, motivation) to behavioral adoption and health outcomes. Results: Design consistency was significantly associated with trust (β = 0.52), perceived value (β = 0.68), and reduced perceived risk (β = −0.41; all p < 0.001). Aesthetic experience predicted emotional resonance (β = 0.71, p < 0.001) and moderated design–trust associations. COM-B components mediated 75% of the intention-to-adoption pathway (total indirect effect = 0.51, p < 0.001). High-adoption users showed clinically meaningful improvements in weight (−2.8 kg, d = 0.89), HbA1c (−0.7%, d = 0.65), fasting glucose (−0.9 mmol/L, d = 0.72), and LDL-C (−0.4 mmol/L, d = 0.51) over three months. Conclusions: Within a single, influencer-centered Chinese digital health ecosystem, design consistency and aesthetic experience were significantly associated with trust, psychological readiness, and behavioral adoption. These findings are observational; randomized controlled trials and multi-site replication are required to establish causal mechanisms and assess generalizability.

14 pages, 2424 KB  
Article
Personalized Prediction of Postoperative Recurrence in Lung Squamous Cell Carcinoma: Integrating AI-Based Nuclear Morphometry and Clinical Data
by Tomokazu Omori, Akira Saito, Yoshihisa Shimada, Yujin Kudo, Jun Matsubayashi, Toshitaka Nagao, Masahiko Kuroda and Norihiko Ikeda
J. Pers. Med. 2026, 16(4), 205; https://doi.org/10.3390/jpm16040205 - 6 Apr 2026
Abstract
Background: This study employed artificial intelligence (AI) to analyze quantitative nuclear morphological features obtained from digital pathology images to predict postoperative recurrence in patients with lung squamous cell carcinoma (LSQCC). We aimed to develop a prediction model that contributes to the realization of ‘personalized postoperative management’ tailored to individual tumor biology by integrating AI-extracted morphological features with clinical information. Methods: A total of 185 of the 253 surgically resected LSQCC cases were included; 136 were randomly assigned to the training set and 49 to the test set. Nuclear features from manually selected regions of interest were extracted and used to build AI-based prediction models. Three recurrence models were developed: recurrence within 2 years, within 5 years, and a three-category model (≤2 years, 3–5 years, >5 years or no recurrence). Support vector machine (SVM) and random forest (RF) algorithms were applied to each, yielding six predictive models. An ensemble approach was used to calculate AI-based risk scores, and a “total risk score” was developed by integrating these with the pathologic stage. Results: All six AI models demonstrated stable predictive performance, with AUC values ranging from 0.76 to 0.91. Kaplan–Meier analysis showed that the total risk score provided the most precise risk stratification (p < 0.005), with clearer separation between risk groups than the AI-based risk score alone. Conclusions: The integration of AI-based nuclear morphology analysis and clinical data provides an objective and practical tool for personalized postoperative management in LSQCC. This approach enables tailored clinical decision-making by identifying patients at high risk for early recurrence and customizing postoperative treatment plans to meet the specific needs of each individual.
(This article belongs to the Section Personalized Therapy in Clinical Medicine)
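
A hedged scikit-learn sketch of the ensemble risk-scoring step; the feature values, the stage weighting, and its 0.2 coefficient are hypothetical, not the paper's fitted values:

```python
# Average SVM and random-forest recurrence probabilities over nuclear-feature
# vectors, then combine with pathologic stage into a "total risk score".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X = np.random.rand(136, 20)                 # stand-in nuclear morphometry features
y = np.random.randint(0, 2, 136)            # recurrence within 2 years (synthetic)
stage = np.random.randint(1, 4, 136)        # pathologic stage I-III

svm = SVC(probability=True).fit(X, y)       # probability=True enables scores
rf = RandomForestClassifier(n_estimators=200).fit(X, y)

ai_score = (svm.predict_proba(X)[:, 1] + rf.predict_proba(X)[:, 1]) / 2
total_risk = ai_score + 0.2 * stage         # illustrative stage weighting
print(total_risk[:5])
```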

34 pages, 56063 KB  
Article
Deep Learning-Based Intelligent Analysis of Rock Thin Sections: From Cross-Scale Lithology Classification to Grain Segmentation for Quantitative Fabric Characterization
by Wenhao Yang, Ang Li, Liyan Zhang and Xiaoyao Qin
Electronics 2026, 15(7), 1509; https://doi.org/10.3390/electronics15071509 - 3 Apr 2026
Abstract
Quantitative microstructure evaluation of sedimentary rock thin sections is essential for revealing reservoir flow mechanisms and assessing reservoir quality. However, traditional manual identification is inefficient and prone to subjectivity. Although current deep learning approaches have improved efficiency, most remain confined to single tasks and lack a pathway to translate image recognition into quantifiable geological parameters. Moreover, these methods struggle with cross-scale feature extraction and accurate grain boundary localization in complex textures. To overcome these limitations, this study proposes a three-stage automated analysis framework integrating intelligent lithology identification, sandstone grain segmentation, and quantitative analysis of fabric parameters. To address scale discrepancies in lithology discrimination, Rock-PLionNet integrates a Partial-to-Whole Context Fusion (PWC-Fusion) module and the Lion optimizer, which mitigates cross-scale feature inconsistencies and enables accurate screening of target sandstone samples. Subsequently, to correct boundary deviations caused by low contrast and grain adhesion, the PetroSAM-CRF strategy integrates polarization-aware enhancement with dense conditional random field (DenseCRF)-based probabilistic refinement to extract precise grain contours. Based on these outputs, the framework automatically calculates key fabric parameters, including grain size and roundness. Experiments on 3290 original multi-source thin-section images show that Rock-PLionNet achieves a classification accuracy of 96.57% on the test set. Furthermore, PetroSAM-CRF reduces segmentation bias observed in general-purpose models under complex texture conditions, enabling accurate parameter estimation with a roundness error of 2.83%. Overall, this study presents an intelligent workflow linking microscopic image recognition with quantitative analysis of geological fabric parameters, providing a practical pathway for digital petrographic evaluation in hydrocarbon exploration.
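
A small sketch of the final quantification step, assuming scikit-image; roundness is computed here as the common 4πA/P² circularity, which may differ from the paper's exact definition:

```python
# Once grains are segmented, grain size and roundness follow directly from
# region properties of the labeled mask.
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((64, 64), dtype=np.uint8)    # stand-in segmentation mask
mask[10:30, 10:30] = 1                       # one square "grain"
mask[40:60, 35:60] = 1                       # one rectangular "grain"

for region in regionprops(label(mask)):
    diameter = region.equivalent_diameter            # grain size proxy (pixels)
    roundness = 4 * np.pi * region.area / region.perimeter ** 2
    print(f"size={diameter:.1f}px roundness={roundness:.2f}")
```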

23 pages, 4047 KB  
Article
UAV-Based Estimation of Tea Leaf Area Index in Mountainous Terrain: Integrating Topographic Correction and Interpretable Machine Learning
by Na Lin, Jian Zhao, Huxiang Shao, Miaomiao Wang and Hong Chen
Sensors 2026, 26(7), 2218; https://doi.org/10.3390/s26072218 - 3 Apr 2026
Abstract
Leaf Area Index (LAI) is a fundamental parameter for characterizing the growth of tea (Camellia sinensis L.). However, in rugged mountainous regions, the combined effects of topographic relief and canopy structural heterogeneity severely constrain the accuracy of UAV-based multispectral LAI retrieval. This study develops an integrated framework combining topographic correction with interpretable machine learning to improve LAI estimation. We utilized a UAV multispectral dataset collected during the peak growing season from a typical tea-growing region in Fujian Province, China (altitude range: 58–186 m), comprising a total of 90 samples. Three topographic correction methods, including Sun–Canopy–Sensor (SCS), SCS with C correction (SCS+C), and Minnaert+SCS, were evaluated in combination with Linear Regression (LR), Decision Tree (DT), Random Forest (RF), and Extreme Gradient Boosting (XGBoost) models. Results indicated that the SCS+C algorithm outperformed other methods by effectively accounting for direct and diffuse radiation components, thereby reducing topographic dependence while maintaining radiometric consistency across heterogeneous surfaces. The XGBoost model combined with SCS+C correction achieved the highest performance (R2 = 0.8930, RMSE = 0.6676, nRMSE = 7.93%, MAE = 0.4936, Bias = −0.0836). SHapley Additive exPlanations (SHAP) analysis revealed a structure-dominated retrieval mechanism, in which red-band textural features (Correlation_R) exhibited higher importance than conventional vegetation indices. Compared with previous studies that primarily focus on either topographic correction or model development, this study provides quantitative insights into the underlying retrieval mechanisms. This framework improves the precision of tea LAI retrieval in complex terrains and provides a robust methodological basis for digital management in mountainous agriculture.
(This article belongs to the Special Issue AI UAV-Based Systems for Agricultural Monitoring)
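
A numpy sketch of the SCS+C correction the study found best, on synthetic angles and reflectance; C is the empirical b/m from regressing reflectance on the local illumination cos(i), and the formula is the standard one from the topographic-correction literature rather than the paper's text:

```python
# Per-band reflectance is rescaled by (cos(sza)*cos(slope) + C) / (cos(i) + C),
# where cos(i) is the local illumination on a tilted surface.
import numpy as np

rng = np.random.default_rng(1)
sza, slope = np.deg2rad(35.0), np.deg2rad(rng.uniform(0, 25, 1000))
aspect, saa = np.deg2rad(rng.uniform(0, 360, 1000)), np.deg2rad(150.0)

# local solar incidence angle relative to the surface normal
cos_i = (np.cos(sza) * np.cos(slope)
         + np.sin(sza) * np.sin(slope) * np.cos(saa - aspect))
rho = 0.3 * cos_i + 0.05 + rng.normal(0, 0.01, 1000)   # synthetic band

m, b = np.polyfit(cos_i, rho, 1)                       # empirical fit
C = b / m                                              # the "+C" term
rho_corr = rho * (np.cos(sza) * np.cos(slope) + C) / (cos_i + C)
```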

18 pages, 1160 KB  
Article
Predicting Physical Inactivity in Chilean Adults: A Comparison of Survey-Weighted Logistic Regression and Explainable Machine Learning Models
by Josivaldo de Souza-Lima, Rodrigo Yáñez-Sepúlveda, Frano Giakoni-Ramírez, Catalina Muñoz-Strale, Javiera Alarcon-Aguilar, Maribel Parra-Saldias, Daniel Duclos-Bastias, Andrés Godoy-Cumillaf, Eugenio Merellano-Navarro, José Bruneau-Chávez and Claudio Farias-Valenzuela
Data 2026, 11(4), 73; https://doi.org/10.3390/data11040073 - 3 Apr 2026
Abstract
Physical inactivity remains a major modifiable risk factor for non-communicable diseases and continues to exhibit marked socioeconomic and gender disparities in Latin America. Identifying robust and interpretable predictors of inactivity in nationally representative datasets is essential for informing public health strategies. This study compared a survey-weighted logistic regression model and an explainable machine learning approach (XGBoost) to predict physical inactivity among Chilean adults using data from the 2024 National Physical Activity and Sports Survey (ENAFyD; n = 5248). Models were evaluated on a stratified held-out test set (n = 1050) using weighted and unweighted area under the ROC curve (AUC), Brier scores, and calibration curves. Survey-weighted logistic regression achieved a weighted AUC of 0.801, while XGBoost achieved 0.797, demonstrating comparable discrimination. XGBoost showed marginally lower Brier scores, indicating slightly improved probabilistic calibration. Low socioeconomic status, female sex, lower monthly physical activity expenditure, limited facility access, and lower engagement with digital resources were consistently associated with higher inactivity risk. SHAP-style contribution analysis provided additional insight into feature-level influence within the machine learning framework. Overall, both approaches demonstrated similar predictive capacity, supporting the complementary use of classical regression and explainable machine learning for population-level physical inactivity research.
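
A minimal statsmodels sketch of the survey-weighted logistic arm, with synthetic stand-ins for the ENAFyD predictors and expansion weights; weighting via freq_weights reproduces weighted point estimates, though full survey-design standard errors would need more machinery:

```python
# Binomial GLM with sampling weights as an approximation of survey-weighted
# logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
X = sm.add_constant(np.column_stack([
    rng.integers(0, 2, n),          # female sex (synthetic)
    rng.normal(size=n),             # socioeconomic index (synthetic)
    rng.integers(0, 2, n),          # facility access (synthetic)
]))
y = rng.integers(0, 2, n)           # physically inactive (yes/no)
w = rng.uniform(0.5, 2.0, n)        # stand-in survey expansion weights

model = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=w).fit()
print(model.summary())
```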

18 pages, 1365 KB  
Article
DA-CycleGAN: Degradation-Adaptive Unpaired Super-Resolution for Historical Image Restoration
by Lujun Zhai, Yonghui Wang, Yu Zhou and Suxia Cui
J. Imaging 2026, 12(4), 155; https://doi.org/10.3390/jimaging12040155 - 3 Apr 2026
Abstract
Historical images, as a dominant medium for documenting the world and its inhabitants, can help us better understand real history. Owing to the limited camera technology of the era, historical images captured in the early to mid-20th century tend to be blurry, unclear, noisy, and obscure. The goal of this paper is to super-resolve images for historical image restoration. Compared with the degradations in modern digital imagery, those in historical images have unique characteristics that are typically far more complex and less well understood. The discrepancy between historical images and modern high-definition digital images leads to a significant performance drop for existing super-resolution (SR) models trained on modern digital imagery. To tackle this problem, we propose a new method, DA-CycleGAN. Specifically, DA-CycleGAN is built on top of CycleGAN to achieve unsupervised learning. We introduce a degradation-adaptive (DA) module with strong, flexible adaptation to learn various unknown degradations from samples. Moreover, we collect a large dataset containing 10,000 low-resolution images from real historical films; the dataset features various natural degradations. Our experimental results demonstrate the superior performance of DA-CycleGAN and the effectiveness of our image dataset for achieving accurate super-resolution enhancement of historical images.
(This article belongs to the Section Computer Vision and Pattern Recognition)
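
A toy PyTorch sketch of the cycle-consistency constraint DA-CycleGAN builds on, with single-layer stand-in generators; the paper's DA module, which conditions on estimated degradations, is omitted:

```python
# An image mapped historical -> modern -> historical should return to itself
# under an L1 penalty; G_ab and G_ba stand in for the two generators.
import torch
import torch.nn as nn

G_ab = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # toy generator A -> B
G_ba = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # toy generator B -> A
l1 = nn.L1Loss()

real_a = torch.rand(4, 3, 64, 64)                  # "historical" batch
real_b = torch.rand(4, 3, 64, 64)                  # "modern HD" batch

cycle_loss = l1(G_ba(G_ab(real_a)), real_a) + l1(G_ab(G_ba(real_b)), real_b)
cycle_loss.backward()                              # gradients for both generators
```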