Search Results (1,943)

Search Parameters:
Keywords = learning to rank

20 pages, 437 KiB  
Article
A Copula-Driven CNN-LSTM Framework for Estimating Heterogeneous Treatment Effects in Multivariate Outcomes
by Jong-Min Kim
Mathematics 2025, 13(15), 2384; https://doi.org/10.3390/math13152384 - 24 Jul 2025
Abstract
Estimating heterogeneous treatment effects (HTEs) across multiple correlated outcomes poses significant challenges due to complex dependency structures and diverse data types. In this study, we propose a novel deep learning framework integrating empirical copula transformations with a CNN-LSTM (Convolutional Neural Networks and Long Short-Term Memory networks) architecture to capture nonlinear dependencies and temporal dynamics in multivariate treatment effect estimation. The empirical copula transformation, a rank-based nonparametric approach, preprocesses input covariates to better represent the underlying joint distributions before modeling. We compare this method with a baseline CNN-LSTM model lacking copula preprocessing and a nonparametric tree-based approach, the Causal Forest, grounded in generalized random forests for HTE estimation. Our framework accommodates continuous, count, and censored survival outcomes simultaneously through a multitask learning setup with customized loss functions, including Cox partial likelihood for survival data. We evaluate model performance under varying treatment perturbation rates via extensive simulation studies, demonstrating that the Empirical Copula CNN-LSTM achieves superior accuracy and robustness in average treatment effect (ATE) and conditional average treatment effect (CATE) estimation. These results highlight the potential of copula-based deep learning models for causal inference in complex multivariate settings, offering valuable insights for personalized treatment strategies. Full article
(This article belongs to the Special Issue Current Developments in Theoretical and Applied Statistics)
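The rank-based empirical copula transformation mentioned in the abstract above has a simple form; the sketch below, in Python/NumPy, maps each covariate column to pseudo-observations in (0, 1) via ranks divided by n + 1. The toy data and the column-wise treatment are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula_transform(X: np.ndarray) -> np.ndarray:
    """Map each covariate column to pseudo-observations in (0, 1).

    Ranks are divided by n + 1 so transformed values stay strictly inside
    the unit interval, a common convention for empirical copulas.
    """
    n = X.shape[0]
    # rankdata averages ties; apply it column by column
    return np.column_stack([rankdata(X[:, j]) / (n + 1) for j in range(X.shape[1])])

# toy usage: 100 samples of 3 correlated covariates
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0, 0.0],
                            [[1.0, 0.8, 0.2], [0.8, 1.0, 0.3], [0.2, 0.3, 1.0]],
                            size=100)
U = empirical_copula_transform(X)  # rank-preserving inputs for the downstream model
```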

36 pages, 5625 KiB  
Article
Behavior Prediction of Connections in Eco-Designed Thin-Walled Steel–Ply–Bamboo Structures Based on Machine Learning for Mechanical Properties
by Wanwan Xia, Yujie Gao, Zhenkai Zhang, Yuhan Jie, Jingwen Zhang, Yueying Cao, Qiuyue Wu, Tao Li, Wentao Ji and Yaoyuan Gao
Sustainability 2025, 17(15), 6753; https://doi.org/10.3390/su17156753 - 24 Jul 2025
Abstract
This study employed multiple machine learning and hyperparameter optimization techniques to analyze and predict the mechanical properties of self-drilling screw connections in thin-walled steel–ply–bamboo shear walls, leveraging the renewable and eco-friendly nature of bamboo to enhance structural sustainability and reduce environmental impact. The dataset, which included 249 sets of measurement data, was derived from 51 disparate connection specimens fabricated with engineered bamboo—a renewable and low-carbon construction material. Utilizing factor analysis, a ranking table recording the comprehensive score of each connection specimen was established to select the optimal connection type. Eight machine learning models were employed to analyze and predict the mechanical performance of these connection specimens. Through comparison, the most efficient model was selected, and five hyperparameter optimization algorithms were implemented to further enhance its prediction accuracy. The analysis results revealed that the Random Forest (RF) model demonstrated superior classification performance, prediction accuracy, and generalization ability, achieving approximately 61% accuracy on the test set (the highest among all models). In hyperparameter optimization, the RF model processed through Bayesian Optimization (BO) further improved its predictive accuracy to about 67%, outperforming both its non-optimized version and models optimized using the other algorithms. Considering the mechanical performance of connections within TWS composite structures, applying the BO algorithm to the RF model significantly improved the predictive accuracy. This approach enables the identification of the most suitable specimen type based on newly provided mechanical performance parameter sets, providing a data-driven pathway for sustainable bamboo–steel composite structure design. Full article
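As an illustration of the hyperparameter-optimization step, the sketch below tunes a Random Forest with Optuna's TPE sampler standing in for the Bayesian Optimization used in the paper; the synthetic dataset, search ranges, and accuracy scoring are placeholders rather than the authors' setup.

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# placeholder data standing in for the 249 connection measurements
X, y = make_classification(n_samples=249, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

def objective(trial: optuna.Trial) -> float:
    clf = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 100, 600),
        max_depth=trial.suggest_int("max_depth", 3, 20),
        min_samples_leaf=trial.suggest_int("min_samples_leaf", 1, 10),
        random_state=0,
    )
    # 5-fold cross-validated accuracy is the quantity being maximized
    return cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```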

14 pages, 492 KiB  
Article
Learnable Priors Support Reconstruction in Diffuse Optical Tomography
by Alessandra Serianni, Alessandro Benfenati and Paola Causin
Photonics 2025, 12(8), 746; https://doi.org/10.3390/photonics12080746 - 24 Jul 2025
Abstract
Diffuse Optical Tomography (DOT) is a non-invasive medical imaging technique that makes use of Near-Infrared (NIR) light to recover the spatial distribution of optical coefficients in biological tissues for diagnostic purposes. Due to the intense scattering of light within tissues, the reconstruction process inherent to DOT is severely ill-posed. In this paper, we propose to tackle the ill-conditioning by learning a prior over the solution space using an autoencoder-type neural network. Specifically, the decoder part of the autoencoder is used as a generative model. It maps a latent code to estimated physical parameters given in input to the forward model. The latent code is itself the result of an optimization loop which minimizes the discrepancy of the solution computed by the forward model with available observations. The structure and interpretability of the latent space are enhanced by minimizing the rank of its covariance matrix, thereby promoting more effective utilization of its information-carrying capacity. The deep learning-based prior significantly enhances reconstruction capabilities in this challenging domain, demonstrating the potential of integrating advanced neural network techniques into DOT. Full article
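A minimal PyTorch sketch of the reconstruction loop described above: a frozen decoder acts as the generative prior, a toy linear operator stands in for the DOT forward model, and the latent code is optimized against observations; the nuclear-norm lines at the end show one way to penalize the rank of the latent covariance during training. Every network, operator, and weight here is an assumption for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

latent_dim, param_dim, meas_dim = 16, 64, 32

# placeholder generative prior (in the paper: the trained autoencoder's decoder)
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, param_dim))
decoder.requires_grad_(False)

A = torch.randn(meas_dim, param_dim)   # toy linear stand-in for the forward model
y_obs = torch.randn(meas_dim)          # stand-in for measured boundary data

z = torch.zeros(latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):                   # latent-code optimization loop
    opt.zero_grad()
    mu = decoder(z)                    # estimated optical coefficients
    loss = (A @ mu - y_obs).pow(2).mean()
    loss.backward()
    opt.step()

# at autoencoder-training time, a rank penalty on a batch of latent codes Z
# (batch x latent_dim) can use the nuclear norm of their covariance as a
# convex surrogate for its rank:
Z = torch.randn(32, latent_dim)
cov = (Z - Z.mean(0)).T @ (Z - Z.mean(0)) / (Z.shape[0] - 1)
rank_penalty = torch.linalg.matrix_norm(cov, ord="nuc")
```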

19 pages, 6555 KiB  
Article
Exploiting Structured Global and Neighbor Orders for Enhanced Ordinal Regression
by Imam Mustafa Kamal, Solichin Mochammad, Latifah Nurahmi, Azis Natawijaya and Muhammad Kalili
Information 2025, 16(8), 624; https://doi.org/10.3390/info16080624 - 22 Jul 2025
Viewed by 22
Abstract
Ordinal regression combines classification and regression techniques, constrained by the intrinsic order among categories. It has wide-ranging applications in real-world scenarios, such as product quality grading, medical diagnoses, and facial age recognition, where understanding ranked relationships is crucial. Existing models, which often employ a series of binary classifiers with ordinal consistency loss, effectively enforce global order consistency but frequently encounter misclassification errors between adjacent categories. Achieving both global and local (neighbor-level) ordinal consistency, however, remains a significant challenge. In this study, we propose a hybrid ordinal regression model that addresses global ordinal structure while enhancing local consistency between neighboring categories. Our approach leverages ordinal metric learning to generate embeddings that capture global ordinal relationships and extends consistent rank logits with a neighbor order penalty in the loss function to reduce adjacent category misclassifications. Experimental results on multiple benchmark ordinal datasets demonstrate that our model significantly minimizes neighboring misclassification errors and global order inconsistencies, outperforming existing ordinal regression models. Full article
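A hedged PyTorch sketch of a consistent-rank-logits style loss with an added ordering term: K - 1 cumulative binary logits are trained with BCE, and a penalty discourages cumulative probabilities that increase with the threshold index. The specific penalty is illustrative only; the paper's neighbor-order term is not reproduced here.

```python
import torch
import torch.nn.functional as F

def ordinal_targets(y: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Extended binary encoding: target_k = 1 if y > k, for k = 0..K-2."""
    thresholds = torch.arange(num_classes - 1, device=y.device)
    return (y.unsqueeze(1) > thresholds).float()

def rank_consistent_loss(logits: torch.Tensor, y: torch.Tensor,
                         num_classes: int, order_weight: float = 0.1) -> torch.Tensor:
    # logits: (batch, K-1) cumulative binary logits; y: (batch,) integer labels
    targets = ordinal_targets(y, num_classes)
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)          # P(y > k)
    # illustrative ordering term: P(y > k) should not increase with k, so
    # penalize positive differences between adjacent cumulative probabilities
    violation = F.relu(probs[:, 1:] - probs[:, :-1]).mean()
    return bce + order_weight * violation

logits = torch.randn(8, 4)                 # batch of 8, K = 5 ordinal classes
y = torch.randint(0, 5, (8,))
loss = rank_consistent_loss(logits, y, num_classes=5)
```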

21 pages, 2852 KiB  
Article
Innovative Hands-On Approach for Magnetic Resonance Imaging Education of an Undergraduate Medical Radiation Science Course in Australia: A Feasibility Study
by Curtise K. C. Ng, Sjoerd Vos, Hamed Moradi, Peter Fearns, Zhonghua Sun, Rebecca Dickson and Paul M. Parizel
Educ. Sci. 2025, 15(7), 930; https://doi.org/10.3390/educsci15070930 - 21 Jul 2025
Viewed by 109
Abstract
As yet, no study has investigated the use of a research magnetic resonance imaging (MRI) scanner to support undergraduate medical radiation science (MRS) students in developing their MRI knowledge and practical skills (competences). The purpose of this study was to test an innovative program for a total of 10 second- and third-year students of an MRS course to enhance their MRI competences. The study involved an experimental, two-week MRI learning program which focused on practical MRI scanning of phantoms and healthy volunteers. Pre- and post-program questionnaires and tests were used to evaluate the competence development of these participants as well as the program's educational quality. Descriptive statistics, along with Wilcoxon signed-rank and paired t-tests, were used for statistical analysis. The program improved the participants' self-perceived and actual MRI competences significantly (from an average of 2.80 to 3.20 out of 5.00, p = 0.046; and from an average of 34.87% to 62.72%, Cohen's d effect size: 2.53, p < 0.001, respectively). Furthermore, they rated all aspects of the program's educational quality highly (mean: 3.90–4.80 out of 5.00) and indicated that the program was extremely valuable, very effective, and practical. Nonetheless, further evaluation should be conducted in a broader setting with a larger sample size to validate the findings of this feasibility study, given the study's small sample size and participant selection bias. Full article
(This article belongs to the Special Issue Technology-Enhanced Nursing and Health Education)
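The pre/post analysis named in the abstract can be reproduced in a few lines with SciPy; the score arrays below are made up for illustration and are not the study's questionnaire or test data.

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

# hypothetical pre/post test scores for ten participants (not the study's data)
pre = np.array([30.0, 28.5, 40.0, 35.5, 33.0, 31.0, 36.5, 38.0, 34.0, 32.0])
post = np.array([55.0, 61.0, 70.5, 58.0, 66.0, 60.5, 72.0, 65.0, 59.5, 63.0])

w_stat, w_p = wilcoxon(pre, post)          # paired, non-parametric
t_stat, t_p = ttest_rel(pre, post)         # paired t-test

diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)  # effect size for paired samples
print(f"Wilcoxon p={w_p:.4f}, t-test p={t_p:.4f}, Cohen's d={cohens_d:.2f}")
```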

28 pages, 2518 KiB  
Article
Enhancing Keyword Spotting via NLP-Based Re-Ranking: Leveraging Semantic Relevance Feedback in the Handwritten Domain
by Stergios Papazis, Angelos P. Giotis and Christophoros Nikou
Electronics 2025, 14(14), 2900; https://doi.org/10.3390/electronics14142900 - 20 Jul 2025
Viewed by 137
Abstract
Handwritten Keyword Spotting (KWS) remains a challenging task, particularly in segmentation-free scenarios where word images must be retrieved and ranked based on their similarity to a query without relying on prior page-level segmentation. Traditional KWS methods primarily focus on visual similarity, often overlooking the underlying semantic relationships between words. In this work, we propose a novel NLP-driven re-ranking approach that refines the initial ranked lists produced by state-of-the-art KWS models. By leveraging semantic embeddings from pre-trained BERT-like Large Language Models (LLMs, e.g., RoBERTa, MPNet, and MiniLM), we introduce a relevance feedback mechanism that improves both verbatim and semantic keyword spotting. Our framework operates in two stages: (1) projecting retrieved word image transcriptions into a semantic space via LLMs and (2) re-ranking the retrieval list using a weighted combination of semantic and exact relevance scores based on pairwise similarities with the query. We evaluate our approach on the widely used George Washington (GW) and IAM collections using two cutting-edge segmentation-free KWS models, which are further integrated into our proposed pipeline. Our results show consistent gains in Mean Average Precision (mAP), with improvements of up to 2.3% (from 94.3% to 96.6%) on GW and 3% (from 79.15% to 82.12%) on IAM. Even when mAP gains are smaller, qualitative improvements emerge: semantically relevant but inexact matches are retrieved more frequently without compromising exact match recall. We further examine the effect of fine-tuning transformer-based OCR (TrOCR) models on historical GW data to align textual and visual features more effectively. Overall, our findings suggest that semantic feedback can enhance retrieval effectiveness in KWS pipelines, paving the way for lightweight hybrid vision-language approaches in handwritten document analysis. Full article
(This article belongs to the Special Issue AI Synergy: Vision, Language, and Modality)
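The second stage of the pipeline, re-ranking by a weighted combination of exact and semantic relevance, reduces to a simple score fusion; the embeddings and initial scores below are random placeholders rather than outputs of a particular KWS model or LLM.

```python
import numpy as np

def rerank(initial_scores: np.ndarray, query_emb: np.ndarray,
           candidate_embs: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Indices of candidates sorted by a weighted mix of the original
    (exact/visual) score and cosine similarity in the semantic space."""
    sem = candidate_embs @ query_emb
    sem /= np.linalg.norm(candidate_embs, axis=1) * np.linalg.norm(query_emb) + 1e-12
    fused = alpha * initial_scores + (1.0 - alpha) * sem
    return np.argsort(-fused)

rng = np.random.default_rng(0)
initial = rng.random(100)               # similarity scores from the base KWS model
q = rng.normal(size=384)                # query embedding (MiniLM-sized, as an example)
cands = rng.normal(size=(100, 384))     # embeddings of retrieved word transcriptions
new_order = rerank(initial, q, cands)
```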

16 pages, 5468 KiB  
Article
Alpine Meadow Fractional Vegetation Cover Estimation Using UAV-Aided Sentinel-2 Imagery
by Kai Du, Yi Shao, Naixin Yao, Hongyan Yu, Shaozhong Ma, Xufeng Mao, Litao Wang and Jianjun Wang
Sensors 2025, 25(14), 4506; https://doi.org/10.3390/s25144506 - 20 Jul 2025
Viewed by 181
Abstract
Fractional Vegetation Cover (FVC) is a crucial indicator describing vegetation conditions and provides essential data for ecosystem health assessments. However, due to the low and sparse vegetation in alpine meadows, it is challenging to obtain pure vegetation pixels from Sentinel-2 imagery, resulting in errors in the FVC estimation using traditional pixel dichotomy models. This study integrated Sentinel-2 imagery with unmanned aerial vehicle (UAV) data and utilized the pixel dichotomy model together with four machine learning algorithms, namely Random Forest (RF), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Deep Neural Network (DNN), to estimate FVC in an alpine meadow region. First, FVC was preliminarily estimated using the pixel dichotomy model combined with nine vegetation indices applied to Sentinel-2 imagery. The performance of these estimates was evaluated against reference FVC values derived from centimeter-level UAV data. Subsequently, four machine learning models were employed for an accurate FVC inversion, using the estimated FVC values and UAV-derived reference FVC as inputs, following feature importance ranking and model parameter optimization. The results showed that: (1) Machine learning algorithms based on Sentinel-2 and UAV imagery effectively improved the accuracy of FVC estimation in alpine meadows. The DNN-based FVC estimation performed best, with a coefficient of determination of 0.82 and a root mean square error (RMSE) of 0.09. (2) In vegetation coverage estimation based on the pixel dichotomy model, different vegetation indices demonstrated varying performances across areas with different FVC levels. The GNDVI-based FVC achieved a higher accuracy (RMSE = 0.08) in high-vegetation coverage areas (FVC > 0.7), while the NIRv-based FVC and the SR-based FVC performed better (RMSE = 0.10) in low-vegetation coverage areas (FVC < 0.4). The method provided in this study can significantly enhance FVC estimation accuracy with limited fieldwork, contributing to alpine meadow monitoring on the Qinghai–Tibet Plateau. Full article
(This article belongs to the Section Remote Sensors)
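The first-stage pixel dichotomy model has a closed form, FVC = (VI - VI_soil) / (VI_veg - VI_soil); the sketch below uses NDVI and fixes the soil/vegetation endmembers at image percentiles, which is one common convention and not necessarily the paper's calibration.

```python
import numpy as np

def fvc_pixel_dichotomy(vi: np.ndarray, soil_pct: float = 5, veg_pct: float = 95) -> np.ndarray:
    """FVC = (VI - VI_soil) / (VI_veg - VI_soil), clipped to [0, 1]."""
    vi_soil = np.nanpercentile(vi, soil_pct)   # bare-soil endmember
    vi_veg = np.nanpercentile(vi, veg_pct)     # full-vegetation endmember
    return np.clip((vi - vi_soil) / (vi_veg - vi_soil + 1e-12), 0.0, 1.0)

# toy NDVI raster standing in for a Sentinel-2 tile
ndvi = np.random.default_rng(0).uniform(-0.1, 0.9, size=(512, 512))
fvc = fvc_pixel_dichotomy(ndvi)
```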

20 pages, 320 KiB  
Article
Integrating Digital Tools with Origami Activities to Enhance Geometric Concepts and Creative Thinking in Kindergarten Education
by Kawthar M. Habeeb
Educ. Sci. 2025, 15(7), 924; https://doi.org/10.3390/educsci15070924 - 20 Jul 2025
Viewed by 215
Abstract
This study investigated the effectiveness of integrating digital tools with origami activities to enhance geometric understanding and creative thinking among kindergarten children in Kuwait. A quasi-experimental pre-test–post-test design involved 60 children (aged from 5 years and 9 months to 6 years), who were randomly assigned to experimental (n = 30) and control (n = 30) groups. The experimental group received a four-week intervention using the Paperama app and paper folding, while the control group followed the standard curriculum. Wilcoxon signed-rank tests showed significant gains in the experimental group’s geometric understanding (Z = 3.82; p < 0.001) and creative thinking (Z = 4.15; p < 0.001), with large effect sizes (r = 0.78). Descriptive analysis further revealed that the experimental group outperformed the control group in post-test scores for geometric understanding (M = 84.06 vs. M = 74.39), reinforcing the intervention’s practical impact. The control group showed no significant improvement (p = 0.16). These findings highlight the value of blended origami instruction in developing spatial reasoning and creativity. This study contributes to early STEAM education and supports the integration of digital tools into kindergarten learning and teacher training. Full article
27 pages, 3704 KiB  
Article
Explainable Machine Learning and Predictive Statistics for Sustainable Photovoltaic Power Prediction on Areal Meteorological Variables
by Sajjad Nematzadeh and Vedat Esen
Appl. Sci. 2025, 15(14), 8005; https://doi.org/10.3390/app15148005 - 18 Jul 2025
Viewed by 183
Abstract
Precisely predicting photovoltaic (PV) output is crucial for reliable grid integration; so far, most models rely on site-specific sensor data or treat large meteorological datasets as black boxes. This study proposes an explainable machine-learning framework that simultaneously ranks the most informative weather parameters and reveals their physical relevance to PV generation. Starting from 27 local and plant-level variables recorded at 15 min resolution for a 1 MW array in Çanakkale region, Türkiye (1 August 2022–3 August 2024), we apply a three-stage feature-selection pipeline: (i) variance filtering, (ii) hierarchical correlation clustering with Ward linkage, and (iii) a meta-heuristic optimizer that maximizes a neural-network R2 while penalizing poor or redundant inputs. The resulting subset, dominated by apparent temperature and diffuse, direct, global-tilted, and terrestrial irradiance, reduces dimensionality without significantly degrading accuracy. Feature importance is then quantified through two complementary aspects: (a) tree-based permutation scores extracted from a set of ensemble models and (b) information gain computed over random feature combinations. Both views converge on shortwave, direct, and global-tilted irradiance as the primary drivers of active power. Using only the selected features, the best model attains an average R2 ≅ 0.91 on unseen data. By utilizing transparent feature-reduction techniques and explainable importance metrics, the proposed approach delivers compact, more generalized, and reliable PV forecasts that generalize to sites lacking embedded sensor networks, and it provides actionable insights for plant siting, sensor prioritization, and grid-operation strategies. Full article
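The first two stages of the feature-selection pipeline can be sketched directly with scikit-learn and SciPy; the variance threshold, number of clusters, per-cluster representative rule, and data are placeholders, and the meta-heuristic third stage is omitted.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 27))                      # stand-in for the 27 variables

# (i) variance filtering
keep = VarianceThreshold(threshold=1e-3).fit(X).get_support()
Xv = X[:, keep]

# (ii) hierarchical clustering of feature correlations with Ward linkage
dist = 1.0 - np.abs(np.corrcoef(Xv, rowvar=False))   # dissimilarity between features
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="ward")
labels = fcluster(Z, t=8, criterion="maxclust")      # e.g. 8 feature clusters

# keep the highest-variance feature of each cluster as a simple representative
selected = [np.where(labels == c)[0][np.argmax(Xv[:, labels == c].var(axis=0))]
            for c in np.unique(labels)]
```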

17 pages, 3612 KiB  
Article
MPVT: An Efficient Multi-Modal Prompt Vision Tracker for Visual Target Tracking
by Jianyu Xie, Yan Fu, Junlin Zhou, Tianxiang He, Xiaopeng Wang, Yuke Fang and Duanbing Chen
Appl. Sci. 2025, 15(14), 7967; https://doi.org/10.3390/app15147967 - 17 Jul 2025
Viewed by 133
Abstract
Visual target tracking is a fundamental task in computer vision. Combining multi-modal information with tracking leverages complementary information, which improves the precision and robustness of trackers. Traditional multi-modal tracking methods typically employ a full fine-tuning scheme, i.e., fine-tuning pre-trained single-modal models to multi-modal tasks. However, this approach suffers from low transfer learning efficiency, catastrophic forgetting, and high cross-task deployment costs. To address these issues, we propose an efficient model named multi-modal prompt vision tracker (MPVT) based on an efficient prompt-tuning paradigm. Three key components are involved in the model: a decoupled input enhancement module, a dynamic adaptive prompt fusion module, and a fully connected head network module. The decoupled input enhancement module enhances input representations via positional and type embedding. The dynamic adaptive prompt fusion module achieves efficient prompt tuning and multi-modal interaction using scaled convolution and low-rank cross-modal attention mechanisms. The fully connected head network module addresses the shortcomings of traditional convolutional head networks such as inductive biases. Experimental results from RGB-T, RGB-D, and RGB-E scenarios show that MPVT outperforms state-of-the-art methods. Moreover, MPVT can save 43.8% GPU memory usage and reduce training time by 62.9% compared with a full-parameter fine-tuning model. Full article
(This article belongs to the Special Issue Advanced Technologies Applied for Object Detection and Tracking)
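An illustrative PyTorch module for the low-rank cross-modal attention idea: RGB tokens attend to auxiliary-modality (thermal/depth/event) tokens through projections factorized to rank r. Dimensions, the residual fusion, and the overall structure are assumptions for this sketch, not the MPVT design itself.

```python
import torch
import torch.nn as nn

class LowRankCrossAttention(nn.Module):
    """Cross-attention whose Q/K/V projections are factorized through rank r."""

    def __init__(self, dim: int = 256, rank: int = 16):
        super().__init__()
        def low_rank() -> nn.Sequential:
            return nn.Sequential(nn.Linear(dim, rank, bias=False),
                                 nn.Linear(rank, dim, bias=False))
        self.q, self.k, self.v = low_rank(), low_rank(), low_rank()
        self.scale = dim ** -0.5

    def forward(self, rgb_tokens: torch.Tensor, aux_tokens: torch.Tensor) -> torch.Tensor:
        # rgb_tokens: (B, N, dim) queries; aux_tokens: (B, M, dim) keys/values
        q, k, v = self.q(rgb_tokens), self.k(aux_tokens), self.v(aux_tokens)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return rgb_tokens + attn @ v   # residual fusion of the auxiliary prompt signal

fused = LowRankCrossAttention()(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
```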

17 pages, 2879 KiB  
Article
The Impact of Integrating 3D-Printed Phantom Heads of Newborns with Cleft Lip and Palate into an Undergraduate Orthodontic Curriculum: A Comparison of Learning Outcomes and Student Perception
by Sarah Bühling, Jakob Stuhlfelder, Hedi Xandt, Sara Eslami, Lukas Benedikt Seifert, Robert Sader, Stefan Kopp, Nicolas Plein and Babak Sayahpour
Dent. J. 2025, 13(7), 323; https://doi.org/10.3390/dj13070323 - 16 Jul 2025
Viewed by 202
Abstract
Background/Objectives: This prospective intervention study examined the learning effect of using 3D-printed phantom heads with cleft lip and palate (CLP) and upper jaw models with CLP and maxillary plates during a lecture for dental students in their fourth year at J. W. Goethe Frankfurt University. The primary aim was to evaluate the impact of 3D-printed models on students’ satisfaction levels along with their understanding and knowledge in dental education. Methods: Six life-sized phantom heads with removable mandibles (three with unilateral and three with bilateral CLP) were designed using ZBrush software (Pixologic Inc., Los Angeles, CA, USA) based on MRI images and printed with an Asiga Pro 4K 3D printer (Asiga, Sydney, Australia). Two groups of students (n = 81) participated in this study: the control (CTR) group (n = 39) attended a standard lecture on cleft lip and palate, while the intervention (INT) group (n = 42) participated in a hands-on seminar with the same theoretical content, supplemented by 3D-printed models. Before and after the session, students completed self-assessment questionnaires and a multiple-choice test to evaluate knowledge improvement. Data analysis was conducted using the chi-square test for individual questions and the Wilcoxon rank test for knowledge gain, with the significance level set at 0.05. Results: The study demonstrated a significant knowledge increase in both groups following the lecture (p < 0.001). Similarly, there were significant differences in students’ self-assessments before and after the session (p < 0.001). The knowledge gain in the INT group regarding the anatomical features of unilateral cleft lip and palate was significantly higher compared to that in the CTR group (p < 0.05). Conclusions: The results of this study demonstrate the measurable added value of using 3D-printed models in dental education, particularly in enhancing students’ understanding of the anatomy of cleft lip and palate. Full article
(This article belongs to the Special Issue Dental Education: Innovation and Challenge)

20 pages, 3672 KiB  
Article
Identification of Complicated Lithology with Machine Learning
by Liangyu Chen, Lang Hu, Jintao Xin, Qiuyuan Hou, Jianwei Fu, Yonggui Li and Zhi Chen
Appl. Sci. 2025, 15(14), 7923; https://doi.org/10.3390/app15147923 - 16 Jul 2025
Viewed by 112
Abstract
Lithology identification is one of the most important research areas in petroleum engineering, with applications in reservoir characterization, formation evaluation, and reservoir modeling. Due to the complex structural environment, diverse lithofacies types, and differences between logging data and core data recording standards, there is significant overlap in the logging responses of different lithologies in the second member of the Lucaogou Formation in the Santanghu Basin. Machine learning methods have demonstrated powerful nonlinear capabilities and a strong advantage in addressing complex nonlinear relationships within data. In this paper, based on felsic content, the lithologies in the study area are classified into four categories from high to low: tuff, dolomitic tuff, tuffaceous dolomite, and dolomite. We also select logging attributes that are sensitive to lithology, such as natural gamma, acoustic travel time, neutron, and compensated density. Three machine learning methods, XGBoost, Random Forest, and support vector regression, were selected to conduct lithology identification and favorable reservoir prediction in this study. The prediction results show that when trained with 80% of the predictors, the prediction performance of all three models improved to varying degrees. Among them, Random Forest performed best in predicting felsic content, with an MAE of 0.11, an MSE of 0.020, an RMSE of 0.14, and an R2 of 0.43. XGBoost ranked second, with an MAE of 0.12, an MSE of 0.022, an RMSE of 0.15, and an R2 of 0.42. SVR performed the poorest. Comparing the predicted values with the actual core data shows that the results are relatively close to the XRD measurements, indicating high prediction accuracy. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
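A short scikit-learn sketch of the felsic-content regression and the reported error metrics; the synthetic features, target, and 80/20 split are placeholders for the logging attributes and core/XRD measurements.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# stand-ins for gamma, acoustic travel time, neutron and density -> felsic content
X, y = make_regression(n_samples=400, n_features=4, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

mae = mean_absolute_error(y_te, pred)
mse = mean_squared_error(y_te, pred)
rmse = np.sqrt(mse)
r2 = r2_score(y_te, pred)
print(f"MAE={mae:.2f} MSE={mse:.2f} RMSE={rmse:.2f} R2={r2:.2f}")
```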

11 pages, 1218 KiB  
Article
Predictive Ability of an Objective and Time-Saving Blastocyst Scoring Model on Live Birth
by Bing-Xin Ma, Feng Zhou, Guang-Nian Zhao, Lei Jin and Bo Huang
Biomedicines 2025, 13(7), 1734; https://doi.org/10.3390/biomedicines13071734 - 15 Jul 2025
Viewed by 306
Abstract
Objectives: With the development of artificial intelligence technology in medicine, an intelligent deep learning-based embryo scoring system (iDAScore) has been developed on full time-lapse sequences of embryos. It automatically ranks embryos according to the likelihood of achieving a fetal heartbeat with no manual input from embryologists. To ensure its performance, external validation studies should be performed at multiple clinics. Methods: A total of 6291 single vitrified–thawed blastocyst transfer cycles from 2018 to 2021 at the Reproductive Medicine Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology were retrospectively analyzed by the iDAScore model. Patients with two or more blastocysts transferred and blastocysts that were not cultured in a time-lapse incubator were excluded. Blastocysts were divided into four comparably sized groups by sorting their iDAScore values in ascending order, and the groups were then compared with respect to clinical, perinatal, and neonatal outcomes. Results: Clinical pregnancy, miscarriage, and live birth significantly correlated with iDAScore (p < 0.001). For perinatal and neonatal outcomes, no significant difference was found among the four iDAScore groups, except for the sex ratio. Uni- and multivariable logistic regressions showed that iDAScore was significantly positively correlated with live birth rate (p < 0.05). Conclusions: The objective ranking can prioritize embryos reliably and rapidly for transfer, which could allow embryologists more time for processes requiring hands-on procedures. Full article
(This article belongs to the Special Issue The Art of ART (Assisted Reproductive Technologies))
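The quartile grouping of scores and the score-vs-live-birth association can be sketched with pandas and scikit-learn; the scores and outcomes below are simulated, not clinical data, and the covariates of the multivariable model are omitted.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
score = rng.uniform(1.0, 9.5, size=6291)          # stand-in for iDAScore values
live_birth = rng.binomial(1, 1 / (1 + np.exp(-(score - 5.0) * 0.5)))  # synthetic outcome

df = pd.DataFrame({"score": score, "live_birth": live_birth})
df["group"] = pd.qcut(df["score"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])
print(df.groupby("group", observed=True)["live_birth"].mean())  # rate per quartile

# univariable logistic regression of live birth on the score
logit = LogisticRegression().fit(df[["score"]], df["live_birth"])
print("score coefficient:", logit.coef_[0][0])
```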

23 pages, 29759 KiB  
Article
UAV-Satellite Cross-View Image Matching Based on Adaptive Threshold-Guided Ring Partitioning Framework
by Yushi Liao, Juan Su, Decao Ma and Chao Niu
Remote Sens. 2025, 17(14), 2448; https://doi.org/10.3390/rs17142448 - 15 Jul 2025
Viewed by 280
Abstract
Cross-view image matching between UAV and satellite platforms is critical for geographic localization but remains challenging due to domain gaps caused by disparities in imaging sensors, viewpoints, and illumination conditions. To address these challenges, this paper proposes an Adaptive Threshold-guided Ring Partitioning Framework (ATRPF) for UAV–satellite cross-view image matching. Unlike conventional ring-based methods with fixed partitioning rules, ATRPF innovatively incorporates heatmap-guided adaptive thresholds and learnable hyperparameters to dynamically adjust ring-wise feature extraction regions, significantly enhancing cross-domain representation learning through context-aware adaptability. The framework synergizes three core components: brightness-aligned preprocessing to reduce illumination-induced domain shifts, hybrid loss functions to improve feature discriminability across domains, and keypoint-aware re-ranking to refine retrieval results by compensating for neural networks’ localization uncertainty. Comprehensive evaluations on the University-1652 benchmark demonstrate the framework’s superiority; it achieves 82.50% Recall@1 and 84.28% AP for UAV→Satellite geo-localization, along with 90.87% Recall@1 and 80.25% AP for Satellite→UAV navigation. These results validate the framework’s capability to bridge UAV–satellite domain gaps while maintaining robust matching precision under heterogeneous imaging conditions, providing a viable solution for practical applications such as UAV navigation in GNSS-denied environments. Full article
(This article belongs to the Special Issue Temporal and Spatial Analysis of Multi-Source Remote Sensing Images)
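Recall@1 and AP for cross-view retrieval reduce to a few lines once a query-gallery similarity matrix is available; the similarity matrix and ground-truth mapping below are random placeholders, and AP is computed for the single-true-match case.

```python
import numpy as np

def recall_at_1(sim: np.ndarray, gt: np.ndarray) -> float:
    """sim: (num_queries, num_gallery); gt[i] = gallery index of query i's true match."""
    return float(np.mean(sim.argmax(axis=1) == gt))

def mean_ap_single_match(sim: np.ndarray, gt: np.ndarray) -> float:
    """AP when each query has exactly one relevant gallery item: 1 / rank."""
    order = np.argsort(-sim, axis=1)
    ranks = np.array([np.where(order[i] == gt[i])[0][0] + 1 for i in range(len(gt))])
    return float(np.mean(1.0 / ranks))

rng = np.random.default_rng(0)
sim = rng.normal(size=(200, 700))   # UAV queries vs. satellite gallery similarities
gt = rng.integers(0, 700, size=200)
print(recall_at_1(sim, gt), mean_ap_single_match(sim, gt))
```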

19 pages, 3619 KiB  
Article
An Adaptive Underwater Image Enhancement Framework Combining Structural Detail Enhancement and Unsupervised Deep Fusion
by Semih Kahveci and Erdinç Avaroğlu
Appl. Sci. 2025, 15(14), 7883; https://doi.org/10.3390/app15147883 - 15 Jul 2025
Viewed by 143
Abstract
The underwater environment severely degrades image quality by absorbing and scattering light. This causes significant challenges, including non-uniform illumination, low contrast, color distortion, and blurring. These degradations compromise the performance of critical underwater applications, including water quality monitoring, object detection, and identification. To address these issues, this study proposes a detail-oriented hybrid framework for underwater image enhancement that synergizes the strengths of traditional image processing with the powerful feature extraction capabilities of unsupervised deep learning. Our framework introduces a novel multi-scale detail enhancement unit to accentuate structural information, followed by a Latent Low-Rank Representation (LatLRR)-based simplification step. This unique combination effectively suppresses common artifacts like oversharpening, spurious edges, and noise by decomposing the image into meaningful subspaces. The principal structural features are then optimally combined with a gamma-corrected luminance channel using an unsupervised MU-Fusion network, achieving a balanced optimization of both global contrast and local details. The experimental results on the challenging Test-C60 and OceanDark datasets demonstrate that our method consistently outperforms state-of-the-art fusion-based approaches, achieving average improvements of 7.5% in UIQM, 6% in IL-NIQE, and 3% in AG. Wilcoxon signed-rank tests confirm that these performance gains are statistically significant (p < 0.01). Consequently, the proposed method significantly mitigates prevalent issues such as color aberration, detail loss, and artificial haze, which are frequently encountered in existing techniques. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
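One ingredient of the pipeline, gamma correction applied only to the luminance channel, can be sketched with OpenCV; the gamma value, CIELAB working space, and file name are illustrative choices rather than the paper's exact settings.

```python
import cv2
import numpy as np

def gamma_correct_luminance(bgr: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """Adjust only the L channel in CIELAB, leaving chroma untouched."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    lum = lab[:, :, 0] / 255.0                    # L channel scaled to [0, 1]
    lab[:, :, 0] = np.power(lum, gamma) * 255.0
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)

img = cv2.imread("underwater_frame.png")          # hypothetical input image
if img is not None:
    luminance_branch = gamma_correct_luminance(img)  # later fused with the detail branch
```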
