Search Results (2,241)

Search Parameters:
Keywords = dataset recommender

36 pages, 462 KB  
Article
No Reproducibility, No Progress: Rethinking CT Benchmarking
by Dmitry Polevoy, Danil Kazimirov, Marat Gilmanov and Dmitry Nikolaev
J. Imaging 2025, 11(10), 344; https://doi.org/10.3390/jimaging11100344 - 2 Oct 2025
Abstract
Reproducibility is a cornerstone of scientific progress, yet in X-ray computed tomography (CT) reconstruction, it remains a critical and unresolved challenge. Current benchmarking practices in CT are hampered by the scarcity of openly available datasets, the incomplete or task-specific nature of existing resources, and the lack of transparent implementations of widely used methods and evaluation metrics. As a result, even the fundamental property of reproducibility is frequently violated, undermining objective comparison and slowing methodological progress. In this work, we analyze the systemic limitations of current CT benchmarking, drawing parallels with broader reproducibility issues across scientific domains. We propose an extended data model and formalized schemes for data preparation and quality assessment, designed to improve reproducibility and broaden the applicability of CT datasets across multiple tasks. Building on these schemes, we introduce checklists for dataset construction and quality assessment, offering a foundation for reliable and reproducible benchmarking pipelines. A key aspect of our recommendations is the integration of virtual CT (vCT), which provides highly realistic data and analytically computable phantoms, yet remains underutilized despite its potential to overcome many current barriers. Our work represents a first step toward a methodological framework for reproducible benchmarking in CT. This framework aims to enable transparent, rigorous, and comparable evaluation of reconstruction methods, ultimately supporting their reliable adoption in clinical and industrial applications. Full article
(This article belongs to the Special Issue Tools and Techniques for Improving Radiological Imaging Applications)

44 pages, 7867 KB  
Article
Bridging AI and Maintenance: Fault Diagnosis in Industrial Air-Cooling Systems Using Deep Learning and Sensor Data
by Ioannis Polymeropoulos, Stavros Bezyrgiannidis, Eleni Vrochidou and George A. Papakostas
Machines 2025, 13(10), 909; https://doi.org/10.3390/machines13100909 - 2 Oct 2025
Abstract
This work aims towards the automatic detection of faults in industrial air-cooling equipment used in a production line for staple fibers and ultimately provides maintenance scheduling recommendations to ensure seamless operation. In this context, various deep learning models are tested to ultimately define the most effective one for the intended scope. In the examined system, four vibration and temperature sensors are used, each positioned radially on the motor body near the rolling bearing of the motor shaft—a typical setup in many industrial environments. Thus, by collecting and using data from the latter sources, this work exhaustively investigates the feasibility of accurately diagnosing faults in staple fiber cooling fans. The dataset is acquired and constructed under real production conditions, including variations in rotational speed, motor load, and three fault priorities, depending on the model detection accuracy, product specification, and maintenance requirements. Fault identification for training purposes involves analyzing and evaluating daily maintenance logs for this equipment. Experimental evaluation on real production data demonstrated that the proposed ResNet50-1D model achieved the highest overall classification accuracy of 97.77%, while effectively resolving the persistent misclassification of the faulty impeller observed in all the other models. Complementary evaluation confirmed its robustness, cross-machine generalization, and suitability for practical deployment, while the integration of predictions with maintenance logs enables a severity-based prioritization strategy that supports actionable maintenance planning.
Keywords: deep learning; fault classification; industrial air-cooling; industrial automation; maintenance scheduling; vibration analysis
Full article
24 pages, 9336 KB  
Article
Temporal-Aware and Intent Contrastive Learning for Sequential Recommendation
by Yuan Zhang, Yaqin Fan, Tiantian Sheng and Aoshuang Wang
Symmetry 2025, 17(10), 1634; https://doi.org/10.3390/sym17101634 - 2 Oct 2025
Abstract
In recent years, research in sequential recommendation has primarily refined user intent by constructing sequence-level contrastive learning tasks through data augmentation or by extracting preference information from the latent space of user behavior sequences. However, existing methods suffer from two critical limitations. Firstly, they fail to account for how random data augmentation may introduce unreasonable item associations in contrastive learning samples, thereby perturbing sequential semantic relationships. Secondly, the neglect of temporal dependencies may prevent models from effectively distinguishing between incidental behaviors and stable intentions, ultimately impairing the learning of user intent representations. To address these limitations, we propose TCLRec, a novel temporal-aware and intent contrastive learning framework for sequential recommendation, incorporating symmetry into its architecture. During the data augmentation phase, the model employs a symmetrical contrastive learning architecture and incorporates semantic enhancement operators to integrate user preferences. By introducing user rating information into both branches of the contrastive learning framework, this approach effectively enhances the semantic relevance between positive sample pairs. Furthermore, in the intent contrastive learning phase, TCLRec adaptively attenuates noise information in the frequency domain through learnable filters, while in the pre-training phase of sequence-level contrastive learning, it introduces a temporal-aware network that utilizes additional self-supervised signals to assist the model in capturing both long-term dependencies and short-term interests from user behavior sequences. The model employs a multi-task training strategy that alternately performs intent contrastive learning and sequential recommendation tasks to jointly optimize user intent representations. Comprehensive experiments conducted on the Beauty, Sports, and LastFM datasets demonstrate the soundness and effectiveness of TCLRec, where the incorporation of symmetry enhances the model’s capability to represent user intentions. Full article
(This article belongs to the Section Computer)
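The "learnable filters in the frequency domain" mentioned in this abstract can be illustrated with a minimal sketch: transform an embedding sequence with a real FFT, scale each frequency bin by a per-bin weight (the learnable part), and transform back. This is a generic illustration of the technique, not code from the paper; the function name and shapes are assumptions.

```python
import numpy as np

def frequency_filter(seq, weights):
    """Attenuate noise in a behavior sequence via a frequency-domain filter.

    seq:     (T, d) array of item embeddings along a user's interaction timeline
    weights: (T//2 + 1,) per-frequency-bin weights (learnable in a real model)
    """
    spec = np.fft.rfft(seq, axis=0)            # (T//2 + 1, d) complex spectrum
    filtered = spec * weights[:, None]         # scale each frequency bin
    return np.fft.irfft(filtered, n=seq.shape[0], axis=0)

# Sanity check: an all-ones (identity) filter reconstructs the input exactly.
T, d = 8, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((T, d))
y = frequency_filter(x, np.ones(T // 2 + 1))
```

In a trained model the weight vector would be a parameter updated by gradient descent; shrinking weights on high-frequency bins suppresses incidental, noisy interactions while keeping the stable intent signal.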

14 pages, 2409 KB  
Article
Predicting Plant Breeder Decisions Across Multiple Selection Stages in a Wheat Breeding Program
by Sebastian Michel, Franziska Löschenberger, Christian Ametz, Herbert Bistrich and Hermann Bürstmayr
Crops 2025, 5(5), 69; https://doi.org/10.3390/crops5050069 - 2 Oct 2025
Abstract
Selection decisions in plant breeding programs are complex, and breeders aim to integrate phenotypic impressions, genotypic data, and agronomic performance across multiple selection stages to develop successful varieties. This study investigates whether such decisions can be predicted in a commercial winter wheat (Triticum aestivum L.) breeding program using elastic net models trained on genome-wide distributed markers and genomic estimated breeding values. For this purpose, a dataset of several thousand lines tested between 2015 and 2019 in preliminary, advanced, and elite multi-environment yield trials was analyzed across three decision-making scenarios. The predictive models achieved a higher precision than random selection in all scenarios, with an increased performance when genomic estimated breeding values were included as predictors. Comparisons of breeder selections and model recommendations in terms of selection differentials for key agronomic traits showed a substantial overlap in breeding objectives, while both the breeder’s decisions and the model’s suggestions maintained similar levels of genetic diversity. Although the precision of the elastic net model was of moderate magnitude, divergent model recommendations often identified promising alternative lines, highlighting the potential of artificial intelligence to support decision-making in plant breeding. Full article
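The elastic net models used above combine an L1 and an L2 penalty on the coefficients. As a rough, self-contained sketch of the objective being minimized (the parameterization follows the common scikit-learn convention; all names and values are illustrative, not from the paper):

```python
import numpy as np

def elastic_net_loss(w, X, y, alpha=1.0, l1_ratio=0.5):
    """Elastic net objective: mean squared error plus a mix of L1 and L2 penalties.

    alpha    — overall regularization strength
    l1_ratio — 1.0 gives pure lasso (L1), 0.0 gives pure ridge (L2)
    """
    resid = X @ w - y
    mse = 0.5 * np.mean(resid ** 2)
    penalty = alpha * (l1_ratio * np.abs(w).sum()
                       + 0.5 * (1 - l1_ratio) * (w ** 2).sum())
    return mse + penalty

# Tiny worked example: with alpha=0 the loss reduces to plain MSE.
X = np.eye(2)
y = np.array([0.0, 0.0])
w = np.array([1.0, 1.0])
```

The L1 term drives uninformative marker coefficients exactly to zero, which is why elastic nets are popular for genome-wide predictors with many more markers than lines.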

31 pages, 1105 KB  
Article
MoCap-Impute: A Comprehensive Benchmark and Comparative Analysis of Imputation Methods for IMU-Based Motion Capture Data
by Mahmoud Bekhit, Ahmad Salah, Ahmed Salim Alrawahi, Tarek Attia, Ahmed Ali, Esraa Eldesouky and Ahmed Fathalla
Information 2025, 16(10), 851; https://doi.org/10.3390/info16100851 - 1 Oct 2025
Abstract
Motion capture (MoCap) data derived from wearable Inertial Measurement Units is essential to applications in sports science and healthcare robotics. However, much of this data's potential is limited by missing values arising from sensor limitations, network issues, and environmental interference. Such limitations can introduce bias, prevent the fusion of critical data streams, and ultimately compromise the integrity of human activity analysis. Despite the plethora of data imputation techniques available, there have been few systematic performance evaluations of these techniques specifically for IMU-derived MoCap time series data. We address this by evaluating imputation performance across three distinct contexts: univariate time series, multivariate across players, and multivariate across kinematic angles. To address this limitation, we propose a systematic comparative analysis of imputation techniques, including statistical, machine learning, and deep learning techniques. We also introduce the first publicly available MoCap dataset specifically for the purpose of benchmarking missing value imputation, with three missingness mechanisms: missing completely at random, block missingness, and a value-dependent missingness pattern simulated at signal transition points. Using data from 53 karate practitioners performing standardized movements, we artificially generated missing values to create controlled experimental conditions. We performed experiments across the 53 subjects with 39 kinematic variables, which showed that multivariate imputation frameworks surpass univariate approaches when working with more complex missingness mechanisms. Specifically, multivariate approaches achieved up to a 50% error reduction (with the MAE improving from 10.8 ± 6.9 to 5.8 ± 5.5) compared to univariate methods for transition point missingness. Specialized time series deep learning models (i.e., SAITS, BRITS, GRU-D) demonstrated a superior performance with MAE values consistently below 8.0 for univariate contexts and below 3.2 for multivariate contexts across all missing data percentages, significantly surpassing traditional machine learning and statistical methods. Notable traditional methods such as Generative Adversarial Imputation Networks and Iterative Imputers exhibited a competitive performance but remained less stable than the specialized temporal models. This work offers an important baseline for future studies, in addition to recommendations for researchers looking to increase the accuracy and robustness of MoCap data analysis, as well as integrity and trustworthiness. Full article
(This article belongs to the Section Information Processes)
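The evaluation protocol described above — mask known values, impute, then score with MAE on the masked positions — can be sketched with the simplest univariate baseline, linear interpolation. The function names and toy series here are illustrative assumptions, not code from the benchmark.

```python
import numpy as np

def impute_linear(series):
    """Fill NaNs in a 1-D kinematic signal by linear interpolation
    (a univariate baseline; multivariate methods also use other signals)."""
    s = series.astype(float).copy()
    idx = np.arange(len(s))
    mask = np.isnan(s)
    s[mask] = np.interp(idx[mask], idx[~mask], s[~mask])
    return s

def mae(true, imputed, mask):
    """Mean absolute error, computed only on the artificially masked positions."""
    return np.mean(np.abs(true[mask] - imputed[mask]))

# Mask one point of a known signal, impute, and score on the masked position.
truth = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
observed = truth.copy()
observed[2] = np.nan
filled = impute_linear(observed)
```

On a perfectly linear toy signal the interpolation recovers the masked value exactly (MAE 0); the benchmark's point is that on real kinematic signals, especially at transition points, such univariate baselines degrade and multivariate or temporal deep models take over.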
25 pages, 1159 KB  
Article
Optimizing Agricultural Management Practices for Maize Crops: Integrating Clusterwise Linear Regression with an Adaptation of the Grey Wolf Optimizer
by Germán-Homero Morán-Figueroa, Carlos-Alberto Cobos-Lozada and Oscar-Fernando Bedoya-Leyva
Agriculture 2025, 15(19), 2068; https://doi.org/10.3390/agriculture15192068 - 1 Oct 2025
Abstract
Effectively managing agricultural practices is crucial for maximizing yield, reducing investment costs, preserving soil health, ensuring sustainability, and mitigating environmental impact. This study proposes an adaptation of the Grey Wolf Optimizer (GWO) metaheuristic to operate under specific constraints, with the goal of identifying optimal agricultural practices that boost maize crop yields and enhance economic profitability for each farm. To achieve this objective, we employ a probabilistic algorithm that constructs a model based on Clusterwise Linear Regression (CLR) as the primary method for predicting crop yield. This model considers several factors, including climate, soil conditions, and agricultural practices, which can vary depending on the specific location of the crop. We compare the performance of the Grey Wolf Optimizer (GWO) algorithm with other optimization techniques, including Hill Climbing (HC) and Simulated Annealing (SA). This analysis utilizes a dataset of maize crops from the Department of Córdoba in Colombia, where agricultural practices were optimized. The results indicate that the probabilistic algorithm defines a two-group CLR model as the best approach for predicting maize yield, achieving a 5% higher fit compared to other machine learning algorithms. Furthermore, the Grey Wolf Optimizer (GWO) metaheuristic achieved the best optimization performance, recommending agricultural practices that increased farm yield and profitability by 50% relative to the original practices. Overall, these findings demonstrate that the proposed algorithm can recommend optimal practices that are both technically feasible and economically viable for implementation and replication. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
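The Grey Wolf Optimizer driving the recommendations above can be sketched in its textbook form: each wolf moves toward the three best solutions (alpha, beta, delta) while an exploration coefficient decays from 2 to 0. This is a generic, unconstrained GWO sketch on a toy objective, not the paper's constrained adaptation; all names and values are assumptions.

```python
import numpy as np

def gwo_minimize(f, lb, ub, n_wolves=10, iters=50, seed=0):
    """Minimal Grey Wolf Optimizer: wolves are pulled toward the three
    current best positions; the coefficient 'a' decays from 2 to 0."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        leaders = X[np.argsort(fitness)[:3]].copy()   # alpha, beta, delta
        a = 2 - 2 * t / iters
        for i in range(n_wolves):
            moves = []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                moves.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(moves, axis=0), lb, ub)  # stay in bounds
    fitness = np.array([f(x) for x in X])
    return X[fitness.argmin()]

# Toy run on the sphere function; the paper instead optimizes a
# CLR-based yield/profit model under agronomic constraints.
best = gwo_minimize(lambda x: np.sum(x ** 2),
                    np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

The paper's adaptation additionally enforces feasibility constraints on the practice variables; clipping to bounds is the simplest stand-in for that mechanism.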
90 pages, 29362 KB  
Review
AI for Wildfire Management: From Prediction to Detection, Simulation, and Impact Analysis—Bridging Lab Metrics and Real-World Validation
by Nicolas Caron, Hassan N. Noura, Lise Nakache, Christophe Guyeux and Benjamin Aynes
AI 2025, 6(10), 253; https://doi.org/10.3390/ai6100253 - 1 Oct 2025
Abstract
Artificial intelligence (AI) offers several opportunities in wildfire management, particularly for improving short- and long-term fire occurrence forecasting, spread modeling, and decision-making. When properly adapted beyond research into real-world settings, AI can significantly reduce risks to human life, as well as ecological and economic damages. However, despite increasingly sophisticated research, the operational use of AI in wildfire contexts remains limited. In this article, we review the main domains of wildfire management where AI has been applied—susceptibility mapping, prediction, detection, simulation, and impact assessment—and highlight critical limitations that hinder practical adoption. These include challenges with dataset imbalance and accessibility, the inadequacy of commonly used metrics, the choice of prediction formats, and the computational costs of large-scale models, all of which reduce model trustworthiness and applicability. Beyond synthesizing existing work, our survey makes four explicit contributions: (1) we provide a reproducible taxonomy supported by detailed dataset tables, emphasizing both the reliability and shortcomings of frequently used data sources; (2) we propose evaluation guidance tailored to imbalanced and spatial tasks, stressing the importance of using accurate metrics and format; (3) we provide a complete state of the art, highlighting important issues and recommendations to enhance models’ performances and reliability from susceptibility to damage analysis; (4) we introduce a deployment checklist that considers cost, latency, required expertise, and integration with decision-support and optimization systems. By bridging the gap between laboratory-oriented models and real-world validation, our work advances prior reviews and aims to strengthen confidence in AI-driven wildfire management while guiding future research toward operational applicability. Full article

17 pages, 3413 KB  
Article
Determination of Coal and Biomass Co-Combustion Process States Using Convolutional Neural Networks
by Andrzej Kotyra and Konrad Gromaszek
Energies 2025, 18(19), 5219; https://doi.org/10.3390/en18195219 - 1 Oct 2025
Abstract
The paper presents the application of high-speed flame imaging combined with convolutional neural networks (CNNs) for determining different states of biomass–coal co-combustion in terms of thermal power and excess air coefficient. The experimental setup and methodology used in a laboratory-scale co-combustion system are described, highlighting tests conducted across nine defined operational variants. The performance of several state-of-the-art CNN architectures was examined, focusing particularly on those achieving the highest classification metrics and exploring the dependence of input image resolution and applying a transfer learning paradigm. By benchmarking various CNNs on a large, diverse image dataset without preprocessing, the research advances intelligent, automated control systems for improved stability, efficiency, and emissions control, bridging advanced visual diagnostics with real-time industrial applications. The summary includes recommendations and potential directions for further research related to the use of image data and machine learning techniques in industry. Full article
(This article belongs to the Special Issue Optimization of Efficient Clean Combustion Technology: 2nd Edition)

20 pages, 8772 KB  
Article
An Assessment of the Applicability of ERA5 Reanalysis Boundary Layer Data Against Remote Sensing Observations in Mountainous Central China
by Jinyu Wang, Zhe Li, Yun Liang and Jiaying Ke
Atmosphere 2025, 16(10), 1152; https://doi.org/10.3390/atmos16101152 - 1 Oct 2025
Abstract
The precision of ERA5 reanalysis datasets and their applicability in the mountainous regions of central China are essential for weather forecasting and climate change research in the transitional zone between northern and southern China. This study employs three months of continuous measurements collected from a high-precision remote sensing platform located in a representative mountainous valley (Xinyang city) in central China, spanning December 2024 to February 2025. Our findings indicate that both horizontal and vertical wind speeds from the ERA5 dataset exhibit diminishing deviations as altitude increases. Significant biases are observed below 500 m, with horizontal mean wind speed deviations ranging from −4 to −3 m/s and vertical mean wind speed deviations falling between 0.1 and 0.2 m/s. Conversely, minimal biases are noted near the top of the boundary layer. Both ERA5 and observations reveal a dominance of northeasterly and southwesterly winds at near-surface levels, which aligns with the valley orientation. This underscores the substantial impact of heterogeneous mountainous terrain on the low-level dynamic field. At an altitude of 1000 m, both datasets present similar frequency patterns, with peak frequencies of approximately 15%; however, notable discrepancies in peak wind directions are evident (north–northeast for observations and north–northwest for ERA5). In contrast to dynamic variables, ERA5 temperature deviations are centered around 0 K within the lower layers (0–500 m) but show a slight increase, varying from around 0 K to 6.8 K, indicating an upward trend in deviation with altitude. Similarly, relative humidity (RH) demonstrates an increasing bias with altitude, although its representation of moisture variability remains insufficient. During a typical cold event, substantial deviations in multiple ERA5 variables highlight the need for further improvements. The integration of machine learning techniques and mathematical correction algorithms is strongly recommended as a means to enhance the accuracy of ERA5 data under such extreme conditions. These findings contribute to a deeper understanding of the use of ERA5 datasets in the mountainous areas of central China and offer reliable scientific references for weather forecasting and climate modeling in these areas. Full article
(This article belongs to the Special Issue Data Analysis in Atmospheric Research)

22 pages, 1282 KB  
Article
Balancing Privacy and Accuracy in Healthcare AI: Federated Learning with AutoML for Blood Pressure Prediction
by Suhyeon Kim, Kyoung Jun Lee, Taekyung Kim and Arum Park
Appl. Sci. 2025, 15(19), 10624; https://doi.org/10.3390/app151910624 - 30 Sep 2025
Abstract
The widening gap between life expectancy and healthy life years underscores the need for scalable, adaptive, and privacy-conscious healthcare solutions. In this study, we integrate the AMPER (Aim–Measure–Predict–Evaluate–Recommend) framework with Bidirectional Encoder Representations from Transformers (BERT), Automated Machine Learning (AutoML), and privacy-preserving Federated Learning (FL) to deliver personalized hypertension management. Building on sequential data modeling and privacy-preserving AI, we apply this framework to the MIMIC-III dataset, using key variables—gender, age, systolic blood pressure (SBP), and body mass index (BMI)—to forecast future SBP values. Experimental results show that combining BERT with Moving Average (MA) or AutoRegressive Integrated Moving Average (ARIMA) models improves predictive accuracy, and that personalized FL (Per-FedAvg) significantly outperforms local models while maintaining data confidentiality. However, FL performance remains lower than direct data sharing, revealing a trade-off between accuracy and privacy. These findings demonstrate the feasibility of integrating AutoML, advanced sequence modeling, and FL within a structured health management framework. We conclude by discussing theoretical, clinical, and ethical implications, and outline directions for enhancing personalization, multimodal integration, and cross-institutional scalability. Full article
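The federated learning component above rests on FedAvg-style aggregation: clients train locally and the server averages their parameters, weighted by local dataset size, so raw records never leave the institution. A minimal sketch of that aggregation step (names and toy values are illustrative, not from the study):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors,
    weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    W = np.stack(client_weights)                      # (n_clients, n_params)
    fractions = sizes / sizes.sum()                   # per-client weight
    return (W * fractions[:, None]).sum(axis=0)

# Two hypothetical hospitals: the larger one (3x the data) dominates the average.
global_w = fed_avg([np.array([1.0, 0.0]), np.array([3.0, 2.0])], [1, 3])
```

Per-FedAvg, used in the paper, adds a personalization step (each client fine-tunes the aggregated model locally), which is how it outperforms purely local models while keeping data confidential.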
18 pages, 898 KB  
Article
TimeWeaver: Time-Aware Sequential Recommender System via Dual-Stream Temporal Network
by Yang Liu, Tao Wang and Yan Ma
Systems 2025, 13(10), 857; https://doi.org/10.3390/systems13100857 - 29 Sep 2025
Abstract
Recommender systems are data-driven tools designed to assist or automate users’ decision-making. With the growing demand of personalized sequential recommendations in business intelligence or e-commerce, effectively capturing temporal information from massive user-sequence data has become a crucial challenge. State-of-the-art attention-based models often struggle to balance performance with computational cost, while traditional convolutional neural networks suffer from limited receptive fields and rigid architectures that inadequately model dynamic user interests. To address these limitations, this paper proposes TimeWeaver, a time-aware dual-stream network for sequential recommendation, whose core innovations comprise three key components. First, it employs a re-parameterized large-kernel convolution to expand the effective receptive field. Second, we design a Time-Aware Augmentation mechanism that integrates inter-event time-interval information into positional encodings of items. This allows it to perceive the temporal dynamics of user behavior. Finally, we propose a dual-stream architecture to jointly capture dependencies across different time scales. The context stream employs a modern Temporal Convolutional Network (TCN) structure to strengthen the memorization of users’ medium- and long-term interests. In parallel, the dynamic stream leverages an Exponential Moving Average (EMA) mechanism to weight recent behaviors for sensitively capturing users’ immediate interests. This dual-stream design allows TimeWeaver to comprehensively extract both long- and short-term sequential features. Extensive experiments on three public e-commerce datasets demonstrate TimeWeaver’s superiority. Compared to the strongest baseline model, TimeWeaver achieves average relative improvements of 4.62%, 9.59%, and 4.59% across all metrics on the Beauty, Sports, and Toys datasets, respectively. Full article
(This article belongs to the Special Issue Data-Driven Insights with Predictive Marketing Analysis)
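The Exponential Moving Average mechanism in the dynamic stream above weights recent behaviors geometrically more than older ones. A minimal sketch of that recurrence (the function name and toy sequence are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def ema_stream(seq, alpha=0.5):
    """Exponential moving average over a behavior sequence:
    state_t = alpha * x_t + (1 - alpha) * state_{t-1},
    so the most recent interactions dominate the final state."""
    state = np.asarray(seq[0], dtype=float)
    for x in seq[1:]:
        state = alpha * np.asarray(x, dtype=float) + (1 - alpha) * state
    return state

# A late spike (4.0) after two zeros pulls the state to 2.0 with alpha=0.5,
# illustrating the sensitivity to immediate interests.
recent = ema_stream(np.array([0.0, 0.0, 4.0]), alpha=0.5)
```

In the full model this recurrence runs over embedding vectors rather than scalars, and its output feeds the dynamic stream alongside the TCN-based context stream.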

11 pages, 222 KB  
Perspective
Oculoplastics and Augmented Intelligence: A Literature Review
by Edsel Ing and Mostafa Bondok
J. Clin. Med. 2025, 14(19), 6875; https://doi.org/10.3390/jcm14196875 - 28 Sep 2025
Abstract
Artificial intelligence (AI) and augmented intelligence have significant potential in oculoplastics, offering tools for diagnosis, treatment recommendations, and administrative efficiency. This article discusses current and potential applications of AI in ptosis, eyelid and conjunctival cancer, thyroid-associated orbitopathy (TAO), giant cell arteritis (GCA), and orbital fractures. AI-based programs can assist in screening, predicting surgical outcomes, and improving patient care through data-driven decisions. Privacy concerns, particularly with the use of facial and ocular photographs, require robust solutions, including blockchain, federated learning and steganography. Large generalizable datasets with adequate validation are crucial for future AI development. While AI can assist in clinical decision-making and administrative tasks, physician oversight remains critical to prevent potential errors. Large language models like ChatGPT also have the potential to counsel patients, although further validation is needed to ensure accuracy and patient safety. Ultimately, AI should be regarded as an augmentative tool that supports, rather than replaces, physician expertise in oculoplastic care. Full article
(This article belongs to the Special Issue Augmented and Artificial Intelligence in Ophthalmology)
15 pages, 1632 KB  
Article
Plastid RNA Editing in Glycyrrhiza uralensis: Landscape Characterization and Comparative Assessment of RNA-Seq Library Strategies for Detection
by Hui Ma, Yixuan Rao, Yinxiao Lu, Na Fang, Yijia Huang and Lei Gong
Genes 2025, 16(10), 1142; https://doi.org/10.3390/genes16101142 - 26 Sep 2025
Abstract
Background: Plastid RNA editing is widespread in angiosperms yet remains underexplored in the medicinal non-model species Glycyrrhiza uralensis. This study aimed to (i) comprehensively identify plastid RNA editing sites in G. uralensis, and (ii) compare the detection performance of three library construction strategies: total RNA-seq, rRNA-depleted RNA-seq, and mRNA-seq. Methods: Leaf tissue was used from three wild-sampled individual plants. Plastomes were assembled with GetOrganelle v1.7.0 and annotated using PGA. Strand-specific RNA-seq libraries were mapped to sample-matched plastomes using HISAT2 v2.2.1. Variants were identified using REDItools v2.0 under uniform thresholds. Candidate sites were visually verified in IGV v2.12.3, and read origins were confirmed by BLAST v2.13.0+; artifacts were removed via strand-specific filtering. Results: After stringent filtering, 38 high-confidence RNA editing sites were identified across 19 genes. Total RNA-seq performed best, detecting 37/38 sites consistently, whereas rRNA-depleted libraries detected fewer genuine sites and produced numerous rRNA-linked, noncanonical, noncoding-strand-dominant artifacts. Despite very low rates of plastid mapping, mRNA-seq recovered a large fraction of bona fide sites under stringent, strand-aware filtering. Conclusions: We establish a set of 38 high-confidence plastid RNA editing sites in G. uralensis and suggest potential adaptive implications of editing in ndh-related genes. Methodologically, total RNA-seq is recommended for de novo identification of RNA editing sites owing to its high sensitivity and low false-positive rate; publicly available poly(A)-selected mRNA-seq datasets can be repurposed to reliably retrieve plastid RNA editing sites when stringent strand-specific filtering is applied. Full article
(This article belongs to the Section Plant Genetics and Genomics)
35 pages, 3077 KB  
Article
A New G Family: Properties, Characterizations, Different Estimation Methods and PORT-VaR Analysis for U.K. Insurance Claims and U.S. House Prices Data Sets
by Ahmad M. AboAlkhair, G. G. Hamedani, Nazar Ali Ahmed, Mohamed Ibrahim, Mohammad A. Zayed and Haitham M. Yousof
Mathematics 2025, 13(19), 3097; https://doi.org/10.3390/math13193097 - 26 Sep 2025
Abstract
This paper introduces a new class of probability distributions, termed the generated log exponentiated polynomial (GLEP) family, designed to enhance flexibility in modeling complex real financial data. The proposed family is constructed through a novel cumulative distribution function that combines logarithmic and exponentiated polynomial structures, allowing for rich distributional shapes and tail behaviors. We present comprehensive mathematical properties, including useful series expansions for the density, cumulative, and quantile functions, which facilitate the derivation of moments, generating functions, and order statistics. Characterization results based on the reverse hazard function and conditional expectations are established. The model parameters are estimated using various frequentist methods, including Maximum Likelihood Estimation (MLE), Cramér–von Mises (CVM), Anderson–Darling (ADE), Right Tail Anderson–Darling (RTADE), and Left Tail Anderson–Darling (LTADE), with a comparative simulation study assessing their performance. Risk analysis is conducted using actuarial key risk indicators (KRIs) such as Value-at-Risk (VaR), Tail Value-at-Risk (TVaR), Tail Variance (TV), Tail Mean Variance (TMV), and excess function (EL), demonstrating the model’s applicability in financial and insurance contexts. The practical utility of the GLEP family is illustrated through applications to real and simulated datasets, including house price dynamics and insurance claim sizes. Peaks Over Random Threshold Value-at-Risk (PORT-VaR) analysis is applied to U.K. motor insurance claims and U.S. house prices datasets. Practical recommendations are provided. Finally, a comparative study demonstrates the superiority of the new family. Full article
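The key risk indicators named in the abstract can be illustrated nonparametrically. The sketch below computes empirical VaR and TVaR on synthetic heavy-tailed claims; it is a generic illustration under assumed lognormal data, not the paper's GLEP-based fit, and the confidence level and sample are arbitrary choices.

```python
# Hedged sketch: empirical Value-at-Risk (VaR) and Tail Value-at-Risk
# (TVaR), two of the key risk indicators (KRIs) discussed in the paper.
# Synthetic lognormal claims stand in for the fitted GLEP model.
import numpy as np

def var_tvar(losses, level=0.95):
    """Empirical VaR is the `level` quantile of the losses; TVaR is the
    mean loss conditional on the loss reaching at least VaR."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, level)
    tvar = losses[losses >= var].mean()
    return var, tvar

rng = np.random.default_rng(0)
claims = rng.lognormal(mean=8.0, sigma=1.2, size=10_000)  # heavy-tailed synthetic claims
v, tv = var_tvar(claims, level=0.95)
print(f"VaR(95%) = {v:,.0f}, TVaR(95%) = {tv:,.0f}")
```

By construction TVaR is at least VaR, since it averages only the exceedances; the PORT-VaR analysis in the paper replaces the fixed quantile with a random threshold chosen from the peaks of the sample.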
(This article belongs to the Special Issue Statistical Methods for Forecasting and Risk Analysis)

20 pages, 1447 KB  
Article
Foreign Finance and Renewable Energy Transition in D8 Countries: The Moderating Role of Globalization
by Nesrine Gafsi
J. Risk Financial Manag. 2025, 18(10), 545; https://doi.org/10.3390/jrfm18100545 - 25 Sep 2025
Abstract
This study examines the role of foreign finance in promoting the shift to renewable energy in the Developing-8 (D8) countries—Bangladesh, Egypt, Indonesia, Iran, Malaysia, Nigeria, Pakistan, and Turkey—between 2000 and 2023, with particular focus on the moderating role of globalization. Utilizing an unbalanced panel dataset covering the eight D8 countries over 2000–2023 and applying advanced econometric techniques, including System-GMM, Common Correlated Effects, and Driscoll–Kraay estimators, the analysis accounts for slope heterogeneity, cross-sectional dependence, and possible endogeneity. The results indicate that foreign finance, and particularly foreign direct investment (FDI), is highly significant in enhancing the supply and demand of renewable energy. Globalization also has an amplification effect, as it spurs technology transfer, policy convergence, and market access. The combined impact of foreign finance and globalization is significant and positive in all specifications, indicating that the optimal benefits of foreign capital inflows are realized in highly integrated economies. Robustness tests using alternative globalization and renewable energy measures confirm the stability of the findings. The study argues that reinforcing institutional foundations, strengthening global integration, and channeling foreign finance into green sectors are central policies for fostering renewable energy transitions in developing economies. This paper provides three contributions to the existing literature. First, it is the first paper to systematically examine the moderating function of globalization on the foreign finance–renewable energy transition nexus in the D8 economies. Second, it applies the latest econometric techniques—System-GMM, CCE, and Driscoll–Kraay—that control for slope heterogeneity, cross-sectional dependence, and endogeneity. Third, it offers policy recommendations for emerging economies on how best to mobilize foreign finance in a globalization context. Unlike prior works that examine these dimensions separately, this study highlights their joint influence, thereby contributing a dual perspective that has been largely absent from the literature. Full article
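The moderation hypothesis at the heart of this abstract is typically specified as a regression with an interaction term, RE = b0 + b1·FDI + b2·GLOB + b3·(FDI×GLOB) + e, where a positive b3 means FDI's marginal effect grows with globalization. The sketch below illustrates that specification with plain pooled OLS on synthetic data; the coefficients, panel dimensions, and estimator are all illustrative stand-ins for the System-GMM and Driscoll–Kraay estimators actually used in the study.

```python
# Hedged sketch: the moderation (interaction) specification on synthetic
# data. Pooled OLS via least squares stands in for the paper's panel
# estimators; the data-generating coefficients below are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 8 * 24  # 8 D8 countries x 24 years (2000-2023), pooled

fdi = rng.normal(size=n)   # standardized FDI inflows (synthetic)
glob = rng.normal(size=n)  # standardized globalization index (synthetic)
# True process: positive direct effects and a positive interaction.
re_share = 0.5 + 0.3 * fdi + 0.2 * glob + 0.4 * fdi * glob \
    + rng.normal(scale=0.1, size=n)

# Design matrix with intercept, main effects, and interaction term.
X = np.column_stack([np.ones(n), fdi, glob, fdi * glob])
beta, *_ = np.linalg.lstsq(X, re_share, rcond=None)
b0, b_fdi, b_glob, b_inter = beta
print(f"FDI: {b_fdi:.2f}, globalization: {b_glob:.2f}, "
      f"interaction: {b_inter:.2f}")
```

Recovering a positive interaction coefficient here mirrors the paper's finding that foreign capital inflows pay off most in highly integrated economies; the real analysis additionally instruments for endogeneity and corrects standard errors for cross-sectional dependence.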
(This article belongs to the Section Economics and Finance)
