Search Results (64)

Search Parameters:
Keywords = widely more generalized hybrid mappings

31 pages, 3779 KB  
Article
Assessing Climate Change Impacts on Future Precipitation Using Random Forest Statistical Downscaling of CMIP6 HadGEM3 Projections in the Büyük Menderes Basin
by Ismail Ara, Mutlu Yasar and Gurhan Gurarslan
Water 2026, 18(2), 277; https://doi.org/10.3390/w18020277 - 21 Jan 2026
Viewed by 215
Abstract
Climate change increasingly threatens the sustainability of regional water resources; therefore, robust station-scale precipitation projections are essential for basin-level planning. This study aims to develop and evaluate a hybrid, machine-learning-based statistical downscaling framework to generate monthly precipitation projections for the 21st century in the Büyük Menderes Basin, western Türkiye, using the HadGEM3-GC31-LL global climate model from CMIP6. Monthly observations from 23 rainfall observation stations and ERA5 reanalysis predictors were employed to train station-specific Random Forest (RF) models, with optimal predictor sets identified through a multistage selection procedure (MPSP). Coarse-resolution general circulation model (GCM) fields were harmonized with ERA5 data using a three-stage inverse distance weighting (IDW), Delta, and Variance rescaling approach. The downscaled projections were bias-corrected using Quantile Delta Mapping (QDM) to maintain the climate-change signal. The RF models exhibited strong predictive skill across most stations, with test Nash–Sutcliffe Efficiency (NSE) values ranging from 0.45 to 0.81, RSR values from 0.43 to 0.74, and PBIAS values from −21.99% to +5.29%. Future projections indicate a basin-wide drying trend under both scenarios. Relative to the baseline, mean annual precipitation is projected to decrease by approximately 12.2, 19.6, and 33.7 mm in the near (2025–2050), mid (2051–2075), and late (2076–2099) periods under SSP2-4.5 (Shared Socioeconomic Pathway 2-4.5, a moderate greenhouse gas scenario). Under the high-emission SSP5-8.5 scenario, projected decreases are 25.2, 53.2, and 86.9 mm, respectively. Late-century reductions reach approximately 15–22% in several sub-basins. These findings indicate a substantial decline in future water availability and underscore the value of RF-based hybrid downscaling and trend-preserving bias correction for water resources planning in semi-arid Mediterranean basins.
Full article
(This article belongs to the Special Issue Climate Change Adaptation in Water Resource Management)
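The skill scores quoted in this abstract (NSE, RSR, PBIAS) are standard hydrologic evaluation metrics. A minimal sketch of how they are computed — not the authors' code, and note that the sign convention for PBIAS differs between sources:

```python
def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 means no better than
    predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def rsr(obs, sim):
    """RMSE standardized by the observations' standard deviation;
    algebraically equal to sqrt(1 - NSE)."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return (sse / sst) ** 0.5

def pbias(obs, sim):
    """Percent bias, 100 * sum(obs - sim) / sum(obs); under this
    (Moriasi-style) convention, positive values mean underestimation."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
```

Because RSR = sqrt(1 − NSE), the reported ranges (NSE 0.45–0.81, RSR 0.43–0.74) are mutually consistent.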

34 pages, 6023 KB  
Article
Multi-Dimensional Evaluation of Auto-Generated Chain-of-Thought Traces in Reasoning Models
by Luis F. Becerra-Monsalve, German Sanchez-Torres and John W. Branch-Bedoya
AI 2026, 7(1), 35; https://doi.org/10.3390/ai7010035 - 21 Jan 2026
Viewed by 217
Abstract
Automatically generated chains-of-thought (gCoTs) have become common as large language models adopt deliberative behaviors. Prior work emphasizes fidelity to internal processes, leaving explanatory properties underexplored. Our central hypothesis is that these traces, produced by highly capable reasoning models, are not arbitrary by-products of decoding but exhibit stable and practically valuable textual properties beyond answer fidelity. We apply a multidimensional text-evaluation framework that quantifies four axes—structural coherence, logical–factual consistency, linguistic clarity, and coverage/informativeness—that are standard dimensions for assessing textual quality, and use it to evaluate five reasoning models on the GSM8K arithmetic word-problem benchmark (~1.3 k–1.4 k items) with reproducible, normalized metrics. Logical verification shows near-ceiling self-consistency, measured by the Aggregate Consistency Score (ACS ≈ 0.95–1.00), and high final-answer entailment, measured by Final Answer Soundness (FAS0 ≈ 0.85–1.00); when sound, justifications are compact, with Justification Set Size (JSS ≈ 0.51–0.57) and moderate redundancy, measured by the Redundant Constraint Ratio (RCR ≈ 0.62–0.70). Results also show consistent coherence and clarity; implication from gCoT to answer is stricter than support from question to gCoT, indicating chains anchored to the prompt. We find no systematic trade-off between clarity and informativeness (within-model slopes ≈ 0). In addition to these automatic and logic-based metrics, we include an exploratory expert rating of a subset (four raters; 50 items × five models) to contextualize model differences; these human judgments are not intended to support dataset-wide generalization. Overall, gCoTs display explanatory value beyond fidelity, primarily supported by the automated and logic-based analyses, motivating hybrid evaluation (automatic + exploratory human) to map convergence/divergence zones for user-facing applications.
Full article
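The consistency metrics above (ACS, FAS, JSS, RCR) are the paper's own. As a toy illustration of the general idea of logic-based verification on GSM8K-style traces — a simplified sketch, not the authors' framework — one can score the fraction of explicit "a op b = c" steps in a chain-of-thought whose arithmetic actually holds:

```python
import re

# Matches simple binary arithmetic claims such as "3 + 4 = 7".
STEP = re.compile(r"(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*=\s*(-?\d+)")

def arithmetic_consistency(trace):
    """Fraction of explicit 'a op b = c' statements in a trace whose
    arithmetic actually holds; returns 1.0 when no such statements occur."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b if b else None}
    checks = []
    for a, op, b, c in STEP.findall(trace):
        result = ops[op](int(a), int(b))
        checks.append(result is not None and result == int(c))
    return sum(checks) / len(checks) if checks else 1.0
```

A real verifier would also check entailment between steps; this checker only validates each arithmetic claim in isolation.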

17 pages, 1796 KB  
Article
Optical Genome Mapping Enhances Structural Variant Detection and Refines Risk Stratification in Chronic Lymphocytic Leukemia
by Soma Roy Chakraborty, Michelle A. Bickford, Narcisa A. Smuliac, Kyle A. Tonseth, Jing Bao, Farzana Murad, Irma G. Domínguez Vigil, Heather B. Steinmetz, Lauren M. Wainman, Parth Shah, Elizabeth M. Bengtson, Swaroopa PonnamReddy, Gabriella A. Harmon, Liam L. Donnelly, Laura J. Tafe, Jeremiah X. Karrs, Prabhjot Kaur and Wahab A. Khan
Genes 2026, 17(1), 106; https://doi.org/10.3390/genes17010106 - 19 Jan 2026
Viewed by 310
Abstract
Background: Optical genome mapping (OGM) detects genome-wide structural variants (SVs), including balanced rearrangements and complex copy-number alterations beyond standard-of-care cytogenomic assays. In chronic lymphocytic leukemia (CLL), cytogenetic and genomic risk stratification is traditionally based on fluorescence in situ hybridization (FISH), karyotyping, targeted next-generation sequencing (NGS), and immunogenetic assessment of immunoglobulin heavy chain variable region (IGHV) somatic hypermutation status, each of which interrogates only a limited aspect of disease biology. Methods: We retrospectively evaluated fifty patients with CLL using OGM and integrated these findings with cytogenomics, targeted NGS, IGHV mutational status, and clinical time-to-first-treatment (TTFT) data. Structural variants were detected using OGM, and pathogenic NGS variants were derived from a clinical heme malignancy panel. Clinical outcomes were extracted from the electronic medical record. Results: OGM identified reportable structural variants in 82% (41/50) of cases. The most frequent abnormality was del(13q), observed in 29/50 (58%) and comprising 73% (29/40) of all OGM-detected deletions with pathologic significance. Among these, 12/29 (42%) represented large RB1-spanning deletions, while 17/29 (58%) were focal deletions restricted to the miR15a/miR16-1 minimal region, mapping to the non-coding host gene DLEU2. Co-occurrence of adverse lesions, including deletion 11q/ATM, BIRC3 loss, trisomy 12, and deletion 17p/TP53, was recurrent and strongly associated with shorter TTFT. OGM also uncovered multiple cryptic rearrangements involving chromosomal loci that are not represented in the canonical CLL FISH probe panel, including IGL::CCND1, IGH::BCL2, IGH::BCL11A, IGH::BCL3, and multi-chromosomal copy-number complexity. 
IGHV data were available in 37/50 (74%) of patients; IGHV-unmutated status frequently co-segregated with OGM-defined high-risk profiles (del(11q), del(17p), trisomy 12 with secondary hits, and complex genomes), whereas mutated IGHV predominated in OGM-negative or structurally simple del(13q) cases and aligned with indolent TTFT. Integration of OGM with NGS further improved genomic risk classification, particularly in cases with discordant or inconclusive routine testing. Conclusions: OGM provides a comprehensive, genome-wide view of structural variation in CLL, resolving deletion architecture, identifying cryptic translocations, and defining complex multi-hit genomic profiles that tracked closely with clinical behavior. Combining OGM and NGS analysis refines risk stratification beyond standard FISH panels and supports more precise, individualized management strategies in CLL. Prospective studies are warranted to evaluate the clinical utility of OGM-guided genomic profiling in contemporary treatment paradigms. Full article

35 pages, 3598 KB  
Article
PlanetScope Imagery and Hybrid AI Framework for Freshwater Lake Phosphorus Monitoring and Water Quality Management
by Ying Deng, Daiwei Pan, Simon X. Yang and Bahram Gharabaghi
Water 2026, 18(2), 261; https://doi.org/10.3390/w18020261 - 19 Jan 2026
Viewed by 208
Abstract
Accurate estimation of Total Phosphorus, referred to as “Phosphorus, Total” (PPUT; µg/L) in the sourced monitoring data, is essential for understanding eutrophication dynamics and guiding water-quality management in inland lakes. However, lake-wide PPUT mapping at high resolution is challenging to achieve using conventional in-situ sampling, and nearshore gradients are often poorly resolved by medium- or low-resolution satellite sensors. This study exploits multi-generation PlanetScope imagery (Dove Classic, Dove-R, and SuperDove; 3–5 m, near-daily revisit) to develop a hybrid AI framework for PPUT retrieval in Lake Simcoe, Ontario, Canada. PlanetScope surface reflectance, short-term meteorological descriptors (3 to 7-day aggregates of air temperature, wind speed, precipitation, and sea-level pressure), and in-situ Secchi depth (SSD) were used to train five ensemble-learning models (HistGradientBoosting, CatBoost, RandomForest, ExtraTrees, and GradientBoosting) across eight feature-group regimes that progressively extend from bands-only, to combinations with spectral indices and day-of-year (DOY), and finally to SSD-inclusive full-feature configurations. The inclusion of SSD led to a strong and systematic performance gain, with mean R2 increasing from about 0.67 (SSD-free) to 0.94 (SSD-aware), confirming that vertically integrated optical clarity is the dominant constraint on PPUT retrieval and cannot be reconstructed from surface reflectance alone. To enable scalable SSD-free monitoring, a knowledge-distillation strategy was implemented in which an SSD-aware teacher transfers its learned representation to a student using only satellite and meteorological inputs. The optimal student model, based on a compact subset of 40 predictors, achieved R2 = 0.83, RMSE = 9.82 µg/L, and MAE = 5.41 µg/L, retaining approximately 88% of the teacher’s explanatory power. 
Application of the student model to PlanetScope scenes from 2020 to 2025 produces meter-scale PPUT maps; a 26 July 2024 case study shows that >97% of the lake surface remains below 10 µg/L, while rare (<1%) but coherent hotspots above 20 µg/L align with tributary mouths and narrow channels. The results demonstrate that combining commercial high-resolution imagery with physics-informed feature engineering and knowledge transfer enables scalable and operationally relevant monitoring of lake phosphorus dynamics. These high-resolution PPUT maps enable lake managers to identify nearshore nutrient hotspots and tributary plume structures. In doing so, the proposed framework supports targeted field sampling, early warning for eutrophication events, and more robust, lake-wide nutrient budgeting. Full article
(This article belongs to the Section Water Quality and Contamination)
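The knowledge-distillation step described above — an SSD-aware teacher supervising an SSD-free student — can be sketched in miniature. The linear "teacher" below is hypothetical (the paper uses ensemble learners and 40 predictors), but the pattern is the same: the student is fit to the teacher's predictions, not to ground-truth labels:

```python
def fit_line(x, y):
    """Univariate ordinary least squares: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def teacher(reflectance, ssd):
    """Hypothetical SSD-aware teacher: PPUT rises with reflectance and
    falls with water clarity (coefficients are illustrative only)."""
    return 5.0 + 30.0 * reflectance - 2.0 * ssd

# Toy training scenes: Secchi depth co-varies with surface reflectance.
reflectance = [0.1, 0.2, 0.3, 0.4, 0.5]
ssd = [4.0, 3.5, 3.0, 2.5, 2.0]

# The SSD-free student regresses the teacher's outputs (soft targets)
# on reflectance alone -- it never sees SSD or ground-truth PPUT.
soft_targets = [teacher(r, s) for r, s in zip(reflectance, ssd)]
slope, intercept = fit_line(reflectance, soft_targets)
student = lambda r: slope * r + intercept
```

Here SSD is exactly collinear with reflectance, so the student recovers the teacher perfectly; in practice the student retains only part of the teacher's skill (≈88% in the paper).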

18 pages, 560 KB  
Review
Melanoma in Primary Care: A Narrative Review of Training Interventions and the Role of Telemedicine in Medical Education
by Ignazio Stanganelli, Edoardo Mora, Debora Cantagalli, Serena Magi, Laura Mazzoni, Matelda Medri, Cesare Massone, Davide Melandri, Federica Zamagni, Ines Zanna, Gianluca Pistore, Saverio Caini, Salvatore Amato, Vincenzo De Giorgi, Pietro Quaglino, Maria Antonietta Pizzichetta, Giovanni Luigi Tripepi, Giorgia Ravaglia and Sofia Spagnolini
Curr. Oncol. 2025, 32(9), 522; https://doi.org/10.3390/curroncol32090522 - 18 Sep 2025
Viewed by 1314
Abstract
General practitioners play a crucial role in the early detection and prevention of cutaneous melanoma. However, structured training on skin cancer diagnosis and management is often lacking. This narrative review aims to map the current educational interventions for general practitioners focused on melanoma, assess their methodological approaches and outcomes, and explore the contribution of e-learning and telemedicine in medical education. A comprehensive literature search identified 54 relevant studies published between 1 January 1995 and 31 December 2024. Data were extracted and categorized by topics covered, training methodology, interactivity, and clinical outcomes. Training programs varied widely in duration, delivery, and content. Interventions that integrated dermoscopy and interactive methodologies demonstrated improved diagnostic accuracy and clinical impact. E-learning, particularly asynchronous models, emerged as a flexible and effective modality, although few studies evaluated long-term retention or clinical practice changes. Educational programs tailored to general practitioners and enriched with dermoscopy and telemedicine tools show promise in improving melanoma detection and care. Structured, interactive, and blended/hybrid learning models should be prioritized to support effective primary and secondary prevention. Full article
(This article belongs to the Special Issue Advances in Melanoma: From Pathogenesis to Personalized Therapy)

22 pages, 8021 KB  
Article
Multi-Task Semi-Supervised Approach for Counting Cones in Adaptive Optics Images
by Vidya Bommanapally, Amir Akhavanrezayat, Parvathi Chundi, Quan Dong Nguyen and Mahadevan Subramaniam
Algorithms 2025, 18(9), 552; https://doi.org/10.3390/a18090552 - 2 Sep 2025
Viewed by 796
Abstract
Counting and density estimation of cone cells using adaptive optics (AO) imaging plays an important role in the clinical management of retinal diseases. A novel deep learning approach for the cone counting task with minimal manual labeling of cone cells in AO images is described in this paper. We propose a hybrid multi-task semi-supervised learning (MTSSL) framework that simultaneously trains on unlabeled and labeled data. On the unlabeled images, the model learns structural and relational features by employing two self-supervised pretext tasks—image inpainting (IP) and learning-to-rank (L2R). At the same time, it leverages a small set of labeled examples to supervise a density estimation head for cone counting. By jointly minimizing the image reconstruction loss, the ranking loss, and the supervised density-map loss, our approach harnesses the rich information in unlabeled data to learn feature representations and directly incorporates ground-truth annotations to guide accurate density prediction and counts. Experiments were conducted on a dataset of AO images of 120 subjects captured using a retinal camera (rtx1) with a wide field of view. MTSSL draws strength from its hybrid generative and predictive self-supervised pretext tasks, which aid in learning the global and local context required for counting cones. The results show that the proposed MTSSL approach significantly outperforms the individual self-supervised pipelines, improving the RMSE for cone counting by a factor of 2. Full article
(This article belongs to the Special Issue Advanced Machine Learning Algorithms for Image Processing)
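In density-map supervision, as used above, each annotated cell contributes a Gaussian kernel that integrates to one, so the predicted count is simply the sum of the density map. A minimal pure-Python sketch of building such a target map (not the paper's pipeline):

```python
import math

def density_map(points, height, width, sigma=1.0):
    """Accumulate one normalized 2D Gaussian kernel per annotated point,
    so the resulting map sums to the number of points."""
    dmap = [[0.0] * width for _ in range(height)]
    for py, px in points:
        weights = [[math.exp(-((y - py) ** 2 + (x - px) ** 2)
                             / (2.0 * sigma ** 2))
                    for x in range(width)] for y in range(height)]
        total = sum(sum(row) for row in weights)
        for y in range(height):
            for x in range(width):
                dmap[y][x] += weights[y][x] / total
    return dmap

def count(dmap):
    """Predicted object count: the integral (sum) of the density map."""
    return sum(sum(row) for row in dmap)
```

The regression head is trained to reproduce such maps; summing its output then yields the cone count directly.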

26 pages, 656 KB  
Review
Advancing Flood Detection and Mapping: A Review of Earth Observation Services, 3D Data Integration, and AI-Based Techniques
by Tommaso Destefanis, Sona Guliyeva, Piero Boccardo and Vanina Fissore
Remote Sens. 2025, 17(17), 2943; https://doi.org/10.3390/rs17172943 - 25 Aug 2025
Cited by 2 | Viewed by 6136
Abstract
Floods are among the most frequent and damaging hazards worldwide, with impacts intensified by climate change and rapid urban growth. This review analyzes how satellite-based Earth Observation (EO) technologies are evolving to meet operational needs in flood detection and water depth estimation, with a focus on the Copernicus Emergency Management Service (CEMS) as a mature and widely adopted European framework. We compare the capabilities of conventional EO datasets—optical and Synthetic Aperture Radar (SAR)—with 3D geospatial datasets such as high-resolution Digital Elevation Models (DEMs) and Light Detection and Ranging (LiDAR). While 2D EO imagery is essential for rapid surface water mapping, 3D datasets add volumetric context, enabling improved flood depth estimation and urban impact assessment. LiDAR, in particular, can capture microtopography between high-rise structures, but its operational use is constrained by cost, data availability, and update frequency. We also review how artificial intelligence (AI), including machine learning and deep learning, is enhancing automation, generalization, and near-real-time processing in flood mapping. Persistent gaps remain in model transferability, uncertainty quantification, and the integration of scarce high-resolution topographic data. We conclude by outlining a roadmap towards hybrid frameworks that combine EO observations, 3D datasets, and physics-informed AI, bridging the gap between current technological capabilities and the demands of real-world emergency management. Full article

15 pages, 1021 KB  
Article
Fine Mapping of Quantitative Trait Loci (QTL) with Resistance to Common Scab in Diploid Potato and Development of Effective Molecular Markers
by Guoqiang Wu and Guanghui Jin
Agronomy 2025, 15(7), 1527; https://doi.org/10.3390/agronomy15071527 - 24 Jun 2025
Viewed by 1249
Abstract
Potato common scab is one of the major diseases posing a threat to potato production on a global scale. No chemical agents have been found to effectively control the occurrence of this disease, and research on the identification of resistance genes and the development of molecular markers remains relatively limited. In this study, a diploid potato variety H535, which exhibits resistance to the predominant pathogen Streptomyces scabies, was utilized as the male parent, whereas the susceptible diploid potato variety H012 served as the female parent. Building upon the resistance QTL intervals pinpointed through a genome-wide association study, two potential resistance loci were localized on chromosome 2 of the potato genome, spanning the regions between 38–38.6 Mb and 41.3–42.7 Mb. These intervals accounted for 18.03% of the total phenotypic variance and are presumed to be the primary QTLs underlying scab resistance. Building upon this foundation, we expanded the hybrid progeny population, conducted resistance assessments, selected individuals with extreme phenotypes, developed molecular markers, and conducted fine mapping of the resistance gene. A phenotypic evaluation of scab resistance was carried out using a pot-based inoculation test on 175 potato hybrid progenies to characterize the F1 generation population. Twenty lines exhibiting high resistance and thirty lines displaying high susceptibility were selected for investigations. Within the preliminary mapping interval on potato chromosome 2 (spanning 38–43 Mb), a total of 214 SSR (Simple Sequence Repeat) and 133 InDel (Insertion/Deletion) primer pairs were designed. Initial screening with parental lines identified 18 polymorphic markers (8 SSR and 10 InDel) that demonstrated stable segregation patterns. Validation using bulked segregant analysis revealed that 3 SSR markers (with 70–90% linkage) and 6 InDel markers (with 70–90% linkage) exhibited significant co-segregation with the resistance trait. 
A high-density genetic linkage map spanning 104.59 cM was constructed using 18 polymorphic markers, with an average marker spacing of 5.81 cM. Through linkage analysis, the resistance locus was precisely mapped to a 767 kb interval (41.33–42.09 Mb) on potato chromosome 2, flanked by SSR-2-9 and InDel-3-9. Within this refined interval, four candidate disease resistance genes were identified: RHC02H2G2507, RHC02H2G2515, PGSC0003DMG400030643, and PGSC0003DMG400030661. This study offers novel insights into the genetic architecture underlying scab resistance in potato. The high-resolution mapping results and characterized markers will facilitate marker-assisted selection (MAS) in disease resistance breeding programs, providing an efficient strategy for developing cultivars with enhanced resistance to Streptomyces scabies. Full article
(This article belongs to the Section Crop Breeding and Genetics)
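Linkage-map distances such as those above are expressed in centimorgans (cM). The abstract does not state which mapping function was used; the classic Haldane function, shown here purely as a generic example, converts an observed recombination fraction into cM under the assumption of no crossover interference:

```python
import math

def haldane_cM(r):
    """Haldane mapping function: recombination fraction r in [0, 0.5) to
    map distance in centimorgans, assuming no crossover interference."""
    if not 0.0 <= r < 0.5:
        raise ValueError("recombination fraction must lie in [0, 0.5)")
    return -50.0 * math.log(1.0 - 2.0 * r)

def haldane_r(d):
    """Inverse Haldane function: map distance in cM back to a
    recombination fraction."""
    return 0.5 * (1.0 - math.exp(-d / 50.0))
```

For tightly linked markers (small r), the distance is approximately 100·r cM, which is why percent recombination and cM are often used interchangeably at short range.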

19 pages, 885 KB  
Entry
Origins, Styles, and Applications of Text Analytics in Social Science Research
by Konstantinos Zougris
Encyclopedia 2025, 5(2), 70; https://doi.org/10.3390/encyclopedia5020070 - 26 May 2025
Viewed by 2912
Definition
Textual analysis is grounded in conceptual schemes of traditional qualitative and quantitative content analysis techniques that have led to the hybridization of methodological styles widely used across social scientific fields. This paper delivers an extensive review of the origins and evolution of text analysis within the domains of traditional content analysis. Emphasis is given to the conceptual schemas and operational structure of latent semantic analysis, and its capacity to detect topical clusters of large corpora. Further, I describe the operations of Entity–Aspect Sentiment Analysis which are designed to measure and assess sentiments/opinions within specific contextual domains of textual data. Then, I conceptualize and elaborate on the potential of streamlining latent semantic and Entity–Aspect Sentiment Analysis complemented by Correspondence Analysis, generating an integrated operational scheme that would detect the topic structure, assess the contextual sentiment/opinion for each detected topic, test for statistical dependence of sentiments/opinions across topical domains, and graphically display conceptual maps of sentiments in topics space. Full article
(This article belongs to the Collection Encyclopedia of Social Sciences)
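Latent semantic analysis, central to the entry above, extracts topical structure from the singular vectors of a term-document matrix. A self-contained sketch using power iteration (a stand-in for a full SVD) on a toy two-topic corpus; the matrix and its values are illustrative only:

```python
def transpose(m):
    return [list(col) for col in zip(*m)]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def leading_doc_loadings(A, iters=200):
    """Power iteration on A^T A for the dominant right singular vector of a
    term-document matrix A: document loadings on the first latent dimension."""
    AT = transpose(A)
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = matvec(AT, matvec(A, v))
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy term-document matrix: terms t1, t2 dominate documents d1, d2;
# terms t3, t4 appear only in d3, d4.
A = [
    [2, 2, 0, 0],   # t1
    [2, 2, 0, 0],   # t2
    [0, 0, 1, 1],   # t3
    [0, 0, 1, 1],   # t4
]
loadings = leading_doc_loadings(A)
```

The first latent dimension loads on d1 and d2 and ignores d3 and d4, i.e. it recovers the stronger of the two topical clusters; deflating and repeating would recover the second.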

35 pages, 8298 KB  
Article
Customer Churn Prediction Based on Coordinate Attention Mechanism with CNN-BiLSTM
by Chaojie Yang, Guoen Xia, Liying Zheng, Xianquan Zhang and Chunqiang Yu
Electronics 2025, 14(10), 1916; https://doi.org/10.3390/electronics14101916 - 8 May 2025
Viewed by 2585
Abstract
Due to increased competition in the marketplace, companies in all industries are facing the problem of customer attrition. In order to expand their market share and increase profits, companies have shifted from the concept of ‘acquiring new customers’ to ‘retaining old customers’. In this study, we design a deep learning model based on multi-network feature extraction and an attention mechanism, convolutional neural network–bidirectional long short-term memory network–fully connected layer–coordinate attention (CNN-BiLSTM-FC-CoAttention), and apply it to customer churn risk assessment. In the data preprocessing stage, the imbalanced dataset was processed using the SMOTE-ENN hybrid sampling method. In the feature extraction stage, a sequence-based CNN and time-based BiLSTM are combined to extract the local and time series features of the customer data. In the feature transformation stage, high-level features are extracted using a fully connected layer of 64 ReLU neurons and the sequence features are reshaped into matrix features. In the attention enhancement stage, the extracted feature information is refined using a coordinate attention learning module to fully learn the channel and spatial location information of the feature map. To evaluate the performance of the proposed model, we include public datasets from the telecom, banking, and insurance industries for ten-fold cross-validation experiments, and the results show that the CNN-BiLSTM-FC-CoAttention model outperforms the comparison models in all metrics. Our proposed model improves the accuracy and generalisation of the model prediction by combining multiple algorithms, enabling it to be widely used in multiple industries. As a result, the model gives enterprises a better and more general decision-making reference for the timely identification of potential churn customers. Full article
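SMOTE, the oversampling half of the SMOTE-ENN preprocessing step above, synthesizes minority-class samples by interpolating between a sample and one of its k nearest minority neighbors (ENN then removes samples that disagree with their neighborhood; not shown here). A minimal sketch, not the authors' implementation:

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """SMOTE-style oversampling: each synthetic sample is a random point on
    the segment between a minority sample and one of its k nearest
    minority neighbors (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbors = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(p, base)),
        )[:k]
        pick = rng.choice(neighbors)
        t = rng.random()
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(base, pick)))
    return synthetic
```

Because synthetic points lie on segments between real minority samples, they never leave the minority class's convex hull — which is also why the ENN cleaning pass is useful near class boundaries.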

15 pages, 3190 KB  
Article
ChatGPT in Education: Challenges in Local Knowledge Representation of Romanian History and Geography
by Alexandra Ioanid and Nistor Andrei
Educ. Sci. 2025, 15(4), 511; https://doi.org/10.3390/educsci15040511 - 18 Apr 2025
Cited by 1 | Viewed by 2758
Abstract
The integration of AI tools like ChatGPT in education has sparked debates on their benefits and limitations, particularly in subjects requiring region-specific knowledge. This study examines ChatGPT’s ability to generate accurate and contextually rich responses to assignments in Romanian history and geography, focusing on topics with limited digital representation. Using a document-based analysis, this study compared ChatGPT’s responses to local archival sources, monographs, and topographical maps, assessing coverage, accuracy, and local nuances. Findings indicate significant factual inaccuracies, including misidentified Dacian tribes, incorrect historical sources, and geographic errors such as misplaced landmarks, elevation discrepancies, and incorrect infrastructure details. ChatGPT’s reliance on widely digitized sources led to omissions of localized details, highlighting a fundamental limitation when applied to non-digitized historical and geographic topics. These results suggest that while ChatGPT can be a useful supplementary tool, its outputs require careful verification by educators to prevent misinformation. Future research should explore strategies to improve AI-generated educational content, including better integration of regional archives and AI literacy training for students and teachers. The study underscores the need for hybrid AI-human approaches in education, ensuring that AI-generated text complements rather than replaces verified academic sources. Full article

26 pages, 5355 KB  
Article
Orbital Design Optimization for Large-Scale SAR Constellations: A Hybrid Framework Integrating Fuzzy Rules and Chaotic Sequences
by Dacheng Liu, Yunkai Deng, Sheng Chang, Mengxia Zhu, Yusheng Zhang and Zixuan Zhang
Remote Sens. 2025, 17(8), 1430; https://doi.org/10.3390/rs17081430 - 17 Apr 2025
Cited by 1 | Viewed by 1617
Abstract
Synthetic Aperture Radar (SAR) constellations have become a key technology for disaster monitoring, terrain mapping, and ocean surveillance due to their all-weather and high-resolution imaging capabilities. However, the design of large-scale SAR constellations faces multi-objective optimization challenges, including short revisit cycles, wide coverage, high-performance imaging, and cost-effectiveness. Traditional optimization methods, such as genetic algorithms, suffer from issues like parameter dependency, slow convergence, and the complexity of multi-objective trade-offs. To address these challenges, this paper proposes a hybrid optimization framework that integrates chaotic sequence initialization and fuzzy rule-based decision mechanisms to solve high-dimensional constellation design problems. The framework generates the initial population using chaotic mapping, adaptively adjusts crossover strategies through fuzzy logic, and achieves multi-objective optimization via a weighted objective function. The simulation results demonstrate that the proposed method outperforms traditional algorithms in optimization performance, convergence speed, and robustness. Specifically, the average fitness value of the proposed method across 20 independent runs improved by 40.47% and 35.48% compared to roulette wheel selection and tournament selection, respectively. Furthermore, parameter sensitivity analysis and robustness experiments confirm the stability and superiority of the proposed method under varying parameter configurations. This study provides an efficient and reliable solution for the orbital design of large-scale SAR constellations, offering significant engineering application value. Full article
(This article belongs to the Special Issue Advanced HRWS Spaceborne SAR: System Design and Signal Processing)
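The chaotic-sequence initialization described in the abstract can be sketched with a logistic map; this is an assumption, since the abstract does not name the chaotic map used, and `chaotic_population` with its parameters is purely illustrative:

```python
import numpy as np

def chaotic_population(pop_size, dim, lower, upper, r=4.0, x0=0.7):
    """Initialize a GA population from a logistic-map chaotic sequence
    instead of uniform random sampling, for better search-space coverage."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    seq = np.empty(pop_size * dim)
    x = x0                                  # chaotic state in (0, 1)
    for k in range(seq.size):
        x = r * x * (1.0 - x)               # logistic map iteration
        seq[k] = x
    # Scale each chaotic value into the per-dimension search bounds.
    return lower + seq.reshape(pop_size, dim) * (upper - lower)

pop = chaotic_population(pop_size=20, dim=6, lower=[0.0] * 6, upper=[1.0] * 6)
```

At `r = 4.0` the logistic map is fully chaotic on (0, 1), so consecutive iterates spread across the unit interval without the clustering a poor pseudo-random seed can produce.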
16 pages, 1844 KB  
Article
Exploring the Potential of Optical Genome Mapping in the Diagnosis and Prognosis of Soft Tissue and Bone Tumors
by Alejandro Berenguer-Rubio, Esperanza Such, Neus Torres Hernández, Paula González-Rojo, Álvaro Díaz-González, Gayane Avetisyan, Carolina Gil-Aparicio, Judith González-López, Nicolay Pantoja-Borja, Luis Alberto Rubio-Martínez, Soraya Hernández-Girón, María Soledad Valera-Cuesta, Cristina Ramírez-Fuentes, María Simonet-Redondo, Roberto Díaz-Beveridge, Carolina de la Calva, José Vicente Amaya-Valero, Cristina Ballester-Ibáñez, Alessandro Liquori, Francisco Giner and Empar Mayordomo-Aranda
Int. J. Mol. Sci. 2025, 26(6), 2820; https://doi.org/10.3390/ijms26062820 - 20 Mar 2025
Cited by 1 | Viewed by 1909
Abstract
Sarcomas are rare malignant tumors of mesenchymal origin with a high misdiagnosis rate due to their heterogeneity and low incidence. Conventional diagnostic techniques, such as Fluorescence In Situ Hybridization (FISH) and Next-Generation Sequencing (NGS), have limitations in detecting structural variations (SVs), copy number variations (CNVs), and predicting clinical behavior. Optical genome mapping (OGM) provides high-resolution genome-wide analysis, improving sarcoma diagnosis and prognosis assessment. This study analyzed 53 sarcoma samples using OGM. Ultra-high molecular weight (UHMW) DNA was extracted from core and resection biopsies, and data acquisition was performed with the Bionano Saphyr platform. Bioinformatic pipelines identified structural variations, comparing them with known alterations for each sarcoma subtype. OGM successfully analyzed 62.3% of samples. Diagnostic-defining alterations were found in 95.2% of cases, refining diagnoses and revealing novel oncogenic and tumor suppressor gene alterations. The challenges included DNA extraction and quality issues from some tissue samples. Despite these limitations, OGM proved to be a powerful diagnostic and predictive tool for bone and soft tissue sarcomas, surpassing conventional methods in resolution and scope, enhancing the understanding of sarcoma genetics, and enabling better patient stratification and personalized therapies. Full article
(This article belongs to the Special Issue Cancer Diagnosis and Treatment: Exploring Molecular Research)
26 pages, 5126 KB  
Article
Deep Reinforcement Learning-Based Impact Angle-Constrained Adaptive Guidance Law
by Zhe Hu, Wenjun Yi and Liang Xiao
Mathematics 2025, 13(6), 987; https://doi.org/10.3390/math13060987 - 17 Mar 2025
Cited by 1 | Viewed by 1694
Abstract
This study presents an advanced second-order sliding-mode guidance law with a terminal impact angle constraint, which combines reinforcement learning algorithms with the nonsingular terminal sliding-mode control (NTSM) theory. This hybrid approach effectively mitigates the chattering issue inherent in sliding-mode control while maintaining high control precision. We introduce a parameter to the super-twisting algorithm and subsequently develop an intelligent parameter-adaptive algorithm grounded in the Twin-Delayed Deep Deterministic Policy Gradient (TD3) framework. During the guidance phase, a pre-trained reinforcement learning model directly maps the missile’s state variables to the optimal adaptive parameters, significantly enhancing guidance performance. Additionally, a generalized super-twisting extended state observer (GSTESO) is introduced to estimate and compensate for the lumped uncertainty in the missile guidance system. This method requires no prior information about the target’s maneuvers, enabling the proposed guidance law to intercept maneuvering targets with unknown acceleration. The finite-time stability of the closed-loop guidance system is confirmed using the Lyapunov stability criterion. Simulations demonstrate that the proposed guidance law not only meets a wide range of impact angle constraints but also attains higher interception accuracy, a faster convergence rate, and better overall performance than the traditional NTSM and super-twisting NTSM (ST-NTSM) guidance laws. The interception accuracy is less than 0.1 m, and the impact angle error is less than 0.01°. Full article
(This article belongs to the Section E2: Control Theory and Mechanics)
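The super-twisting algorithm at the core of this guidance law can be illustrated on a scalar sliding variable. This is a generic sketch, not the paper's parameter-adaptive variant; the gains, disturbance, and step size below are illustrative assumptions:

```python
import numpy as np

def super_twisting_step(s, v, k1, k2, dt):
    """One explicit-Euler step of the super-twisting controller:
    u = -k1*|s|^(1/2)*sign(s) + v,  v_dot = -k2*sign(s)."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v = v - k2 * np.sign(s) * dt
    return u, v

# Drive a scalar sliding variable to zero despite a bounded, smooth
# matched disturbance (closed loop: s_dot = u + d).
s, v, dt = 1.0, 0.0, 1e-3
for i in range(20000):                      # 20 s of simulated time
    u, v = super_twisting_step(s, v, k1=1.5, k2=1.1, dt=dt)
    d = 0.5 * np.sin(0.001 * i)             # disturbance with |d_dot| <= 0.5
    s += (u + d) * dt
```

Because the discontinuous `sign(s)` term enters only through the integrator state `v`, the control signal `u` itself is continuous, which is why super-twisting variants reduce the chattering of first-order sliding-mode laws.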
17 pages, 1314 KB  
Article
A Systems Biology Approach for Prioritizing ASD Genes in Large or Noisy Datasets
by Veronica Remori, Heather Bondi, Manuel Airoldi, Lisa Pavinato, Giulia Borini, Diana Carli, Alfredo Brusco and Mauro Fasano
Int. J. Mol. Sci. 2025, 26(5), 2078; https://doi.org/10.3390/ijms26052078 - 27 Feb 2025
Cited by 2 | Viewed by 1604
Abstract
Autism spectrum disorder (ASD) is a complex multifactorial neurodevelopmental disorder. Despite extensive research involving genome-wide association studies, copy number variant (CNV) testing, and genome sequencing, the comprehensive genetic landscape remains incomplete. In this context, we developed a systems biology approach to prioritize genes associated with ASD and uncover potential new candidates. A Protein–Protein Interaction (PPI) network was generated from genes associated with ASD in a public database. Leveraging gene topological properties, particularly betweenness centrality, we prioritized genes and unveiled potential novel candidates (e.g., CDC5L, RYBP, and MEOX2). To test this approach, a list of genes within CNVs of unknown significance, identified through array comparative genomic hybridization analysis in 135 ASD patients, was mapped onto the PPI network. A prioritized gene list was obtained by ranking on betweenness centrality score. Intriguingly, over-representation analysis revealed significant enrichments in pathways not strictly linked to ASD, including ubiquitin-mediated proteolysis and cannabinoid receptor signaling, suggesting their potential perturbation in ASD. Our systems biology approach provides a promising strategy for identifying ASD risk genes, especially in large and noisy datasets, and contributes to a deeper understanding of the disorder’s complex genetic basis. Full article
(This article belongs to the Section Molecular Genetics and Genomics)
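Ranking network nodes by betweenness centrality, as done for the PPI network above, can be sketched with Brandes' algorithm. The toy graph below uses generic labels, not real gene interactions:

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted, undirected
    graph given as {node: [neighbors]}. Unnormalized; for undirected
    graphs each node pair is counted in both directions."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        pred = {v: [] for v in adj}               # shortest-path predecessors
        sigma = {v: 0 for v in adj}; sigma[s] = 1  # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                               # BFS from source s
            v = queue.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                               # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Toy 5-node "PPI" path graph: the middle node mediates the most
# shortest paths and therefore ranks first.
ppi = {"g1": ["g2"], "g2": ["g1", "g3"], "g3": ["g2", "g4"],
       "g4": ["g3", "g5"], "g5": ["g4"]}
ranking = sorted(ppi, key=betweenness(ppi).get, reverse=True)
```

High-betweenness nodes sit on many shortest paths between other proteins, which is why they serve as candidates for functionally central genes even when their individual associations are weak.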