Search Results (1,637)

Search Parameters:
Keywords = expert classification

24 pages, 2569 KB  
Article
Attribution-Driven Teaching Interventions: Linking I-AHP Weighted Assessment to Explainable Student Clustering
by Yanzheng Liu, Xuan Yang, Ying Zhu, Jin Wang, Mi Zuo, Lei Yang and Lingtong Sun
Algorithms 2025, 18(11), 691; https://doi.org/10.3390/a18110691 - 1 Nov 2025
Abstract
Student course performance evaluation serves as a critical pedagogical tool for diagnosing learning gaps and enhancing educational outcomes, yet conventional assessments often suffer from rigid single-metric scoring and ambiguous causality. This study proposes an integrated analytic framework addressing these limitations by synergizing pedagogical expertise with data-driven diagnostics through four key measures: (1) Interval Analytic Hierarchy Process (I-AHP) to derive criterion weights reflecting instructional priorities via expert judgment; (2) K-means clustering to objectively stratify students into performance cohorts based on multidimensional metrics; (3) Random Forest classification and SHAP value analysis to quantitatively identify key discriminators of cluster membership and interpret decision boundaries; and (4) attribution-guided interventions targeting cohort-specific deficiencies. Leveraging a dual-channel ecosystem across pre-class, in-class, and post-class phases, we established a hierarchical evaluation system where I-AHP weighted pedagogical sub-criteria to generate comprehensive student scores.
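As a concrete illustration of the cluster-then-explain pipeline this abstract names (weighted scores, K-means cohorts, Random Forest plus SHAP attribution), here is a minimal sketch assuming scikit-learn and the shap package; the weight vector and score matrix are illustrative stand-ins, not values from the paper.

```python
# Hedged sketch: I-AHP-style weights -> K-means cohorts -> RF + SHAP attribution.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
import shap

rng = np.random.default_rng(0)
scores = rng.random((200, 5))                 # 200 students x 5 sub-criteria (stand-in data)
w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])  # assumed I-AHP criterion weights, not the paper's

weighted = scores * w                          # apply expert-derived weights
cohorts = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(weighted)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(scores, cohorts)
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(scores)    # per-feature attribution of cluster membership
```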
26 pages, 5481 KB  
Article
MCP-X: An Ultra-Compact CNN for Rice Disease Classification in Resource-Constrained Environments
by Xiang Zhang, Lining Yan, Belal Abuhaija and Baha Ihnaini
AgriEngineering 2025, 7(11), 359; https://doi.org/10.3390/agriengineering7110359 - 1 Nov 2025
Abstract
Rice, a dietary staple for over half of the global population, is highly susceptible to bacterial and fungal diseases such as bacterial blight, brown spot, and leaf smut, which can severely reduce yields. Traditional manual detection is labor-intensive and often results in delayed intervention and excessive chemical use. Although deep learning models like convolutional neural networks (CNNs) achieve high accuracy, their computational demands hinder deployment in resource-limited agricultural settings. We propose MCP-X, an ultra-compact CNN with only 0.21 million parameters for real-time, on-device rice disease classification. MCP-X integrates a shallow encoder, multi-branch expert routing, a bi-level recurrent simulation encoder–decoder (BRSE), an efficient channel attention (ECA) module, and a lightweight classifier. Trained from scratch, MCP-X achieves 98.93% accuracy on PlantVillage and 96.59% on the Rice Disease Detection Dataset, without external pretraining. Mechanistically, expert routing diversifies feature branches, ECA enhances channel-wise signal relevance, and BRSE captures lesion-scale and texture cues—yielding complementary, stage-wise gains confirmed through ablation studies. Despite slightly higher FLOPs than MobileNetV2, MCP-X prioritizes a minimal memory footprint (~1.01 MB) and deployability over raw speed, running at 53.83 FPS (2.42 GFLOPs) on an RTX A5000. It uses 16.7×, 287×, 420×, and 659× fewer parameters than MobileNetV2, ResNet152V2, ViT-Base, and VGG-16, respectively. When integrated into a multi-resolution ensemble, MCP-X attains 99.85% accuracy, demonstrating exceptional robustness across controlled and field datasets while maintaining efficiency for real-world agricultural applications.
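The efficient channel attention (ECA) module named in this abstract is a published, lightweight building block; the PyTorch sketch below shows its general form, with the kernel size and tensor shapes chosen for illustration rather than taken from MCP-X.

```python
# Generic ECA block: squeeze spatially, attend over channels with a 1D conv, re-scale.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):                              # x: (B, C, H, W)
        y = self.pool(x).squeeze(-1).transpose(1, 2)   # (B, 1, C): channels as a sequence
        y = self.gate(self.conv(y)).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * y                                   # reweight channels

feats = torch.randn(2, 64, 32, 32)
print(ECA()(feats).shape)                              # torch.Size([2, 64, 32, 32])
```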

10 pages, 236 KB  
Review
A Comprehensive Review of 3D Imaging and Printing in Proximal Humerus Fractures and Sequelae
by Roberto de Giovanni, Martina Coppola, Valentina Rossi, Massimo Mariconda and Andrea Cozzolino
J. Clin. Med. 2025, 14(21), 7711; https://doi.org/10.3390/jcm14217711 - 30 Oct 2025
Abstract
Proximal humerus fractures are common and complex; despite advances, malunion, nonunion, and osteonecrosis remain concerns. Three-dimensional (3D) imaging/printing has emerged to improve classification, planning, and execution, especially in displaced patterns. Methods: Multiple databases were searched using predefined terms (“proximal humerus fractures/sequelae”, “three-dimensional”, and “3D printing”). Inclusion criteria targeted human longitudinal studies (retrospective/prospective) on 3D-assisted fracture or sequela management; expert opinions, prior reviews, and letters to the editor were excluded. Data extracted included the design, the level of evidence (LoE), the sample size, 3D application (diagnostic, planning, intraoperative, and combined), outcomes, follow-up, and complications. Results: Nineteen studies were included (fourteen on fractures and five on sequelae; 636 and 28 patients, respectively). In fractures, 3D imaging was used chiefly for preoperative planning (57.1%) and diagnostic support (35.7%); no intraoperative PSI was reported. In sequelae, intraoperative/PSI use dominated (100%), with planning in 80% and combined uses in 80%. Fracture studies were mostly retrospective (50.0%; LoE III 78.6%), while all sequelae studies were LoE IV–V (60% case reports). Standardized outcomes were reported in 42.1% of studies; follow-up was available in 42.1% (means ≈ 18 months). Complications occurred in 14.3% of fracture studies and in none of the sequelae. Conclusions: Three-dimensional printing is primarily applied for planning in fractures and intraoperative guidance in sequelae. While feasibility and potential perioperative benefits are evident, small heterogeneous cohorts and limited outcome reporting warrant larger prospective studies with standardized endpoints.
(This article belongs to the Special Issue Recent Advances in the Management of Fractures)
42 pages, 8656 KB  
Article
Artificial Intelligence-Based Architectural Design (AIAD): An Influence Mechanism Analysis for the New Technology Using the Hybrid Multi-Criteria Decision-Making Framework
by Xinliang Wang, Yafei Zhao, Wenlong Zhang, Yang Li, Xuepeng Shi, Rong Xia, Yanjun Su, Xiaoju Li and Xiang Xu
Buildings 2025, 15(21), 3898; https://doi.org/10.3390/buildings15213898 - 28 Oct 2025
Abstract
Artificial Intelligence (AI) has emerged as a transformative force in the field of architectural design. This study aims to systematically analyze the influence mechanisms of Artificial Intelligence-based Architectural Design (AIAD) by constructing a comprehensive hybrid model that integrates the Analytic Hierarchy Process (AHP), Decision-Making Trial and Evaluation Laboratory (DEMATEL), Interpretive Structural Modeling (ISM), and Cross-Impact Matrix Multiplication Applied to Classification (MICMAC). Based on a previous quantitative literature review, 6 primary categories and 18 secondary influencing factors were identified. Data were collected from a panel of fifteen experts representing the architecture industry, academia, and computer science. Through weighting analysis, causal mapping, hierarchical structuring, and driving–dependence classification, the study clarifies the complex interrelationships among influencing factors and reveals the underlying drivers that accelerate or constrain AI adoption in architectural design. By quantifying the hierarchical and causal influence of factors, this research provides theoretical findings and practical insights for design firms undergoing digital transformation. The results extend previous meta-analytical studies, offering a decision-support system that bridges academic research and real-world applications, thereby guiding stakeholders toward informed adoption of artificial intelligence for future cultural tourism development and regional spatial innovation.
(This article belongs to the Special Issue Artificial Intelligence in Architecture and Interior Design)
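As a reference point for the AHP stage of such hybrid frameworks, this sketch derives criterion weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the 3×3 matrix is a made-up example, not the study's expert data.

```python
# Generic AHP weighting step: principal eigenvector + consistency ratio check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])               # illustrative pairwise comparisons

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print(w, ci / ri)                             # weights and consistency ratio (want CR < 0.1)
```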

17 pages, 11184 KB  
Article
Automated Crack Detection in Micro-CT Scanning for Fiber-Reinforced Concrete Using Super-Resolution and Deep Learning
by João Pedro Gomes de Souza, Aristófanes Corrêa Silva, Marcello Congro, Deane Roehl, Anselmo Cardoso de Paiva, Sandra Pereira and António Cunha
Electronics 2025, 14(21), 4208; https://doi.org/10.3390/electronics14214208 - 28 Oct 2025
Abstract
Fiber-reinforced concrete is a crucial material for civil construction, and monitoring its health is important for preserving structures and preventing accidents and financial losses. Among non-destructive monitoring methods, Micro Computed Tomography (Micro-CT) imaging stands out as an inexpensive method that is free from noise and external interference. However, manual inspection of these images is subjective and requires significant human effort. In recent years, several studies have successfully utilized Deep Learning models for the automatic detection of cracks in concrete. However, according to the literature, a gap remains in the context of detecting cracks using Micro-CT images of fiber-reinforced concrete. Therefore, this work proposes a framework for automatic crack detection that combines the following: (a) super-resolution-based preprocessing to generate, for each image, versions with double and quadruple the original resolution, (b) a classification step using EfficientNetB0 to classify the type of concrete matrix, (c) specific training of Detection Transformer (DETR) models for each type of matrix and resolution, and (d) a voting committee-based post-processing step among the models trained for each resolution to reduce false positives. The model was trained on a new publicly available dataset, the FIRECON dataset, which consists of 4064 images annotated by an expert, achieving metrics of 86.098% Intersection over Union, 89.37% Precision, 83.26% Recall, 84.99% F1-Score, and 44.69% Average Precision. The framework, therefore, significantly reduces analysis time and improves consistency compared to the manual methods used in previous studies. The results demonstrate the potential of Deep Learning to aid image analysis in damage assessments, providing valuable insights into the damage mechanisms of fiber-reinforced concrete and contributing to the development of durable, high-performance engineering materials.
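A voting committee across per-resolution detectors can be realized in several ways; the sketch below keeps a box only when detectors at a minimum number of resolutions propose an overlapping box. The IoU threshold, vote count, and box format are assumptions, not the paper's settings.

```python
# Hedged sketch of a cross-resolution committee vote over detected boxes.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def committee_filter(per_res_boxes, min_votes=2, thr=0.5):
    kept = []
    for i, boxes in enumerate(per_res_boxes):
        for box in boxes:
            votes = 1 + sum(                     # this detector plus agreeing ones
                any(iou(box, other) >= thr for other in per_res_boxes[j])
                for j in range(len(per_res_boxes)) if j != i
            )
            if votes >= min_votes and not any(iou(box, kb) >= thr for kb in kept):
                kept.append(box)                 # keep one representative per agreement
    return kept

# boxes as (x1, y1, x2, y2), already mapped back to a common coordinate frame
preds = [[(10, 10, 50, 50)], [(12, 11, 49, 52)], []]
print(committee_filter(preds))                   # the box survives with 2 of 3 votes
```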

38 pages, 72935 KB  
Article
Automated, Not Autonomous: Integrating Automated Mineralogy with Complementary Techniques to Refine and Validate Phase Libraries in Complex Mineral Systems
by Lisa I. Kearney, Andrew G. Christy, Elena A. Belousova, Benjamin R. Hines, Alkis Kontonikas-Charos, Mitchell de Bruyn, Henrietta E. Cathey and Vladimir Lisitsin
Minerals 2025, 15(11), 1118; https://doi.org/10.3390/min15111118 - 27 Oct 2025
Abstract
Accurate phase identification is essential for characterising complex mineral systems but remains a challenge in SEM-based automated mineralogy (AM) for compositionally variable rock-forming or accessory minerals. While platforms such as the Tescan Integrated Mineral Analyzer (TIMA) offer high-resolution phase mapping through BSE-EDS data, classification accuracy depends on the quality of the user-defined phase library. Generic libraries often fail to capture site-specific mineral compositions, resulting in misclassification and unclassified pixels, particularly in systems with solid solution behaviour, compositional zoning, and textural complexity. We present a refined approach to developing and validating custom TIMA phase libraries. We outline strategies for iterative rule refinement using mineral chemistry, textures, and BSE-EDS responses. Phase assignments were validated using complementary microanalytical techniques, primarily electron probe microanalysis (EPMA) and laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS). Three Queensland case studies demonstrate this approach: amphiboles in an IOCG deposit; cobalt-bearing phases in a sediment-hosted Cu-Au-Co deposit; and Li-micas in an LCT pegmatite system. Targeted refinement of phases improves identification, reduces unclassified phases, and enables rare phase recognition. Expert-guided phase library development strengthens mineral systems research and downstream applications in geoscience, ore deposits, and critical minerals while integrating datasets across scales from cores to mineral mapping.
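Rule-based phase assignment of the kind described here can be pictured as element-range and BSE-intensity windows evaluated per measurement point; the phase names, ranges, and intensities below are hypothetical placeholders, not entries from the authors' TIMA library.

```python
# Hedged sketch of a phase-library rule check: first matching rule wins.
RULES = [
    # (phase name, EDS element ranges in wt% (hypothetical), BSE intensity window)
    ("Li-mica (lepidolite-like)", {"Si": (20, 30), "Al": (8, 15), "K": (6, 10)}, (40, 70)),
    ("Amphibole (actinolite-like)", {"Si": (22, 28), "Ca": (6, 10), "Mg": (5, 12)}, (60, 95)),
]

def classify(eds_wt_pct, bse):
    for name, elements, (bse_lo, bse_hi) in RULES:
        in_ranges = all(lo <= eds_wt_pct.get(el, 0.0) <= hi
                        for el, (lo, hi) in elements.items())
        if in_ranges and bse_lo <= bse <= bse_hi:
            return name
    return "unclassified"   # unclassified points drive iterative rule refinement

print(classify({"Si": 25.0, "Al": 11.0, "K": 8.0}, 55))  # Li-mica (lepidolite-like)
```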

33 pages, 4302 KB  
Article
Artificial Intelligence-Based Plant Disease Classification in Low-Light Environments
by Hafiz Ali Hamza Gondal, Seong In Jeong, Won Ho Jang, Jun Seo Kim, Rehan Akram, Muhammad Irfan, Muhammad Hamza Tariq and Kang Ryoung Park
Fractal Fract. 2025, 9(11), 691; https://doi.org/10.3390/fractalfract9110691 - 27 Oct 2025
Abstract
The accurate classification of plant diseases is vital for global food security, as diseases can cause major yield losses and threaten sustainable and precision agriculture. The classification of plant diseases in low-light noisy environments is crucial because crops can be continuously monitored even at night. Important visual cues of disease symptoms can be lost due to the degraded quality of images captured under low illumination, resulting in poor performance of conventional plant disease classifiers. While researchers have proposed various techniques for classifying plant diseases under daylight conditions, no studies have addressed low-light noisy environments. Therefore, we propose a novel model for classifying plant diseases from low-light noisy images called dilated pixel attention network (DPA-Net). DPA-Net uses a pixel attention mechanism and multi-layer dilated convolution with a high receptive field, which obtains essential features while highlighting the most relevant information under this challenging condition, allowing more accurate classification results. Additionally, we performed fractal dimension estimation on diseased and healthy leaves to analyze the structural irregularities and complexities. For the performance evaluation, experiments were conducted on two public datasets: the PlantVillage and Potato Leaf Disease datasets. In both datasets, the image resolution is 256 × 256 pixels in joint photographic experts group (JPG) format. For the first dataset, DPA-Net achieved an average accuracy of 92.11% and harmonic mean of precision and recall (F1-score) of 89.11%. For the second dataset, it achieved an average accuracy of 88.92% and an F1-score of 88.60%. These results revealed that the proposed method outperforms state-of-the-art methods. On the first dataset, our method achieved an improvement of 2.27% in average accuracy and 2.86% in F1-score compared to the baseline. Similarly, on the second dataset, it attained an improvement of 6.32% in average accuracy and 6.37% in F1-score over the baseline. In addition, we confirm that our method is effective on a real low-illumination dataset we constructed by capturing images at 0 lux using a smartphone at night. This approach provides farmers with an affordable practical tool for early disease detection, which can support crop protection worldwide.
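The two ingredients this abstract names, pixel attention and multi-layer dilated convolution, have well-known generic forms; the PyTorch sketch below shows one plausible arrangement, with channel counts and dilation rates chosen for illustration rather than taken from DPA-Net.

```python
# Generic pixel-attention gate plus multi-dilation conv block (illustrative sizes).
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Conv2d(c, 1, kernel_size=1)     # per-pixel attention logits

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))         # reweight each spatial location

class DilatedBlock(nn.Module):
    def __init__(self, c, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=r, dilation=r) for r in rates
        )
        self.attn = PixelAttention(c)

    def forward(self, x):
        y = sum(b(x) for b in self.branches)           # widen the receptive field
        return self.attn(torch.relu(y))

print(DilatedBlock(16)(torch.randn(1, 16, 64, 64)).shape)  # (1, 16, 64, 64)
```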

26 pages, 1451 KB  
Article
Hierarchical Multi-Stage Attention and Dynamic Expert Routing for Explainable Gastrointestinal Disease Diagnosis
by Muhammad John Abbas, Hend Alshaya, Wided Bouchelligua, Nehal Hassan and Inzamam Mashood Nasir
Diagnostics 2025, 15(21), 2714; https://doi.org/10.3390/diagnostics15212714 - 27 Oct 2025
Abstract
Purpose: Gastrointestinal (GI) illness demands precise and efficient diagnostics, yet conventional approaches (e.g., endoscopy and histopathology) are time-consuming and prone to reader variability. This work presents GID-Xpert, a deep learning framework designed to improve feature learning, accuracy, and interpretability for GI disease classification. Methods: GID-Xpert integrates a hierarchical, multi-stage attention-driven mixture of experts with dynamic routing. The architecture couples spatial–channel attention mechanisms with specialized expert blocks; a routing module adaptively selects expert paths to enhance representation quality and reduce redundancy. The model is trained and evaluated on three benchmark datasets—WCEBleedGen, GastroEndoNet, and the King Abdulaziz University Hospital Capsule (KAUHC) dataset. Comparative experiments against state-of-the-art baselines and ablation studies (removing attention, expert blocks, and routing) are conducted to quantify the contribution of each component. Results: GID-Xpert achieves superior performance with 100% accuracy on WCEBleedGen, 99.98% on KAUHC, and 75.32% on GastroEndoNet. Comparative evaluations show consistent improvements over contemporary models, while ablations confirm the additive benefits of spatial–channel attention, expert specialization, and dynamic routing. The design also yields reduced computational cost and improved explanation quality via attention-driven reasoning. Conclusion: By unifying attention, expert specialization, and dynamic routing, GID-Xpert delivers accurate, computationally efficient, and more interpretable GI disease classification. These findings support GID-Xpert as a credible diagnostic aid and a strong foundation for future extensions toward broader GI pathologies and clinical integration.
(This article belongs to the Special Issue Medical Image Analysis and Machine Learning)
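Dynamic expert routing generally follows the mixture-of-experts pattern: a router scores the experts for each input and only the top-k experts process it. The sketch below shows that generic mechanism; layer sizes, expert count, and k are assumptions, not GID-Xpert's configuration.

```python
# Generic mixture-of-experts layer with softmax top-k routing (illustrative sizes).
import torch
import torch.nn as nn

class MoE(nn.Module):
    def __init__(self, dim=64, n_experts=4, k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                               # x: (B, dim)
        gates = torch.softmax(self.router(x), dim=-1)   # (B, n_experts)
        topv, topi = gates.topk(self.k, dim=-1)         # route each sample to k experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e               # samples routed to expert e
                if mask.any():
                    out[mask] += topv[mask, slot, None] * expert(x[mask])
        return out

print(MoE()(torch.randn(8, 64)).shape)                  # torch.Size([8, 64])
```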

21 pages, 795 KB  
Article
Evaluation Method for the Development Effect of Reservoirs with Multiple Indicators in the Liaohe Oilfield
by Feng Ye, Yong Liu, Junjie Zhang, Zhirui Guan, Zhou Li, Zhiwei Hou and Lijuan Wu
Energies 2025, 18(21), 5629; https://doi.org/10.3390/en18215629 - 27 Oct 2025
Abstract
To address the limitation that single-index evaluation fails to fully reflect the development performance of reservoirs of different types and at various development stages, a multi-index comprehensive evaluation system featuring the workflow of “index screening–weight determination–model evaluation–strategy guidance” was established. Firstly, the grey correlation analysis method (with a correlation degree threshold set at 0.65) was employed to screen 12 key evaluation indicators, including reservoir physical properties (porosity, permeability) and development dynamics (recovery factor, water cut, well activation rate). Subsequently, the fuzzy analytic hierarchy process (FAHP, for subjective weighting, with the consistency ratio (CR) of expert judgments < 0.1) was coupled with the attribute measurement method (for objective weighting, with information entropy redundancy < 5%) to determine the indicator weights, thereby balancing the influences of subjective experience and objective data. Finally, two evaluation models, namely the fuzzy comprehensive decision-making method and the unascertained measurement method, were constructed to conduct evaluations on 308 reservoirs in the Liaohe Oilfield (covering five major categories: integral medium–high-permeability reservoirs, complex fault-block reservoirs, low-permeability reservoirs, special lithology reservoirs, and thermal recovery heavy oil reservoirs). The results indicate that there are 147 high-efficiency reservoirs categorized as Class I and Class II in total. Although these reservoirs account for 47.7% of the total number, they control 71% of the geological reserves (154,548 × 10⁴ t) and 78% of the annual oil production (738.2 × 10⁴ t) in the oilfield, with an average well activation rate of 65.4% and an average recovery factor of 28.9%. Significant quantitative differences are observed in the development characteristics of different reservoir types: Integral medium–high-permeability reservoirs achieve an average recovery factor of 37.6% and an average well activation rate of 74.1% by virtue of their excellent physical properties (permeability mostly > 100 mD), with Block Jin 16 (recovery factor: 56.9%, well activation rate: 86.1%) serving as a typical example. Complex fault-block reservoirs exhibit optimal performance at the stage of “recovery degree > 70%, water cut ≥ 90%”, where 65.6% of the blocks are classified as Class I, and the recovery factor of blocks with a “good” rating (42.3%) is 1.8 times that of blocks with a “poor” rating (23.5%). For low-permeability reservoirs, blocks with a rating below medium grade account for 68% of the geological reserves (8403.2 × 10⁴ t), with an average well activation rate of 64.9%. Specifically, Block Le 208 (permeability < 10 mD) has an annual oil production of only 0.83 × 10⁴ t. Special lithology reservoirs show polarized development performance, as Block Shugu 1 (recovery factor: 32.0%) and Biantai Buried Hill (recovery factor: 20.4%) exhibit significantly different development effects due to variations in fracture–vug development. Among thermal recovery heavy oil reservoirs, ultra-heavy oil reservoirs (e.g., Block Du 84 Guantao, with a recovery factor of 63.1% and a well activation rate of 92%) are developed efficiently via steam flooding, while extra-heavy oil reservoirs (e.g., Block Leng 42, with a recovery factor of 19.6% and a well activation rate of 30%) are constrained by reservoir heterogeneity. This system refines the quantitative classification boundaries for four development levels of water-flooded reservoirs (e.g., for Class I reservoirs in the high water cut stage, the recovery factor is ≥35% and the water cut is ≥90%), as well as the evaluation criteria for different stages (steam huff and puff, steam flooding) of thermal recovery heavy oil reservoirs. It realizes the transition from traditional single-index qualitative evaluation to multi-index quantitative evaluation, and the consistency between the evaluation results and the on-site development adjustment plans reaches 88%, which provides a scientific basis for formulating development strategies for the Liaohe Oilfield and other similar oilfields.
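The grey correlation screening step can be illustrated with the standard grey relational grade computation, applying the 0.65 threshold the abstract cites; the toy data and the resolution coefficient rho = 0.5 are illustrative, not the study's inputs.

```python
# Sketch of grey relational screening: drop indicators with grade < 0.65.
import numpy as np

def grey_relational_grades(X, ref, rho=0.5):
    # X: (n_samples, n_indicators), ref: (n_samples,) reference series, all normalized to [0, 1]
    delta = np.abs(X - ref[:, None])
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    return xi.mean(axis=0)                              # one grade per indicator

rng = np.random.default_rng(1)
X = rng.random((30, 5))          # 5 candidate indicators (stand-in data)
ref = rng.random(30)             # e.g., a normalized recovery-factor series
grades = grey_relational_grades(X, ref)
keep = grades >= 0.65            # threshold from the abstract
print(grades.round(3), keep)
```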

14 pages, 370 KB  
Article
Integrating AI Systems in Criminal Justice: The Forensic Expert as a Corridor Between Algorithms and Courtroom Evidence
by Ido Hefetz
Forensic Sci. 2025, 5(4), 53; https://doi.org/10.3390/forensicsci5040053 - 27 Oct 2025
Abstract
Background: Artificial intelligence is transforming forensic fingerprint analysis by introducing probabilistic demographic inference alongside traditional pattern matching. This study explores how AI integration reshapes the role of forensic experts from interpreters of physical traces to epistemic corridors who validate algorithmic outputs and translate them into legally admissible evidence. Methods: A conceptual proof-of-concept exercise compares traditional AFIS-based workflows with AI-enhanced predictive models in a simulated burglary scenario involving partial latent fingermarks. The hypothetical design, which does not rely on empirical validation, illustrates the methodological contrasts between physical and algorithmic inference. Results: The comparison demonstrates how AI-based demographic classification can generate investigative leads when conventional matching fails. It also highlights the evolving responsibilities of forensic experts, who must acquire competencies in statistical validation, bias detection, and explainability while preserving traditional pattern-recognition expertise. Conclusions: AI should augment rather than replace expert judgment. Forensic practitioners must act as critical mediators between computational inference and courtroom testimony, ensuring that algorithmic evidence meets legal standards of transparency, contestability, and scientific rigor. The paper concludes with recommendations for validation protocols, cross-laboratory benchmarking, and structured training curricula to prepare experts for this transformed epistemic landscape.
(This article belongs to the Special Issue Feature Papers in Forensic Sciences)

19 pages, 2146 KB  
Article
Surfactant-Enriched Cross-Linked Scaffold as an Environmental and Manufacturing Feasible Approach to Boost Dissolution of Lipophilic Drugs
by Abdelrahman Y. Sherif, Doaa Hasan Alshora and Mohamed A. Ibrahim
Pharmaceutics 2025, 17(11), 1387; https://doi.org/10.3390/pharmaceutics17111387 - 26 Oct 2025
Abstract
Background/Objectives: The inherent low aqueous solubility of lipophilic drugs, belonging to Class II of the Biopharmaceutics Classification System, negatively impacts their oral bioavailability. Moreover, the manufacturing of pharmaceutical dosage forms for these drugs faces challenges related to environmental impact and production complexity. Herein, the surfactant-enriched cross-linked scaffold addresses the limitations of conventional approaches, such as the use of organic solvents, energy-intensive processing, and the demand for sophisticated equipment. Methods: A scaffold former (Pluronic F68) and a scaffold trigger agent (propylene glycol) were used to prepare a cross-linked scaffold loaded with candesartan cilexetil as a model lipophilic drug. Moreover, surfactants were selected based on the measured solubility to enhance formulation loading capacity. Design-Expert was used to study the impact of Tween 80, propylene glycol, and Pluronic F68 concentrations on the measured responses. In addition, an in vitro dissolution study was conducted to investigate the drug release profile. The current approach was assessed against the limitations of the conventional approach in terms of environmental and manufacturing feasibility. Results: The optimized formulation (59.27% Tween 80, 30% propylene glycol, 10.73% Pluronic F68) demonstrated a superior drug loading capacity (19.3 mg/g) and exhibited a solid-to-liquid phase transition at 35.5 °C, completing the transition within about 3 min. The in vitro dissolution study revealed a remarkable enhancement in dissolution, with 92.87% dissolution efficiency compared to 1.78% for the raw drug. Conclusions: The surfactant-enriched cross-linked scaffold reduced environmental impact by eliminating organic solvent usage and reducing energy consumption. Moreover, it offers significant manufacturing advantages through simplified production processing.
(This article belongs to the Section Physical Pharmacy and Formulation)
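Dissolution efficiency figures such as the 92.87% reported here are conventionally computed as the area under the dissolution curve relative to 100% release over the same window (Khan's method); the sketch below uses made-up time and release values purely for illustration.

```python
# Dissolution efficiency (DE) via the trapezoidal rule; data points are illustrative.
import numpy as np

t = np.array([0, 5, 10, 15, 30, 45, 60])          # minutes (stand-in sampling times)
released = np.array([0, 40, 65, 80, 90, 93, 95])  # % drug dissolved (stand-in values)

de = np.trapz(released, t) / (100.0 * t[-1]) * 100.0  # AUC / area of 100% release
print(f"dissolution efficiency = {de:.2f}%")
```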

20 pages, 7276 KB  
Article
Semantic Segmentation of Coral Reefs Using Convolutional Neural Networks: A Case Study in Kiritimati, Kiribati
by Dominica E. Harrison, Gregory P. Asner, Nicholas R. Vaughn, Calder E. Guimond and Julia K. Baum
Remote Sens. 2025, 17(21), 3529; https://doi.org/10.3390/rs17213529 - 24 Oct 2025
Abstract
Habitat complexity plays a critical role in coral reef ecosystems by enhancing habitat availability, increasing ecological resilience, and offering coastal protection. Structure-from-motion (SfM) photogrammetry has become a standard approach for quantifying habitat complexity in reef monitoring programs. However, a major bottleneck remains in the two-dimensional (2D) classification of benthic cover in three-dimensional (3D) models, where experts are required to manually annotate individual colonies and identify coral species or taxonomic groups. With recent advances in deep learning and computer vision, automated classification of benthic habitats is possible. While some semi-automated tools exist, they are often limited in scope or do not provide semantic segmentation. In this investigation, we trained a convolutional neural network with the ResNet101 architecture on three years (2015, 2017, and 2019) of human-annotated 2D orthomosaics from Kiritimati, Kiribati. Our model accuracy ranged from 71% to 95%, with an overall accuracy of 84% and a mean intersection over union of 0.82, despite highly imbalanced training data, and it demonstrated successful generalizability when applied to new, untrained 2023 plots. Successful automation depends on training data that captures local ecological variation. As coral monitoring efforts move toward standardized workflows, locally developed models will be key to achieving fully automated, high-resolution classification of benthic communities across diverse reef environments.
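The reported mean intersection over union can be computed from a class confusion matrix as in the sketch below; the class count and label arrays are stand-ins, not the study's data.

```python
# Mean IoU for multi-class segmentation from a confusion matrix (illustrative data).
import numpy as np

def mean_iou(pred, target, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for p, t in zip(pred.ravel(), target.ravel()):
        cm[t, p] += 1                           # rows: true class, cols: predicted class
    inter = np.diag(cm)
    union = cm.sum(0) + cm.sum(1) - inter
    iou = inter / np.maximum(union, 1)          # guard against empty classes
    return iou.mean()

rng = np.random.default_rng(0)
target = rng.integers(0, 4, size=(64, 64))      # 4 benthic classes (stand-in labels)
pred = target.copy(); pred[::7] = 0             # corrupt some rows to simulate errors
print(round(mean_iou(pred, target, 4), 3))
```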

28 pages, 70123 KB  
Article
Synthetic Rebalancing of Imbalanced Macro Etch Testing Data for Deep Learning Image Classification
by Yann Niklas Schöbel, Martin Müller and Frank Mücklich
Metals 2025, 15(11), 1172; https://doi.org/10.3390/met15111172 - 23 Oct 2025
Abstract
The adoption of artificial intelligence (AI) in industrial manufacturing lags behind research progress, partly due to smaller, imbalanced datasets derived from real processes. In non-destructive aerospace testing, this challenge is amplified by the low defect rates of high-quality manufacturing. This study evaluates the use of synthetic data, generated via multiresolution stochastic texture synthesis, to mitigate class imbalance in material defect classification for the superalloy Inconel 718. Multiple datasets with increasing imbalance were sampled, and an image classification model was tested under three conditions: native data, data augmentation, and synthetic data inclusion. Additionally, round robin tests with experts assessed the realism and quality of synthetic samples. Results show that synthetic data significantly improved model performance on highly imbalanced datasets. Expert evaluations provided insights into identifiable artificial properties and class-specific accuracy. Finally, a quality assessment model was implemented to filter low-quality synthetic samples, further boosting classification performance to near the balanced reference level. These findings demonstrate that synthetic data generation, combined with quality control, is an effective strategy for addressing class imbalance in industrial AI applications.
(This article belongs to the Special Issue Machine Learning Models in Metals (2nd Edition))
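The rebalancing recipe, generating synthetic minority-class samples and keeping only those that pass a quality model, can be outlined as follows; `synthesize` and `quality_score` are placeholders standing in for the texture-synthesis generator and the quality assessment model, and the cutoff is an assumed value.

```python
# Hedged sketch: top up minority classes with quality-filtered synthetic samples.
import random

def rebalance(images_by_class, synthesize, quality_score, q_min=0.8, max_tries=10_000):
    target = max(len(v) for v in images_by_class.values())   # match the majority class
    for label, images in images_by_class.items():
        tries = 0
        while len(images) < target and tries < max_tries:
            tries += 1
            candidate = synthesize(random.choice(images))    # texture-synthesis step
            if quality_score(candidate) >= q_min:            # discard low-quality samples
                images.append(candidate)
    return images_by_class
```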

17 pages, 2735 KB  
Article
Relation Extraction in Spanish Medical Texts Using Deep Learning Techniques for Medical Knowledge Representation
by Gabriela A. García-Robledo, Maricela Bravo, Alma D. Cuevas-Rasgado, José A. Reyes-Ortiz and Josué Padilla-Cuevas
Appl. Sci. 2025, 15(21), 11352; https://doi.org/10.3390/app152111352 - 23 Oct 2025
Abstract
Relation extraction in natural language processing (NLP) is the task of identifying interactions between entities within a text. This approach facilitates comprehension of context and meaning. In the medical field, it is of particular significance due to the substantial volume of information contained in scientific articles. This paper explores various training strategies for medical relationship extraction using large pre-trained language models. The findings indicate significant variations in performance between models trained with general domain data and those specialized in the medical domain. Furthermore, a methodology is proposed that utilizes language models for relation extraction with hyperparameter optimization techniques. This approach uses a triplet-based system, which provides a framework for the organization of relationships between entities and facilitates the development of medical knowledge graphs in the Spanish language. The training process was conducted using a dataset constructed and validated by medical experts, focused on relationships between entities including anatomy, medications, and diseases. The final model demonstrated 85.9% accuracy in the relationship classification task, substantiating the efficacy of the proposed approach.
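Triplet-style relation extraction with a pre-trained language model is commonly implemented by marking the entity pair in the sentence and classifying the sequence; the sketch below follows that generic recipe with the Hugging Face transformers API. The checkpoint name and relation set are assumptions rather than the study's choices, and the classification head here is freshly initialized, so a real system would fine-tune it first.

```python
# Hedged sketch of entity-marker relation classification producing a triplet.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL = "PlanTL-GOB-ES/roberta-base-biomedical-clinical-es"  # assumed Spanish biomedical base
RELATIONS = ["treats", "causes", "located_in", "no_relation"]  # illustrative label set

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=len(RELATIONS))

# In practice, the marker tokens would be added to the tokenizer's vocabulary.
text = "El [E1] ibuprofeno [/E1] alivia la [E2] cefalea [/E2] en la mayoría de los casos."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
head, tail = "ibuprofeno", "cefalea"
print((head, RELATIONS[logits.argmax(-1).item()], tail))     # predicted triplet
```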

55 pages, 5577 KB  
Article
Innovative Method for Detecting Malware by Analysing API Request Sequences Based on a Hybrid Recurrent Neural Network for Applied Forensic Auditing
by Serhii Vladov, Victoria Vysotska, Vitalii Varlakhov, Mariia Nazarkevych, Serhii Bolvinov and Volodymyr Piadyshev
Appl. Syst. Innov. 2025, 8(5), 156; https://doi.org/10.3390/asi8050156 - 21 Oct 2025
Abstract
This article develops a malware detection method based on a multi-scale recurrent architecture (time-aware multi-scale LSTM) that integrates salience gating, multi-headed attention, and a sequential statistical change detector (CUSUM). The research aim is to create an algorithm capable of effectively detecting malicious activities in behavioural data streams of executable files with minimal delay while ensuring interpretability of the results for subsequent use in forensic audit and cyber defence systems. To implement the task, deep learning methods (training LSTM models with dynamic consideration of time intervals and adaptive attention mechanisms) and sequence statistical analysis (CUSUM, Kullback–Leibler divergence, and Wasserstein distance), as well as regularisation approaches to improve model stability and explainability, were used. Experimental evaluation demonstrates the high efficiency of the proposed approach, with the neural network model achieving competitive accuracy, recall, and classification balance with a low level of false positives and an acceptable detection delay. Attention and salience profile analysis confirmed the possibility of interpreting signals and detecting abnormal events early, which reduces experts' workload and the number of false positives. The study's contributions are the development of a new hybrid architecture that combines the advantages of recurrent and statistical methods, a formalisation of the theoretical properties of gated cells for long-term memory, and a practical approach to explaining the model's decisions. The method, implemented as a specialised software product, is demonstrated in a forensic audit setting.
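The CUSUM component can be illustrated with the standard one-sided form over a stream of per-event scores: cumulative positive deviations above a drift allowance k trigger a detection once they exceed a threshold h. Both parameters and the score stream below are illustrative, not the paper's settings.

```python
# One-sided CUSUM change detector over a stream of anomaly scores.
def cusum(scores, k=0.05, h=1.0):
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + x - k)       # accumulate positive deviations above drift k
        if s > h:
            return i                  # index at which the change is flagged
    return None

stream = [0.01, 0.02, 0.01, 0.03, 0.40, 0.55, 0.62]   # stand-in per-event scores
print(cusum(stream))                                   # fires shortly after the shift
```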
