Search Results (1,221)

Search Parameters:
Keywords = targeting workflow

12 pages, 1273 KB  
Article
Logistics-Mediated Artificial Sympatry and Its Implications for Molecular Detection of Hylurgus ligniperda
by Jijing Han, Jiaying Wang, Junxia Cui, Li Liu, Xianfeng Chen, Yuhao Cao, Jiaojiao Chen and Xuemei Song
Insects 2026, 17(4), 408; https://doi.org/10.3390/insects17040408 - 9 Apr 2026
Abstract
International timber trade has accelerated the global spread of the invasive red-haired pine bark beetle H. ligniperda, posing persistent challenges to phytosanitary inspection and border biosecurity. Rapid isothermal amplification assays are increasingly deployed in frontline quarantine settings to support timely regulatory decisions. However, their performance under the heterogeneous biological backgrounds typical of traded timber remains insufficiently evaluated, particularly with respect to the practical implications of low-level false-positive signals. We re-evaluated a previously reported isothermal assay for H. ligniperda using conditions that simulate timber transport and routine customs workflows. Fifty non-target arthropod species (predominantly insects), selected from quarantine interception records, were included to represent taxa likely to co-occur in operational contexts. Material from Lema decempunctata consistently generated weak but reproducible amplification signals across replicates. Sanger sequencing excluded contamination, confirming low-level non-target amplification in complex biological matrices. Although the signals were faint, ambiguous results in quarantine settings may trigger shipment detention, confirmatory laboratory testing, or temporary trade restrictions, thereby increasing inspection workload, delaying clearance, and generating avoidable compliance costs. These findings indicate that trade-mediated species assemblages can compromise assay performance beyond expectations derived from conventional taxonomy-based specificity testing. To reduce interpretive uncertainty and associated regulatory burden, we propose a tiered diagnostic workflow combining rapid on-site isothermal screening with specificity-oriented SYBR Green qPCR confirmation. This strategy enhances diagnostic reliability while preserving operational efficiency in applied biosecurity surveillance. Full article
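The tiered workflow proposed above (rapid on-site isothermal screening followed by qPCR confirmation) can be sketched as a simple decision rule. This is a minimal illustration only: the function name, the signal threshold, and the Ct cut-off are hypothetical placeholders, not values from the study.

```python
from typing import Optional

# Hypothetical thresholds for illustration only; real cut-offs would come
# from assay validation data, not these placeholder values.
SCREEN_THRESHOLD = 0.2      # minimum isothermal signal treated as a hit
CONFIRM_CT_CUTOFF = 35.0    # qPCR Ct above this is treated as not confirmed

def tiered_call(isothermal_signal: float, qpcr_ct: Optional[float] = None) -> str:
    """Two-tier decision: tier 1 is the on-site isothermal screen,
    tier 2 is specificity-oriented qPCR confirmation."""
    if isothermal_signal < SCREEN_THRESHOLD:
        return "negative"                           # no escalation needed
    if qpcr_ct is None:
        return "screen-positive: confirm by qPCR"   # escalate to tier 2
    if qpcr_ct <= CONFIRM_CT_CUTOFF:
        return "confirmed"
    return "negative (weak screen signal not confirmed)"
```

The point of the second tier is exactly the failure mode the study documents: a faint non-target signal (e.g. from Lema decempunctata material) passes tier 1 but is resolved before any regulatory action is taken.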
29 pages, 1915 KB  
Article
Evaluation of Global Data for National-Scale Soil Depth Mapping in Data-Scarce Regions: A Case Study from Sri Lanka
by Ebrahim Jahanshiri, Eranga M. Wimalasiri, Yinan Yu and Ranjith B. Mapa
Soil Syst. 2026, 10(4), 47; https://doi.org/10.3390/soilsystems10040047 - 9 Apr 2026
Abstract
High-resolution soil depth maps are valuable for environmental modelling, yet reliable data remains scarce in the tropics. This study evaluates the feasibility of mapping depth to bedrock (DTB) in Sri Lanka using a legacy dataset (n = 88) and global environmental covariates (n = 247). A robust machine learning workflow was employed—including feature selection, hyperparameter tuning, and a stacked ensemble of four algorithms (Random Forest, XGBoost, Cubist, SVM)—to test the limits of global data for local mapping. Despite rigorous optimization, the final ensemble model achieved a performance of R2 = 0.197 (RMSE = 35.4 cm) under spatial cross-validation. While still modest, this result significantly outperforms existing global products and quantifies the “prediction gap” inherent in using ~1 km resolution global covariates to model micro-scale soil variability. An initial exploration involved log-transforming the target variable; however, following rigorous testing, the untransformed depth was modelled directly to avoid bias in back-transformation. A robustness experiment was further conducted, reducing predictors from 24 to 12, which degraded performance, confirming that the model captures complex, physically meaningful climatic interactions rather than fitting noise. The study concludes that while global covariates can capture regional meso-scale trends (explaining ~20% of variance), they are insufficient for resolving local micro-relief (<50 m). The resulting map and uncertainty products provide a critical “baseline” for national planning, but effectively demonstrate that future improvements will require investment in higher-resolution local covariates (e.g., LiDAR) rather than more complex algorithms. Full article
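The spatial cross-validation used to report R² = 0.197 differs from ordinary random splits: whole spatial blocks are held out so that test points are not immediate neighbours of training points. A minimal stdlib sketch of that splitting step (function names and the grid-block scheme are illustrative assumptions, not the paper's exact procedure):

```python
from collections import defaultdict

def spatial_blocks(points, block_size):
    """Group (x, y) sample locations into square spatial blocks.

    Returns a mapping from block index (bx, by) to the list of point
    indices it contains.
    """
    blocks = defaultdict(list)
    for i, (x, y) in enumerate(points):
        blocks[(int(x // block_size), int(y // block_size))].append(i)
    return blocks

def leave_one_block_out(points, block_size):
    """Yield (train_indices, test_indices) pairs, one per spatial block.

    Holding out whole blocks, rather than random points, prevents the
    spatial autocorrelation leakage that inflates random-split scores.
    """
    blocks = spatial_blocks(points, block_size)
    for held_out, test_idx in blocks.items():
        train_idx = [i for b, idx in blocks.items() if b != held_out for i in idx]
        yield train_idx, test_idx
```

Each fold trains on all blocks but one and evaluates on the held-out block, which is why spatially honest scores are typically lower than randomly cross-validated ones.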
(This article belongs to the Special Issue Use of Modern Statistical Methods in Soil Science)
23 pages, 3218 KB  
Article
A Rapid Hairy Root-Based Platform for CRISPR/Cas Optimization and Guide RNA Validation in Lettuce
by Alberico Di Pinto, Valentina Forte, Chiara D’Attilia, Marco Possenti, Barbara Felici, Floriana Augelletti, Giovanna Sessa, Monica Carabelli, Giorgio Morelli, Giovanna Frugis and Fabio D’Orso
Plants 2026, 15(8), 1161; https://doi.org/10.3390/plants15081161 - 9 Apr 2026
Abstract
Cultivated lettuce (Lactuca sativa L.) is a major leafy crop and an emerging model for functional genomics within the Asteraceae family, supported by high-quality reference genomes and efficient transformation systems. Although CRISPR/Cas technology offers powerful opportunities for crop improvement, editing efficiency depends on optimized construct architecture and reliable guide RNA (gRNA) validation. However, a rapid platform for evaluating CRISPR reagents in lettuce is still lacking. Here, we developed an efficient hairy root-based system to accelerate CRISPR/Cas genome editing optimization in L. sativa. Four Agrobacterium rhizogenes strains were compared for hairy root induction in two cultivars, ‘Saladin’ and ‘Osiride’, identifying strain ATCC15834 as the most effective based on transformation frequency and root production. Using this platform, we evaluated multiple CRISPR construct configurations, including alternative promoters for nuclease and gRNA expression. A plant-derived promoter combined with the At-pU6-26 variant significantly improved editing efficiency. As a proof of concept, we targeted LsHB2, the putative ortholog of Arabidopsis thaliana ATHB2, a key regulator of the shade avoidance response, using SpCas9, SaCas9, and LbCas12a nucleases. The system enabled rapid genotyping and quantitative indel profiling. Overall, this workflow provides a robust framework for efficient guide selection and construct optimization in lettuce genome editing. Full article
(This article belongs to the Section Plant Development and Morphogenesis)
27 pages, 3278 KB  
Article
Multimodal PPG-Based Arrhythmia Detection Using a CLIP-Initialized Multi-Task U-Net and LLM-Assisted Reporting
by Youngho Huh, Minhwan Noh, Dongwoo Ji, Yuna Oh and Sukkyu Sun
Sensors 2026, 26(8), 2316; https://doi.org/10.3390/s26082316 - 9 Apr 2026
Abstract
Photoplethysmography (PPG) has emerged as an attractive modality for non-invasive cardiovascular monitoring due to its low cost, unobtrusive nature, and ubiquity in consumer wearable devices. Despite its potential, existing PPG-based arrhythmia detection systems remain limited in scope: (i) most target only atrial fibrillation, (ii) temporal localization of abnormal segments is rarely provided, and (iii) deep learning models lack explainability, hindering adoption in clinical workflows. We present a comprehensive and fully integrated framework for multi-class arrhythmia detection, segmentation, and explainability based on PPG waveforms, Heart Rate Variability (HRV), and structured clinical metadata. The proposed system introduces a CLIP-style contrastive learning module aligning PPG waveforms with clinical variables and rhythm-state textual descriptions using BioBERT; a multitask U-Net architecture performing 4-class classification and 1D segmentation; a Retrieval-Augmented Generation (RAG) pipeline leveraging Gemini Flash large language models to produce guideline-grounded diagnostic reports; and a real-time Streamlit-based web platform supporting inference, visualization, and database storage. The system significantly improves classification accuracy (from 86.27% to 91.19%) and segmentation Dice (from 0.5815 to 0.7167). These results demonstrate the feasibility of a robust, multimodal, and explainable PPG-based arrhythmia monitoring system for real-world applications. Full article
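The segmentation improvement is reported as a Dice score (0.5815 to 0.7167). For a 1-D segmentation task like localizing abnormal PPG segments, Dice compares the predicted and reference binary masks; a minimal stdlib sketch (the function name and smoothing epsilon are our own, not from the paper):

```python
def dice_1d(pred, target, eps=1e-8):
    """Dice coefficient between two binary 1-D masks of equal length.

    Dice = 2|P ∩ T| / (|P| + |T|); 1.0 is perfect overlap, 0.0 is none.
    The small eps keeps the all-zero case well defined.
    """
    assert len(pred) == len(target)
    intersection = sum(p and t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)
```

Unlike per-sample accuracy, Dice is insensitive to the large background class, which is why it is the standard metric when the abnormal segments occupy only a small fraction of the waveform.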
(This article belongs to the Section Wearables)
30 pages, 28721 KB  
Article
Dual-Arm Robotic Textile Unfolding with Depth-Corrected Perception and Fold Resolution
by Tilla Egerhei Båserud, Joakim Johansen, Ajit Jha and Ilya Tyapin
Robotics 2026, 15(4), 78; https://doi.org/10.3390/robotics15040078 - 8 Apr 2026
Abstract
Reliable textile recycling requires automated unfolding to expose hidden hard components such as zippers, buttons, and metal fasteners, which otherwise risk damaging machinery and compromising downstream processes. This paper presents the design and implementation of an automated textile unfolding system based on a dual-arm robotic manipulation framework. The system uses two Interbotix WidowX 250s 6-DoF robotic arms and an Intel RealSense L515 LiDAR camera for visual perception. The unfolding process consists of three stages: initial dual-arm stretching to reduce major folds, refinement through a second stretch targeting the lower region, and a machine-learning stage that employs a YOLOv11 framework trained on depth-encoded textile images, followed by a depth-gradient-based estimator for fold direction. The system applies an extremity-based grasping strategy that selects leftmost and rightmost textile points from a custom error-corrected depth map, enabling robust grasp point selection, and a fold direction estimation method based on depth gradients around the detected fold. The most confident fold region is selected, an unfolding direction is determined using depth ranking, and the textile is manipulated until a flat state is confirmed through depth uniformity. Experiments show that depth correction significantly reduces spatial error in the robot frame, while segmentation and extremity detection achieve high accuracy across varied fold configurations, and the YOLOv11n-based model reaches 98.8% classification accuracy, while fold direction is estimated correctly in 87% of test cases. By enabling robust, largely autonomous textile unfolding, the system demonstrates a practical approach that could support safer and more efficient automated textile recycling workflows. Full article
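Two of the geometric steps described above are simple enough to sketch: extremity-based grasp selection (leftmost and rightmost textile points) and a depth-gradient heuristic for fold direction. This is a toy illustration under our own assumptions (1-D depth row, mean-depth comparison); the paper's actual estimator operates on error-corrected depth maps around YOLOv11-detected fold regions.

```python
def extremity_grasp_points(points):
    """Pick leftmost and rightmost textile points (x, y, z) as the
    dual-arm grasp targets."""
    leftmost = min(points, key=lambda p: p[0])
    rightmost = max(points, key=lambda p: p[0])
    return leftmost, rightmost

def fold_direction(depth_row, fold_idx):
    """Toy heuristic: compare mean depth on either side of a detected
    fold index in one depth-map row and report which side ranks deeper,
    as a stand-in for the paper's depth-ranking step."""
    left = sum(depth_row[:fold_idx]) / max(fold_idx, 1)
    right = sum(depth_row[fold_idx + 1:]) / max(len(depth_row) - fold_idx - 1, 1)
    return "left" if left > right else "right"
```

The real pipeline repeats detection and manipulation until depth uniformity confirms a flat state, rather than acting on a single row.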
(This article belongs to the Section Sensors and Control in Robotics)
18 pages, 682 KB  
Article
Staff Attitudes Toward Healthcare Waste Separation: An Exploratory Survey from a Triple-Bottom-Line Perspective
by Julia Nike Sturm, Mark Berneburg, Bernadett Kurz and Dennis Niebel
Healthcare 2026, 14(8), 975; https://doi.org/10.3390/healthcare14080975 - 8 Apr 2026
Abstract
Background: In 2022, the German healthcare system generated 400,000 tons of waste. Reducing this number could lower greenhouse gas emissions. The waste management plan at the University Medical Center Regensburg, and those of other comparable German facilities, require that glass, cardboard/paper, residual waste, and other non-hazardous materials are collected separately. Objectives: To assess the personal interest, proficiency, opinion, and awareness of waste management among German dermatology staff to develop customized, resource-saving process optimization and training programs. Methods: An online cross-sectional survey was conducted among German dermatology healthcare professionals between 27 February and 4 October 2024. Out of the 100 responses, 84 were complete and subsequently analyzed. Respondents included staff at dermatology wards, outpatient units, and private practices. Data were analyzed descriptively; comparisons were made between clinics and outpatient units, and correlations were identified among the items. Results: Most respondents perceived the amount of waste generated during wound dressing changes as high; more than 60% expressed an interest in receiving further training on sustainability and waste reduction. Although many respondents reported having a good understanding of waste separation, they identified time pressure and stress as the two main obstacles to consistent implementation. Higher self-reported knowledge did not correspond with greater confidence in recycling as an effective waste reduction measure. Conclusions: The findings suggest a discrepancy between awareness and practice regarding sustainable waste management in dermatology. Combining structural and organizational measures with targeted training and workflow optimization could promote more sustainable clinical practices. Full article
(This article belongs to the Section Healthcare and Sustainability)
35 pages, 10124 KB  
Article
An Integrated BIM–NLP Framework for Design-Informed Automated Construction Schedule Generation
by Mahmoud Donia, Emad Elbeltagi, Ahmed Elhakeem and Hossam Wefki
Designs 2026, 10(2), 43; https://doi.org/10.3390/designs10020043 - 7 Apr 2026
Abstract
Artificial intelligence has attracted increasing attention in the construction industry; however, automated time scheduling remains limited in practical applications. Schedule development remains manual, requiring planners to analyze project documents, define activities, estimate durations, and identify relationships based on logical sequence. This process primarily depends on individual experience and skills, making it both time-consuming and prone to human error. From an engineering design perspective, delayed or inconsistent schedule development weakens design-to-construction feedback, limiting the ability to evaluate constructability and time implications of alternative design decisions during early-stage planning. This study proposes an integrated BIM–Natural Language Processing (NLP) framework to automate activity identification, duration estimation, and logical sequencing for construction scheduling. The framework extracts project data from Revit, organizes it into a bill of quantities format, and then generates an activity list, each activity with a unique ID. Using Sentence-BERT (SBERT) embeddings, the framework estimates activity durations based on semantic similarity. The same semantic process is combined with rule-based reasoning to identify logical relationships, including sequences, supported by an Excel-based reference dictionary that includes logical relationships, productivity, and ID structure. Finally, the framework incorporates a crashing module that proportionally adjusts the duration of activities on the longest path to target the project’s completion time without violating relationships. The proposed framework was validated using real construction project data and produced reliable results. 
By producing a tool-ready schedule directly from design-model information, the proposed workflow enables earlier schedule feedback loops and supports design-informed planning by allowing designers and planners to assess the time consequences of model-driven scope changes. The results demonstrate that integrating BIM and NLP can transform conventional schedules into faster, more consistent processes, thereby supporting the construction industry. Full article
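The duration-estimation step above matches a new activity to the semantically closest reference activity by embedding similarity. A minimal sketch of that nearest-neighbour lookup, using cosine similarity over placeholder vectors (in the framework the embeddings would come from an SBERT model, and the reference dictionary from the Excel-based productivity file; the names here are ours):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def estimate_duration(activity_vec, reference):
    """Assign the duration of the semantically closest reference activity.

    `reference` maps activity name -> (embedding, duration_days).
    Returns the matched name and its duration.
    """
    best = max(reference, key=lambda name: cosine(activity_vec, reference[name][0]))
    return best, reference[best][1]
```

The same similarity machinery, combined with rule-based checks, is what the framework uses to propose predecessor/successor relationships between activities.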
24 pages, 2118 KB  
Article
Interpretable QSAR and Complementary Docking for PARP1 Inhibitor Prioritization: Reliability Stratification and Near-Domain Screening
by Alaa M. Elsayad and Khaled A. Elsayad
Pharmaceuticals 2026, 19(4), 584; https://doi.org/10.3390/ph19040584 - 7 Apr 2026
Abstract
Background/Objectives: Poly(ADP-ribose) polymerase 1 (PARP1) is an important therapeutic target in DNA repair-deficient cancers, but discovery of new inhibitors remains constrained by scaffold convergence, tolerability limits, and acquired resistance. This study aimed to develop an interpretable, reliability-stratified cheminformatics workflow for PARP1 potency prioritization and structure-based follow-up. Methods: A curated ChEMBL dataset of 3339 PARP1 inhibitors was encoded using RDKit 2D descriptors and Avalon fingerprints (1143 initial features), then reduced to 132 informative variables by Random Forest-based feature selection. Five regression models were optimized, including a stacked ensemble. Model interpretation was performed using permutation feature importance and SHAP. External near-domain corroboration was assessed using a stringent PubChem similarity expansion (Tanimoto > 0.90) around sub-10 nM seed compounds, followed by comparison with retrievable experimental PARP1 activity values. Top scaffold-diverse candidates were further evaluated by complementary docking against PARP1 (PDB: 4R6E) using AutoDock Vina and cavity-guided docking through the SwissDock platform. Results: The stacked ensemble achieved the best held-out performance (test R2 = 0.723; RMSE = 0.610 pIC50 units), with 83.7% of test predictions within ≤0.75 pIC50 units and only 2.7% exceeding 1.5 pIC50 units. PubChem similarity expansion retrieved approximately 32,450 analogs, of which 3349 were predicted to have IC50 ≤ 10 nM. Among 366 compounds with retrievable experimental PARP1 activity values, predicted versus experimental pIC50 showed a positive association (R2 = 0.124; Pearson r = 0.479), with RMSE = 0.491 and MAE = 0.330 pIC50 units. Three ligands—CID 168873053, CID 175154210, and CID 172894737—showed the strongest complementary docking support and pocket-consistent poses relative to niraparib. 
Conclusions: This workflow provides a transparent and practically useful framework for near-domain PARP1 inhibitor prioritization. The combined QSAR, explainability, external corroboration, and docking strategy supports shortlist generation for experimental follow-up. Full article
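The "near-domain" expansion above keeps only PubChem analogs with Tanimoto similarity > 0.90 to a sub-10 nM seed. On binary fingerprints, Tanimoto is the intersection-over-union of the on bits; a stdlib sketch using sets of on-bit indices (in practice fingerprints come from RDKit or PubChem, and the names here are ours):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as sets of
    on-bit indices: |A ∩ B| / |A ∪ B|."""
    if not fp_a and not fp_b:
        return 1.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def near_domain(seed_fp, candidates, threshold=0.90):
    """Keep candidate names whose fingerprint similarity to the seed
    exceeds the threshold, i.e. the 'near-domain' screening set."""
    return [name for name, fp in candidates.items()
            if tanimoto(seed_fp, fp) > threshold]
```

A 0.90 cut-off is deliberately stringent: it restricts predictions to chemotypes close to the model's training domain, which is what allows the authors to treat the retrieved analogs as reliability-stratified rather than extrapolated.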
(This article belongs to the Section Medicinal Chemistry)
17 pages, 830 KB  
Review
Digital Assessment of Metacognition Across the Psychosis Continuum: Measures, Validity, and Clinical Integration—A Scoping Review
by Vassilis Martiadis, Fabiola Raffone, Salvatore Clemente, Antonietta Massa and Domenico De Berardis
Medicina 2026, 62(4), 704; https://doi.org/10.3390/medicina62040704 - 7 Apr 2026
Abstract
Background and Objectives: Metacognition-related processes (e.g., confidence calibration, self-evaluation and the use of feedback) have been linked to cognitive insight, self-evaluation, and daily functioning in psychosis. However, clinic-based assessments only provide limited information. Digital methods may capture state-like variations and contextual factors, but it is unclear to what extent they operationalise core metacognitive monitoring constructs versus adjacent self-evaluative/insight-related constructs. We mapped digital approaches used to assess metacognition-related constructs across the psychosis spectrum, summarising the associated feasibility and validity. Materials and Methods: We conducted a scoping review (PRISMA-ScR) of psychosis-spectrum studies that used digital tools to assess metacognition-related targets. These included ecological momentary assessment/experience sampling (EMA/ESM), task-based paradigms with confidence ratings, and hybrid approaches. Searches covered MEDLINE (via PubMed), Scopus, and IEEE Xplore, with the final search run on 15 December 2025. We charted constructs, operationalisations, feasibility/engagement indices and reported links with clinical or functional measures. Results: The empirical evidence map comprised 13 studies directly assessing metacognition-related constructs; eight additional implementation/methodological sources were synthesised separately to contextualise feasibility, reporting, ethics, and governance. EMA studies more often assessed adjacent self-evaluative constructs, including context-linked self-appraisal bias, conviction, and self-report–context mismatch in daily life, whereas task-based studies more directly assessed confidence–accuracy calibration and feedback updating. Across EMA studies, greater momentary symptom severity and more restricted contexts were often associated with inflated self-evaluations and divergence from observer-rated functioning. 
Task-based studies indicated that confidence calibration and feedback utilisation may diverge from objective performance; in performance-controlled paradigms, some studies reported comparable metacognitive sensitivity/efficiency, but the overall evidence remains uncertain. Passive sensing was common in psychosis research but was rarely explicitly tied to metacognitive constructs. Conclusions: Current digital work spans both core metacognitive monitoring constructs and adjacent self-evaluative/insight-related constructs, rather than a single unitary construct. Clinical translation remains hypothesis-generating: interpretability may be improved by combining clinical anchors, low-burden EMA, and optional contextual streams, but thresholds, workflows, and signal-action rules require prospective validation. Full article
29 pages, 768 KB  
Review
Beyond Reanalysis: Critical Issues in Data Reuse for Solid Tumor Proteomics
by Federica Franzetti, Nicole Giugni, Manuel Airoldi, Heather Bondi, Tiziana Alberio and Mauro Fasano
Proteomes 2026, 14(2), 16; https://doi.org/10.3390/proteomes14020016 - 7 Apr 2026
Abstract
Proteomics represents a fundamental layer for understanding the molecular complexity of solid tumors by quantifying protein abundance and capturing proteoforms and post-translational modifications undetected in genomics or transcriptomics analyses. As mass spectrometry-based technologies and public proteomics repositories have expanded, opportunities for large-scale data reuse have grown accordingly. Nevertheless, data availability has not been translated into straightforward reuse: differences in experimental design, acquisition strategies, quantification workflows and metadata quality still limit the reproducibility and cross-study comparability. In this review, proteomics data reuse is defined as the systematic reanalysis and integration of publicly available datasets to support precision oncology applications such as biomarker assessment and antibody–drug conjugate target prioritization. We discuss reuse as an end-to-end analytical process, focusing on data analysis workflows, harmonization strategies, and the impact of heterogeneous experimental and analytical choices on interoperability. The increased application of artificial intelligence in proteomics data integration and reuse is also addressed, highlighting its analytical potential while underscoring the risks of overinterpretation when biological context and data structure are not adequately considered. Using colorectal and prostate cancer as representative examples, we illustrate how proteomics data reuse can support biological discovery and translational research, while critically examining the factors that limit robustness and clinical relevance. Full article
17 pages, 673 KB  
Article
Quality of Drug Allergy Documentation in a Resource-Limited Paper-Based Hospital in Pakistan: Audit of Concordance and Completeness
by Akef Obeidat, Athar Ud Din, Muhammad Amir Khan, Amara Asad Khan, Eshal Atif, Muhammad Atif Mazhar, Muhammad Zain Khan and Sadia Qazi
Healthcare 2026, 14(7), 957; https://doi.org/10.3390/healthcare14070957 - 6 Apr 2026
Abstract
Background/Objectives: Accurate drug allergy documentation is essential for patient safety; however, documentation quality remains poor worldwide. In resource-limited settings that rely on paper records, allergy information may become fragmented across multiple forms, and evidence on concordance between paper-based documentation systems is limited. This audit assessed concordance between clinical notes and drug Kardex records, and completeness of drug allergy documentation entries, in a manual hospital system. Methods: This retrospective clinical audit, reported in accordance with SQUIRE 2.0 guidelines, examined 88 randomly selected patient records from 525 consecutive admissions to a general medicine ward in Pakistan during June–July 2024, retrospectively reviewed in August 2024. The audit assessed allergy status documentation in clinical notes and the drug Kardex, evaluated completeness against five internationally recommended elements (drug name, reaction description, severity, date, and treatment), and measured inter-system concordance using McNemar’s test and Cohen’s kappa. Results: Drug allergy status was documented in 25.0% of clinical notes (95% CI: 16.5–35.4%) versus 94.3% of drug Kardex records (95% CI: 87.2–98.1%), representing a 69.3 percentage-point gap (McNemar χ2 = 59.06, p < 0.001). Inter-system agreement was poor (κ = 0.0079; 95% CI: −0.046 to 0.062), with an overall concordance of 28.4%. Discordant pairs showed that undocumented allergy status was far more likely in clinical notes than in the drug Kardex (OR = 62.00). Kardex-only documentation occurred in 62 of 88 patients (70.5%). Among nine patients with documented allergy history in at least one source, none met the five-element completeness standards (0%; 95% CI: 0.0–33.6%). Recorded entries were generic statements such as “drug allergy” or “allergic to antibiotics” without clinically actionable details. 
Conclusions: Drug allergy documentation showed two major quality failures: poor concordance between parallel paper records and lack of actionable detail in recorded entries. The two systems functioned independently rather than as complementary safety checks, with allergy information often present in the drug Kardex but absent from clinical notes. This Kardex-only failure mode may be a practical target for quality improvement through structured five-element templates, prompts for clinicians to review the drug Kardex, and interdisciplinary allergy-reconciliation workflows. These strategies require prospective evaluation in this setting. Full article
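The inter-system agreement statistic reported above (κ = 0.0079) is Cohen's kappa computed from a 2×2 concordance table of notes vs. Kardex documentation. A stdlib sketch of that calculation (the cell labels are ours; the test values below are illustrative, not the study's raw counts):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
        a = documented in both, b = notes only,
        c = Kardex only,       d = documented in neither.
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is agreement expected if the two records were independent."""
    n = a + b + c + d
    p_o = (a + d) / n
    # expected agreement under independence of the two record systems
    p_yes = ((a + b) / n) * ((a + c) / n)
    p_no = ((c + d) / n) * ((b + d) / n)
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)
```

A kappa near zero, as found here, means the two records agree no more often than chance would predict, which is the quantitative basis for the conclusion that the systems function independently rather than as complementary safety checks.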
(This article belongs to the Section Healthcare Quality, Patient Safety, and Self-care Management)
41 pages, 3961 KB  
Review
Open-Source Molecular Docking and AI-Augmented Structure-Based Drug Design: Current Workflows, Challenges, and Opportunities
by Faizul Azam and Suliman A. Almahmoud
Int. J. Mol. Sci. 2026, 27(7), 3302; https://doi.org/10.3390/ijms27073302 - 5 Apr 2026
Abstract
Molecular docking is a foundational technique in computational drug discovery, widely used to generate binding hypotheses, prioritize compounds, and support target-selectivity studies. The continued growth of open-source docking resources, together with improvements in scoring functions, sampling strategies, and hardware acceleration, has substantially lowered barriers to teaching, early-stage hit identification, and reproducible research. Beyond standalone docking engines, the open-source ecosystem now encompasses browser-accessible tools, preparation and analysis utilities, integrative modeling platforms, and AI-augmented methods for pose prediction, rescoring, and virtual screening. These developments have made docking workflows more accessible, customizable, and transparent across diverse research settings. This review examines open-source docking from a workflow-centered perspective, spanning study design, structural-data acquisition, binding-site definition, receptor and ligand preparation, docking execution, and post-docking validation. It further evaluates how open AI methods are being incorporated into these stages to expand structural coverage, improve screening efficiency, and support contemporary structure-based drug design. Collectively, this review outlines a practical and evidence-based framework for the effective use of open-source docking and virtual-screening pipelines in modern drug discovery. Full article

22 pages, 5489 KB  
Article
Parametric Form-Finding for 3D-Printed Housing: A Computational Workflow from Generative Exploration to Architectural Development
by Rodrigo Garcia-Alvarado, Pedro Soza-Ruiz and Eduardo Valenzuela-Astudillo
Appl. Sci. 2026, 16(7), 3527; https://doi.org/10.3390/app16073527 - 3 Apr 2026
Viewed by 244
Abstract
Additive manufacturing in construction is expanding production possibilities for housing; however, its integration into architectural design workflows remains limited. This research proposes a computational workflow for the early-stage form-finding of housing volumes intended for additive construction. A parametric design system was developed to generate a wide range of residential volumetric configurations based on geometric parameters derived from conventional housing typologies and emerging 3D-printed construction practices. The design space was explored through user-driven experimentation and automated evolutionary optimization targeting predefined surface area conditions. In addition, design alternatives were visualized using AI-assisted image generation to support comparative evaluation, translated into BIM models for further architectural development, and tested through physical 3D-printed scale models to assess material expression and constructability. Five design exploration activities involving architects and graduate students produced nearly 200 volumetric alternatives, allowing the system's use and possibilities to be reviewed. The results show that the parametric system enables efficient exploration of both conventional and novel housing forms potentially compatible with additive construction. Vertically articulated volumes with curved envelopes and spatial variation emerged as promising alternatives. The study demonstrates the potential of integrating parametric modeling, evolutionary search, AI-assisted visualization, and physical prototyping to support architectural decision-making and facilitate the incorporation of 3D printing into housing design processes. Full article
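The evolutionary optimization step — searching volumetric parameters toward a predefined surface-area condition — can be sketched as a minimal elitist mutation loop. The box-shaped genome, parameter ranges, and target value below are illustrative assumptions, not the authors' actual Grasshopper/Galapagos-style system.

```python
import random

def surface_area(w, d, h):
    """Envelope surface area of a simple box volume (m^2)."""
    return 2 * (w * d + w * h + d * h)

def evolve(target, gens=200, pop=30, seed=1):
    """Evolve box dimensions so the envelope area approaches `target`.
    Keeps the better half each generation and fills the rest with
    Gaussian-mutated copies of random survivors."""
    rng = random.Random(seed)
    popn = [[rng.uniform(2, 12) for _ in range(3)] for _ in range(pop)]
    fit = lambda ind: abs(surface_area(*ind) - target)
    for _ in range(gens):
        popn.sort(key=fit)
        parents = popn[:pop // 2]
        children = [[max(0.5, g + rng.gauss(0, 0.3))
                     for g in rng.choice(parents)]
                    for _ in range(pop - len(parents))]
        popn = parents + children
    return min(popn, key=fit)

best = evolve(300)  # target envelope of 300 m^2, a hypothetical brief
```

A real form-finding genome would encode far richer geometry (curved envelopes, vertical articulation), but the select-mutate-replace loop is the same mechanism.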
(This article belongs to the Topic Additive Manufacturing: From Promise to Practice)

21 pages, 1026 KB  
Article
A Spatial and Cluster-Based Framework for Identifying Railroad Trespassing Hotspots
by Habeeb Mohammed, Rongfang Liu and Steven Jiang
Systems 2026, 14(4), 396; https://doi.org/10.3390/systems14040396 - 3 Apr 2026
Viewed by 227
Abstract
Rail trespassing remains a persistent safety challenge at the system level in the United States, with a 24% increase in incidents within the last decade (2016–2025). Identifying hotspots proactively is difficult due to limited incident data and strong spatial dependencies within the built environment. This study thus creates a ZIP-code–level geospatial analytics framework to identify current and emerging trespassing hotspots across North Carolina by combining land-use composition, rail exposure metrics, and historical Federal Railroad Administration (FRA) trespassing records. Geospatial layers were integrated within a GIS workflow to derive attributes such as rail miles, grade crossings, population density, and land-use types. Exploratory spatial analysis showed significant clustering of trespassing incidents, with Global Moran’s I indicating positive spatial autocorrelation across multiple neighborhood sizes. Permutation z-scores confirmed non-random hotspot formation along major rail corridors. A k-means clustering method also identified four structural risk environments, and a Composite Risk Index (CRI) was developed from weighted, standardized exposure and land-use variables to quantify latent risk, independent of raw casualty counts. Results indicate that clusters characterized by higher rail infrastructure exposure and mixed land-use environments exhibit the highest CRI values and elevated hotspot probabilities. In contrast, clusters with limited rail infrastructure, including predominantly commercial and rural ZIP codes, show substantially lower risk levels. The findings highlight that trespassing risk is more strongly associated with structural exposure conditions than with isolated historical incident counts. The resulting risk surfaces and hotspots provide an interpretable and scalable framework for statewide safety planning, early hotspot detection, and targeted interventions by transportation agencies. Full article
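The global clustering test used here, Moran's I, can be computed directly from a value vector and a spatial weight matrix. The four-zone chain below is a toy example — hypothetical incident counts and binary adjacency weights — intended only to show the statistic's mechanics, not the paper's ZIP-code data.

```python
def morans_i(values, weights):
    """Global Moran's I: (n / W) * sum_ij w_ij (x_i - xbar)(x_j - xbar)
    / sum_i (x_i - xbar)^2, where W is the sum of all weights.
    Positive values indicate spatial clustering of similar values."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_total = sum(sum(row) for row in weights)
    return (n / w_total) * (num / den)

# Four zones along a rail corridor: high counts cluster at one end.
counts = [10, 9, 1, 0]
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
i_stat = morans_i(counts, adj)  # positive -> clustered
```

In practice the permutation z-scores reported in the study come from recomputing I over many random reshufflings of the values and comparing the observed statistic to that null distribution.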
(This article belongs to the Special Issue Multimodal and Intermodal Transportation Systems in the AI Era)

34 pages, 56063 KB  
Article
Deep Learning-Based Intelligent Analysis of Rock Thin Sections: From Cross-Scale Lithology Classification to Grain Segmentation for Quantitative Fabric Characterization
by Wenhao Yang, Ang Li, Liyan Zhang and Xiaoyao Qin
Electronics 2026, 15(7), 1509; https://doi.org/10.3390/electronics15071509 - 3 Apr 2026
Viewed by 233
Abstract
Quantitative microstructure evaluation of sedimentary rock thin sections is essential for revealing reservoir flow mechanisms and assessing reservoir quality. However, traditional manual identification is inefficient and prone to subjectivity. Although current deep learning approaches have improved efficiency, most remain confined to single tasks and lack a pathway to translate image recognition into quantifiable geological parameters. Moreover, these methods struggle with cross-scale feature extraction and accurate grain boundary localization in complex textures. To overcome these limitations, this study proposes a three-stage automated analysis framework integrating intelligent lithology identification, sandstone grain segmentation, and quantitative analysis of fabric parameters. To address scale discrepancies in lithology discrimination, Rock-PLionNet integrates a Partial-to-Whole Context Fusion (PWC-Fusion) module and the Lion optimizer, which mitigates cross-scale feature inconsistencies and enables accurate screening of target sandstone samples. Subsequently, to correct boundary deviations caused by low contrast and grain adhesion, the PetroSAM-CRF strategy integrates polarization-aware enhancement with dense conditional random field (DenseCRF)-based probabilistic refinement to extract precise grain contours. Based on these outputs, the framework automatically calculates key fabric parameters, including grain size and roundness. Experiments on 3290 original multi-source thin-section images show that Rock-PLionNet achieves a classification accuracy of 96.57% on the test set. Furthermore, PetroSAM-CRF reduces segmentation bias observed in general-purpose models under complex texture conditions, enabling accurate parameter estimation with a roundness error of 2.83%. Overall, this study presents an intelligent workflow linking microscopic image recognition with quantitative analysis of geological fabric parameters, providing a practical pathway for digital petrographic evaluation in hydrocarbon exploration. Full article
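Among the fabric parameters computed in the final stage, roundness is commonly approximated by the circularity ratio 4πA/P², which equals 1 for a circle and decreases for irregular grains. The polygon-contour version below (shoelace area plus perimeter) is a generic sketch of such a measurement under that assumption, not necessarily the paper's exact definition.

```python
from math import pi, hypot, cos, sin

def circularity(contour):
    """4*pi*A / P^2 for a closed polygon contour given as (x, y)
    vertices: 1.0 for a perfect circle, lower for irregular grains."""
    edges = list(zip(contour, contour[1:] + contour[:1]))
    # Shoelace formula for the enclosed area.
    area = abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in edges)) / 2
    perim = sum(hypot(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in edges)
    return 4 * pi * area / perim ** 2

# A regular 64-gon closely approximates a circle, so its
# circularity should be just under 1.
poly = [(cos(2 * pi * k / 64), sin(2 * pi * k / 64)) for k in range(64)]
c = circularity(poly)
```

On segmented grain masks the same ratio is typically computed from pixel counts (area) and extracted boundary contours (perimeter), with grain size taken as the equivalent-circle diameter 2·sqrt(A/π).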
