Search Results (31,560)

Search Parameters:
Keywords = Re27

17 pages, 266 KB  
Article
The Engineered Messiah: Islamic Theology as Source Code in the Post-Cybernetic Universe of Dune
by Nimetullah Aldemir and Emrullah Ataseven
Religions 2026, 17(3), 372; https://doi.org/10.3390/rel17030372 (registering DOI) - 17 Mar 2026
Abstract
Frank Herbert’s Dune (1965) establishes a universe defined by the “Butlerian Jihad”, a historical crusade that banned artificial intelligence and created a vacuum filled by religious engineering. This paper argues that in this post-cybernetic setting, religion functions as a sociological operating system designed for political control rather than a metaphysical connection to the divine. The study analyzes the Missionaria Protectiva to demonstrate how the Bene Gesserit order creates belief systems by co-opting and re-engineering Islamic theology. It suggests that the order’s manual of superstitions serves as a library of cultural scripts that primes the indigenous population to accept a manufactured Messiah, specifically the Mahdi. Consequently, the protagonist Paul Atreides is reinterpreted not as a traditional “White Savior” or authentic religious prophet but as a “hacker” who utilizes these pre-planted Islamic codes to access and manipulate the social infrastructure of Arrakis. His prescience functions as a form of biological predictive analytics that traps him in a deterministic loop of his own calculation. Ultimately, this reading suggests that Dune offers a critique of “techno-theology” by showing how the instrumentalization of the Mahdi figure transforms the concept of Jihad from a spiritual struggle into an unstoppable, automated algorithm of violence. Full article
(This article belongs to the Special Issue Religion in 20th- and 21st-Century Fictional Narratives)
27 pages, 1023 KB  
Article
MoRe: LLM-Based Domain Model Generation with Hybrid Self-Refinement
by Ru Chen, Jingwei Shen and Xiao He
Electronics 2026, 15(6), 1239; https://doi.org/10.3390/electronics15061239 (registering DOI) - 17 Mar 2026
Abstract
Generating domain models from requirements is a vital and complex challenge in automated software engineering. Although large language models (LLMs) have exhibited significant competence in this area, their propensity for hallucination frequently results in models that are redundant, inconsistent, or structurally unsound. To enhance the quality of automatically generated models, this paper introduces MoRe, an LLM-based approach to domain model generation with self-refinement. Within our approach, an LLM is first tasked with producing an initial domain model draft. Subsequently, a hybrid refinement—combining LLMs with a rule-based scanner—is employed to identify and correct common issues in the model. An empirical study was conducted using 30 domain modeling problems and four open-source LLMs. The results indicate that MoRe significantly improves the quality of generated domain models. This paper advocates for incorporating a self-refinement phase as a standard component in any automated modeling workflow. Full article
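The hybrid refinement step described in this abstract pairs an LLM with a rule-based scanner. A minimal sketch of what such a scanner might check is shown below; the model schema, the rules, and all names here are our assumptions for illustration, not MoRe's actual implementation.

```python
# Illustrative rule-based domain-model scanner (hypothetical schema/rules,
# not MoRe's actual code): flag redundant classes and dangling references.

def scan_domain_model(model):
    """Return a list of issue strings for a toy domain model.

    `model` is a dict: {"classes": [names], "associations": [(src, dst)]}.
    """
    issues = []
    seen = set()
    for name in model.get("classes", []):
        if name in seen:                       # rule 1: redundant class
            issues.append(f"duplicate class: {name}")
        seen.add(name)
    for src, dst in model.get("associations", []):
        for end in (src, dst):
            if end not in seen:                # rule 2: dangling reference
                issues.append(f"association references undefined class: {end}")
    return issues

model = {
    "classes": ["Order", "Customer", "Order"],
    "associations": [("Order", "Customer"), ("Order", "Invoice")],
}
issues = scan_domain_model(model)
```

Issues found this way could then be handed back to the LLM as correction prompts, which is the general shape of a hybrid self-refinement loop.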
22 pages, 2065 KB  
Article
Thermo-Mechanical Design of the C/C-SiC-Based Thermal Protection Structure for the Forebody of the Hypersonic Sounding Rocket STORT
by Giuseppe Daniele Di Martino, Thomas Reimer, Luis Baier, Lucas Dauth, Dorian Hargarten and Ali Gülhan
Aerospace 2026, 13(3), 278; https://doi.org/10.3390/aerospace13030278 (registering DOI) - 16 Mar 2026
Abstract
Re-entry flights of reusable first or upper stages typically include phases in the hypersonic flight regime, characterized by severe aero-thermal loads that can become critical for the most exposed components, such as the vehicle forebody or the fin leading edges. These components consequently require dedicated thermal protection systems (TPS), whose design generally calls for a multi-disciplinary approach. In this framework, a viable solution is the use of high-temperature-resistant ceramic matrix composite (CMC) structures, but implementing this technology, especially for manufacturing complex components and applying it in real flight conditions, still presents significant challenges. In this work, the design activities for the CMC-based TPS of the payload forebody of a hypersonic sounding rocket are presented, developed within the framework of the STORT project, whose mission includes in-flight demonstration of multiple critical technologies required for sustained flight at Mach numbers above 8, corresponding to a substantial integral thermal load. Full article
(This article belongs to the Section Aeronautics)
16 pages, 314 KB  
Article
Effects of Guanidinoacetic Acid and Metabolizable Energy Levels on Performance and Nutrient Metabolism in Broilers
by Patrícia Tomazini Medeiros, Edenilse Gopinger, Everton Luis Krabbe, Victor Naranjo, José Henrique Stringhini and Alex Maiorka
Animals 2026, 16(6), 935; https://doi.org/10.3390/ani16060935 (registering DOI) - 16 Mar 2026
Abstract
The effects of three metabolizable energy (ME) levels and the use of guanidinoacetic acid (GAA) were evaluated on broiler performance and nutrient digestibility from 1 to 35 d of age. In total, 1944 one-day-old Ross AP95 male broilers were randomly distributed to six treatments (12 replicates/treatment). Diets were formulated to contain three ME levels (standard energy [SE], −50 kcal/kg reduced energy [−50 RE] and −100 kcal/kg reduced energy [−100 RE]) in all feeding phases with or without GAA inclusion. For the nutrient metabolizability analysis, 960 one-day-old male broilers were separately raised in floor pens until 14 d of age and randomly distributed to six treatments (16 replicates/treatment). Data were analyzed with ANOVA and Tukey’s test at p ≤ 0.05. There was a significant interaction for the feed conversion ratio (FCR) at 21 days, in which the PC diet showed better FCR when GAA was included. In the evaluation of the main effects, an effect of ME was observed on body condition score (BCS) at 7 and 21 days, feed intake (FI) at 21 and 35 days, and FCR at 21 days, with the PC diet showing better FCR and lower FI. An effect of GAA was observed on FCR at 21 days, with the inclusion of GAA in the diet showing better FCR. In conclusion, broilers fed SE diets with GAA, in addition to better performance, had improved AME and AMEn compared to broilers fed RE diets without GAA in starter diets. Full article
(This article belongs to the Section Poultry)
27 pages, 1186 KB  
Review
Gap Junction–Mediated Communication in Melanoma: From Tumor Progression to Treatment Response
by Juliana Massoud, Sarah Ibrahim, Madison Jensen, Michael C. Beary, Ben Nafchi, Michael Springer and Shoshanna N. Zucker
Int. J. Mol. Sci. 2026, 27(6), 2705; https://doi.org/10.3390/ijms27062705 - 16 Mar 2026
Abstract
Melanoma is a highly malignant neoplasm of the skin with early metastatic spread and increasing incidence worldwide. Despite significant therapeutic advances in immunotherapy, especially with checkpoint inhibitors targeting PD-1 and CTLA-4, challenges such as treatment-related toxicities, a heterogeneous response to therapy, and drug resistance persist. There is an unmet need for novel therapeutic strategies and approaches to complement the existing treatment options. Potential targets for future melanoma treatment are the gap junction proteins, connexins, which show an altered pattern of regulation during melanoma progression. In this review, we highlight the regulation of gap junctions during melanoma progression and the characterization of gap junctions as tumor suppressors during early-stage tumor development and their reversion to enhancers of tumor metastasis during late-stage melanoma progression. We provide a comprehensive overview of gap junctions in the skin and how the connexin proteins, which comprise gap junctions, are alternatively regulated in melanoma progression. In humans, the connexin family consists of 21 isoforms, which form gap junctions that provide important intercellular signaling and permeability channels. Each connexin protein consists of four transmembrane domains and a C-terminal tail, which is an important part of its function and regulation. Permeants of gap junctions include signaling molecules such as cyclic AMP and inositol trisphosphate, which are linked to key cellular behaviors such as proliferation and migration, making them essential for several tumor-related processes. At least ten connexin isoforms are found in normal skin. Connexin 43 (Cx43) is the most prevalent isoform, while Connexin 26 (Cx26) is more specialized, with restricted expression patterns.
Cx43 and Cx26 regulate the growth, differentiation, and repair of the epidermis after injury. Evidence suggests that connexins have a stage-related function in melanoma. Loss of connexin expression and gap junctional intercellular communication is linked to tumor suppression and loss of differentiation in early-stage melanoma, while re-expression or overexpression of specific connexins, notably Cx43, may promote metastasis through enhanced tumor–stromal interactions and increased motility in late-stage melanoma. Such opposing actions of connexins support their candidacy as biomarkers and therapeutic targets. Understanding the dual-stage related functions of connexins in melanoma development and progression may lead to less cytotoxic and more efficient future therapeutic approaches. Full article
26 pages, 1479 KB  
Article
Changes in PSA-Based Early Detection of Prostate Cancer over a 12-Year Period: Findings from the German KABOT Study
by Kay-Patrick Braun, Torsten Vogel, Matthias May, Christian Gilfrich, Markus Herrmann, Anton P. Kravchuk, Julia Maurer and Ingmar Wolff
Healthcare 2026, 14(6), 747; https://doi.org/10.3390/healthcare14060747 - 16 Mar 2026
Abstract
Background: The effectiveness of prostate-specific antigen (PSA)-based early detection of prostate cancer remains controversial and implementation-dependent. Screening policy changes have substantially altered PSA testing behavior in the United States, yet longitudinal evidence from non-organized European settings is limited. We assessed 12-year changes in awareness and utilization of PSA-based early detection and identified subgroups requiring targeted counseling. Methods: Two cross-sectional survey waves were conducted in 2009 (Study Phase 1) and 2021 (Study Phase 2) among men recruited via general practitioner practices in urban and rural regions of Germany. The survey was developed and reported according to the Consensus-Based Checklist for Reporting of Survey Studies (CROSS). Identical questionnaires were used across phases. Endpoints were awareness of PSA-based early detection and prior PSA testing. Univariable and multivariable logistic regression evaluated independent associations with sociodemographic and behavioral factors. To assess sensitivity to compositional differences between survey waves, post-stratified weighted analyses re-aligning Study Phase 2 to the Study Phase 1 distribution of age category, educational attainment, and smoking status were conducted. Results: The analytic cohort comprised 890 men (Study Phase 1, n = 755; Study Phase 2, n = 135). Compared with Study Phase 1, Study Phase 2 participants more frequently were non-smokers (63.0% vs. 48.5%, p < 0.001) and had a university degree (38.5% vs. 30.5%, p = 0.002). In primary multivariable analyses, higher educational attainment (OR 1.71, 95% CI 1.24–2.36) and paternity (OR 1.94, 95% CI 1.25–3.01) were independently associated with greater awareness, whereas increasing age (OR 1.39, 95% CI 1.29–1.50) and higher educational attainment (OR 1.63, 95% CI 1.19–2.24) were independently associated with utilization. 
Study phase was not independently associated with either endpoint in primary models. In post-stratified sensitivity analyses, study phase was positively associated with utilization, indicating sensitivity of temporal contrasts to population composition. Conclusions: In primary multivariable analyses, we did not detect statistically significant temporal differences in awareness or utilization of PSA-based early detection within this German non-organized setting. The emergence of a study phase effect in weighted sensitivity analyses suggests that apparent time trends may be influenced by compositional differences between survey waves. Persistent social gradients, particularly related to educational attainment, underscore the importance of targeted, evidence-based counseling in opportunistic early detection systems. Larger, prospectively designed studies are needed to distinguish true temporal change from sampling-related effects. Full article
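The post-stratified sensitivity analysis described in this abstract re-weights Study Phase 2 respondents so their composition matches Study Phase 1. A minimal sketch of standard post-stratification weighting is below; the strata and numbers are invented for illustration, not the study's data.

```python
# Illustrative post-stratification sketch (hypothetical strata/data, not the
# KABOT study's actual code): weight_i = target share / observed share
# of respondent i's stratum, so the weighted sample matches the target mix.
from collections import Counter

def poststrat_weights(sample_strata, target_props):
    """Per-respondent weights aligning the sample to target stratum shares."""
    n = len(sample_strata)
    observed = {s: c / n for s, c in Counter(sample_strata).items()}
    return [target_props[s] / observed[s] for s in sample_strata]

wave2 = ["<65", "<65", "65+", "<65"]        # observed mix: 75% / 25%
target = {"<65": 0.5, "65+": 0.5}           # wave-1 composition to re-align to
w = poststrat_weights(wave2, target)
```

In practice the strata would be the joint cells of age category, educational attainment, and smoking status, as the abstract specifies.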
(This article belongs to the Special Issue Clinical Updates in Prostate Cancer and Bladder Cancer)
44 pages, 1183 KB  
Article
Towards Reliable LLM Grading Through Self-Consistency and Selective Human Review: Higher Accuracy, Less Work
by Luke Korthals, Emma Akrong, Gali Geller, Hannes Rosenbusch, Raoul Grasman and Ingmar Visser
Mach. Learn. Knowl. Extr. 2026, 8(3), 74; https://doi.org/10.3390/make8030074 - 16 Mar 2026
Abstract
Large language models (LLMs) show promise for grading open-ended assessments but still exhibit inconsistent accuracy, systematic biases, and limited reliability across assignments. To address these concerns, we introduce SURE (Selective Uncertainty-based Re-Evaluation), a human-in-the-loop pipeline that combines repeated LLM prompting, uncertainty-based flagging, and selective human regrading. Three LLMs—gpt-4.1-nano, gpt-5-nano, and the open-source gpt-oss-20b—graded answers of 46 students to 130 open questions and coding exercises across five assignments. Each student answer was scored 20 times to derive majority-voted predictions and self-consistency-based certainty estimates. We simulated human regrading by flagging low-certainty cases and replacing them with scores from four human graders. We used the first assignment as a training set for tuning certainty thresholds and to explore LLM output diversification via sampling parameters, rubric shuffling, varied personas, multilingual prompts, and post hoc ensembles. We then evaluated the effectiveness and efficiency of SURE on the other four assignments using a fixed certainty threshold. Across assignments, fully automated grading with a single prompt resulted in substantial underscoring, and majority-voting based on 20 prompts improved but did not eliminate this bias. Low certainty (i.e., high output diversity) was diagnostic of incorrect LLM scores, enabling targeted human regrading that improved grading accuracy while reducing manual grading time by 40–90%. Aggregating responses from all three LLMs in an ensemble improved certainty-based flagging and most consistently approached human-level accuracy, with 70–90% of the grades students would receive falling inside human-grader ranges. 
A reanalysis based on outputs from a more diversified LLM ensemble comprised of gpt-5, codestral-25.01, and llama-3.3-70b-instruct replicated these findings but also suggested that large reasoning models such as gpt-5 might eliminate the need for human oversight of LLM grading entirely. These findings demonstrate that self-consistency-based uncertainty estimation and selective human oversight can substantially improve the reliability and efficiency of AI-assisted grading. Full article
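The core SURE mechanism, majority-voting repeated LLM scores and flagging low-certainty cases for human regrading, can be sketched in a few lines. The certainty formula (agreement rate) and threshold below are our assumptions, not necessarily the authors' exact choices.

```python
# Sketch of self-consistency grading as described for SURE (the certainty
# measure and the 0.8 threshold are assumptions, not the paper's exact values).
from collections import Counter

def majority_and_certainty(scores):
    """Majority-voted score plus agreement rate as a certainty proxy."""
    counts = Counter(scores)
    score, votes = counts.most_common(1)[0]
    return score, votes / len(scores)

def flag_for_human(runs, threshold=0.8):
    """Return (prediction, needs_human_review) for one student answer."""
    score, certainty = majority_and_certainty(runs)
    return score, certainty < threshold

# 20 repeated LLM gradings of the same answer (toy data)
runs = [2] * 13 + [1] * 5 + [3] * 2
pred, review = flag_for_human(runs)
```

High output diversity (low agreement) triggers review, which matches the abstract's finding that low certainty is diagnostic of incorrect LLM scores.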
(This article belongs to the Section Learning)
18 pages, 2493 KB  
Article
Improved Kernel Correlation Filtering Algorithm Integrating Scale Adaptation and Occlusion Redetection
by Tianbo Liu, Yuya Wang, Hong Sun and Shuai Yuan
Appl. Sci. 2026, 16(6), 2843; https://doi.org/10.3390/app16062843 - 16 Mar 2026
Abstract
To address the limitations of the Kernelized Correlation Filter (KCF) in handling scale variation and occlusion during visual tracking, this paper proposes a scale-adaptive and occlusion-robust KCF-based tracking method. The proposed approach integrates the Histogram of Oriented Gradients (HOGs) and Color Name (CN) features to fully exploit pixel-level information, thereby improving the accuracy of target localization. On this basis, a sub-region-based scale adaptation mechanism is introduced. Specifically, the target is partitioned into multiple sub-regions, and the KCF classifier is applied to each sub-region to estimate its center position. The relative displacement among these sub-region centers is then utilized to estimate target scale variation, enabling adaptive scale tracking. In addition, an occlusion-aware mechanism is designed to enhance robustness under occlusion. During tracking, occlusion detection is performed, and once occlusion is detected, template updating is suspended. Oriented FAST and Rotated BRIEF (ORB) features extracted from the template are subsequently matched with features from subsequent frames to re-acquire the target. Experimental results on the OTB2013 and OTB2015 benchmarks demonstrate that the proposed method achieves competitive precision and success rates compared with the baseline KCF and other representative trackers, while satisfying real-time tracking requirements using only CPU resources, indicating its practical applicability in resource-constrained environments. Full article
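The sub-region scale idea in this abstract, estimating scale change from the relative displacement of sub-region centers, can be illustrated with a toy estimator: the ratio of mean pairwise distances between centers in consecutive frames. This specific formula is our assumption, not necessarily the paper's exact estimator.

```python
# Toy sketch of sub-region-based scale estimation (the distance-ratio rule
# here is an illustrative assumption, not the paper's exact method).
from itertools import combinations
from math import dist

def mean_pairwise(centers):
    """Average Euclidean distance over all pairs of sub-region centers."""
    pairs = list(combinations(centers, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def scale_change(prev_centers, curr_centers):
    """Scale factor between frames from relative center displacements."""
    return mean_pairwise(curr_centers) / mean_pairwise(prev_centers)

prev = [(0, 0), (10, 0), (0, 10), (10, 10)]   # sub-region centers, frame t
curr = [(0, 0), (12, 0), (0, 12), (12, 12)]   # same centers after 20% growth
s = scale_change(prev, curr)
```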
(This article belongs to the Section Computing and Artificial Intelligence)
19 pages, 410 KB  
Article
Asymptotic Non-Hermitian Degeneracy Phenomenon and Its Exactly Solvable Simulation
by Miloslav Znojil
Symmetry 2026, 18(3), 506; https://doi.org/10.3390/sym18030506 (registering DOI) - 16 Mar 2026
Abstract
A conceptually consistent understanding is sought for the interactions sampled by the imaginary cubic oscillator with potential V^(ICO)(x) = ix^3, which is by itself not acceptable as a meaningful quantum model due to a combination of its non-Hermiticity, unboundedness, and, most of all, the Riesz-basis non-diagonalizability of the Hamiltonian, known as its intrinsic exceptional point (IEP) feature. For the purposes of a perturbation-theory-based simulation of the emergence of such a singular system, a simplified (though not too strictly related) toy-model Hamiltonian is proposed. It combines an N-point discretization of the real line of coordinates with an ad hoc interaction in a two-parametric N-by-N matrix Hamiltonian H = H^(N)(A,B). After such a simplification, one can still encounter a somewhat weaker form of non-diagonalizability at the conventional Kato exceptional-point (EP) limit of parameters (A,B) → (A^(EP),B^(EP)). The IEP non-diagonalizability phenomenon itself appears mimicked by the less enigmatic EP degeneracy of the discrete toy model, especially at large N ≫ 1. What we gain is that, in contrast to the IEP case, the regularization of the simplified toy model in the vicinity of the conventional EP becomes feasible. Full article
(This article belongs to the Special Issue Symmetry in Classical and Quantum Gravity and Field Theory)
16 pages, 6152 KB  
Article
DisasterReliefGPT: Multimodal AI for Autonomous Disaster Impact Assessment and Crisis Communication
by Lekshmi Chandrika Reghunath, Athikkal Sudhir Abhishek, Arjun Changat, Arjun Unnikrishnan, Ayush Kumar Rai, Christian Napoli and Cristian Randieri
Technologies 2026, 14(3), 179; https://doi.org/10.3390/technologies14030179 - 16 Mar 2026
Abstract
The work presented herein proposes DisasterReliefGPT, a multimodal AI system for automation in the areas of crisis communication and post-disaster assessment. The system integrates three tightly coupled components: a vision module called DisasterOCS for structural damage detection in satellite images, a Large Vision–Language Model (LVLM) for enhanced visual understanding and contextual reasoning, and a Large Language Model (LLM) to produce detailed, clear assessment reports. DisasterOCS relies on a ResNet34-based encoder with partial weight sharing and event-specific decoders, coupled with a custom MultiCrossEntropyDiceLoss function for multi-class segmentation on pre- and post-disaster image pairs. On the benchmark xBD dataset, the system reaches a damage-classification F1 score of 78.8%, identifying destroyed buildings with 81.3% precision and undamaged structures with 90.7%. By combining these components, emergency responders can immediately obtain reliable and readable damage assessments that directly support urgent decision-making. Full article
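The Dice term that the paper combines with cross-entropy in its MultiCrossEntropyDiceLoss can be illustrated on flattened label maps. This is a generic per-class Dice coefficient, not a reproduction of the paper's exact loss or class weighting.

```python
# Generic per-class Dice coefficient on label maps (illustrative only; the
# paper's MultiCrossEntropyDiceLoss and its exact weighting are not shown).

def dice_per_class(pred, target, num_classes):
    """Dice = 2|A∩B| / (|A| + |B|) for each class over flattened labels."""
    scores = []
    for c in range(num_classes):
        a = [p == c for p in pred]
        b = [t == c for t in target]
        inter = sum(x and y for x, y in zip(a, b))
        total = sum(a) + sum(b)
        scores.append(2 * inter / total if total else 1.0)
    return scores

pred   = [0, 0, 1, 1, 2, 2]   # e.g. 0 = no damage, 1 = damaged, 2 = destroyed
target = [0, 0, 1, 2, 2, 2]
d = dice_per_class(pred, target, 3)
```

Low Dice on rare classes is exactly what such a term penalizes, which is why Dice is commonly paired with cross-entropy for imbalanced segmentation.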
23 pages, 2010 KB  
Article
Visibility-Prior Guided Dual-Stream Mixture-of-Experts for Robust Facial Expression Recognition Under Complex Occlusions
by Siyuan Ma, Long Liu, Mingzhi Cheng, Peijun Qin, Zixuan Han, Cui Chen, Shizhao Yang and Hongjuan Wang
Electronics 2026, 15(6), 1230; https://doi.org/10.3390/electronics15061230 - 16 Mar 2026
Abstract
Facial occlusion induces sample-wise reliability shifts in facial expression recognition (FER), where the usefulness of global context and local discriminative cues varies dramatically with the amount of visible facial information. Existing occlusion-robust FER studies often evaluate under limited or homogeneous occlusion settings and commonly adopt static fusion strategies, which are insufficient for complex and heterogeneous real-world occlusions. In this work, we establish a rigorous occlusion robustness evaluation protocol by constructing a fixed offline test benchmark with diverse synthetic occlusion patterns (e.g., masks, sunglasses, texture blocks, and mixed occlusions) on top of public FER test splits. We further propose a Dual-Stream Adaptive Weighting Mixture-of-Experts framework (DS-AW-MoE) that fuses a global contextual expert and a local discriminative expert via an occlusion-aware weighting network. Crucially, we introduce a facial visibility assessment as a task-agnostic prior to explicitly regulate expert contributions, enabling dynamic re-allocation of model capacity according to input-dependent feature reliability. Extensive experiments on public datasets and the constructed occlusion benchmark demonstrate that DS-AW-MoE achieves more stable recognition under complex occlusions, characterized by a smaller and more consistent performance drop. To support reproducibility under dataset license constraints, we will release an anonymous, fully runnable repository containing the complete occlusion synthesis pipeline, evaluation protocol, and configuration files, allowing researchers to reproduce the benchmark after obtaining the original datasets. Full article
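The visibility-prior gating in DS-AW-MoE dynamically re-weights a global and a local expert. In the paper the gate is learned; the fixed convex-combination rule below is purely our illustration of the idea.

```python
# Hypothetical sketch of visibility-guided expert fusion (DS-AW-MoE's gate
# is a learned network; this fixed rule is our simplification).

def fuse_experts(global_logits, local_logits, visibility):
    """More visible face -> trust the local expert more; else lean global."""
    w_local = visibility              # visibility prior in [0, 1]
    return [w_local * l + (1 - w_local) * g
            for g, l in zip(global_logits, local_logits)]

# Toy 2-class logits from each expert, for a mostly visible face
out = fuse_experts([1.0, 0.0], [0.0, 2.0], visibility=0.75)
```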
(This article belongs to the Topic Computer Vision and Image Processing, 3rd Edition)
20 pages, 460 KB  
Article
Training-Free Quantum Architecture Search Under Realistic Noise via Expressibility-Guided Evolution
by Seyedali Mousavi, Seyedhamidreza Mousavi, Paul Pettersson and Masoud Daneshtalab
Entropy 2026, 28(3), 330; https://doi.org/10.3390/e28030330 - 16 Mar 2026
Abstract
Designing noise-robust parameterized quantum circuits (PQCs) is a central challenge in the noisy intermediate-scale quantum (NISQ) regime. Existing quantum architecture search methods rely on training large SuperCircuits and evaluating SubCircuits under noisy execution, resulting in high computational cost and architecture assessments that depend on task-specific optimization and device noise. In this work, we propose a training-free quantum architecture search framework based on information-theoretic expressibility measures rather than performance-based estimators. We empirically show that noise-free KL-divergence-based expressibility exhibits a consistent monotonic association with noisy task loss across diverse circuit architectures and realistic hardware noise models. Leveraging this relationship, we introduce an expressibility-guided evolutionary search that requires neither SuperCircuit training nor noisy execution during the search phase. Since expressibility is evaluated independently of hardware noise, the method is inherently device-agnostic, enabling architectures to be reused across multiple quantum devices without re-running the search. Experiments using IBM-derived Qiskit noise models demonstrate that the proposed approach achieves competitive performance compared to SuperCircuit-based baselines, while substantially reducing computational cost. These results establish expressibility as an effective information-theoretic surrogate for ranking PQC architectures under realistic noise. Full article
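The KL-divergence-based expressibility used as the search surrogate compares a circuit's state-fidelity histogram against the Haar distribution, whose fidelity law for an N-dimensional Hilbert space is P(F) = (N−1)(1−F)^(N−2). The sketch below illustrates the computation; sampling real circuit fidelities would need a quantum simulator, so uniform samples stand in for a single-qubit (N = 2) Haar-random circuit, whose fidelity law is uniform. Bin count and sample size are our choices.

```python
# Sketch of KL-divergence expressibility (illustrative; binning and sampling
# choices are assumptions, and uniform samples stand in for N = 2 circuit
# fidelities, for which the Haar law is uniform on [0, 1]).
import math, random

def haar_bin_probs(n_dim, bins):
    """Haar fidelity mass per bin, from the CDF 1 - (1 - F)^(N-1)."""
    cdf = lambda f: 1.0 - (1.0 - f) ** (n_dim - 1)
    edges = [i / bins for i in range(bins + 1)]
    return [cdf(edges[i + 1]) - cdf(edges[i]) for i in range(bins)]

def kl_expressibility(fidelities, n_dim=2, bins=10):
    """KL(empirical fidelity histogram || Haar); smaller = more expressible."""
    q = haar_bin_probs(n_dim, bins)
    hist = [0] * bins
    for f in fidelities:
        hist[min(int(f * bins), bins - 1)] += 1
    p = [h / len(fidelities) for h in hist]
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

random.seed(0)
expr = kl_expressibility([random.random() for _ in range(20000)])
```

Because this score needs no noisy hardware execution, ranking architectures by it is what makes the search device-agnostic, as the abstract argues.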
(This article belongs to the Section Quantum Information)
17 pages, 1610 KB  
Article
GNN-MA: Soft Molecular Alignment with Cross-Graph Attention for Ligand-Based Virtual Screening
by Keling Liu, Dongmei Wei, Rui Shi and Zhiyuan Zhou
Molecules 2026, 31(6), 991; https://doi.org/10.3390/molecules31060991 - 16 Mar 2026
Abstract
Ligand-based virtual screening (LBVS) seeks strong early enrichment when searching ultra-large libraries, but practical screening often relies on 1D/2D descriptions while 3D information is expensive and uncertain due to conformer generation and alignment. We propose GNN-MA, a retrieval-style pairwise scoring model for query–candidate molecular pairs that uses molecular graphs as a unified representation. Built on intra-graph message passing, GNN-MA adds cross-graph attention to learn atom-level soft alignment that focuses on key substructures relevant to activity matching, and introduces a bond-to-atom semantic aggregation module to better exploit chemical bond cues for similarity scoring. The framework uses 2D molecular graphs derived from SMILES for retrieval-style matching and does not rely on explicit 3D conformational modeling or alignment. Experiments on DUD-E and LIT-PCBA show that GNN-MA achieves competitive overall discrimination (ROC-AUC) and, relative to its ablated variants, provides consistent gains in early-enrichment metrics (EF@1–5%) on DUD-E, while on LIT-PCBA the improvements are more target-dependent. The learned atom-level soft alignment also provides a qualitative interpretability cue in case studies. Throughput benchmarks suggest that GNN-MA is most suitable as a re-ranking/refinement model after a fast prefiltering stage. Full article
(This article belongs to the Section Computational and Theoretical Chemistry)
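The core idea in the abstract, atom-level soft alignment via cross-graph attention, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function name, the scaled dot-product attention form, and the cosine-based pooled score are all illustrative assumptions; the paper's model additionally uses message passing and bond-to-atom aggregation, which are omitted here.

```python
import numpy as np

def soft_alignment(query_atoms, cand_atoms):
    """Toy atom-level soft alignment via cross-graph attention.

    query_atoms: (n, d) embeddings for the query molecule's atoms
    cand_atoms:  (m, d) embeddings for the candidate molecule's atoms
    Returns the (n, m) attention matrix and a scalar similarity score.
    """
    d = query_atoms.shape[1]
    # Scaled dot-product similarity between every query/candidate atom pair
    logits = query_atoms @ cand_atoms.T / np.sqrt(d)
    # Row-wise softmax: each query atom distributes attention over candidate atoms
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    # Soft-aligned candidate representation for each query atom
    aligned = attn @ cand_atoms                              # (n, d)
    # Pool: mean cosine similarity between each atom and its aligned partner
    num = (query_atoms * aligned).sum(axis=1)
    den = np.linalg.norm(query_atoms, axis=1) * np.linalg.norm(aligned, axis=1) + 1e-8
    return attn, float((num / den).mean())
```

The attention matrix is what would provide the qualitative interpretability cue mentioned in the abstract: each row shows which candidate atoms a query atom "matches".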
22 pages, 2762 KB  
Article
Automated Classification of Medical Image Modality and Anatomy
by Jean de Smidt, Kian Anderson and Andries Engelbrecht
Algorithms 2026, 19(3), 222; https://doi.org/10.3390/a19030222 - 16 Mar 2026
Abstract
Radiological departments face challenges in efficiency and diagnostic consistency. The interpretation of radiographs remains highly variable between practitioners, which creates potential disparities in patient care. This study explores how artificial intelligence (AI), specifically transfer learning techniques, can automate parts of the radiological workflow to improve service quality and efficiency. Transfer learning methods were applied to various convolutional neural network (CNN) architectures and compared for classifying medical images across different modalities, i.e., X-rays, ultrasound, magnetic resonance imaging (MRI), and angiography, through a two-component model: medical image modality prediction and anatomical region prediction. Several publicly available datasets were combined to create a representative dataset to evaluate residual networks (ResNet), dense networks (DenseNet), efficient networks (EfficientNet), and the Swin Transformer (Swin-T). The models were evaluated through accuracy, precision, recall, and F1-score metrics with macro-averaging to account for class imbalance. The results demonstrate that lightweight transfer learning methods effectively classify medical imagery, with an accuracy of 97.21% on test data for the combined transfer learning pipeline. EfficientNet-B4 demonstrated the best performance on both components of the proposed pipeline and achieved a 99.6% accuracy for modality prediction and 99.21% accuracy for anatomical region prediction on unseen test data. This approach offers the potential for streamlined radiological workflows while maintaining diagnostic quality. The strong model performance across diverse modalities and anatomical regions indicates robust generalisability for practical implementation in clinical settings.
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
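The two-component pipeline described in the abstract (modality prediction followed by anatomical-region prediction) can be sketched as plain control flow. This is a hypothetical skeleton, not the paper's code: in the study each predictor would be a fine-tuned CNN such as EfficientNet-B4, whereas here the classifiers are injected as callables so only the pipeline structure is shown.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple
import numpy as np

@dataclass
class TwoStagePipeline:
    """Two-component classification: predict modality first, then anatomy.

    Both predictors map an image array to class probabilities. In the paper
    each would be a fine-tuned CNN; here they are plain callables.
    """
    modality_clf: Callable[[np.ndarray], np.ndarray]
    anatomy_clf: Callable[[np.ndarray], np.ndarray]
    modalities: List[str]
    regions: List[str]

    def predict(self, image: np.ndarray) -> Tuple[str, str]:
        # Component 1: which imaging modality produced this image?
        modality = self.modalities[int(np.argmax(self.modality_clf(image)))]
        # Component 2: which anatomical region does it show?
        region = self.regions[int(np.argmax(self.anatomy_clf(image)))]
        return modality, region
```

A usage sketch with dummy classifiers: `TwoStagePipeline(lambda img: np.array([0.1, 0.8, 0.1]), lambda img: np.array([0.7, 0.3]), ["X-ray", "MRI", "ultrasound"], ["chest", "abdomen"]).predict(img)` would return `("MRI", "chest")`.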
16 pages, 1673 KB  
Article
DeepSarcAE: A Deep Autoencoder Framework for Learning Gait Dynamics in the Detection of Sarcopenia
by Muthamil Balakrishnan, Janardanan Kumar, Jaison Jacob Mathunny, Varshini Karthik and Ashok Kumar Devaraj
Biophysica 2026, 6(2), 20; https://doi.org/10.3390/biophysica6020020 - 16 Mar 2026
Abstract
Sarcopenia is a degenerative musculoskeletal condition recognised as the age-related decline in skeletal muscle mass, strength, and function. Traditional diagnostic methods are limited by cost, accessibility, and subjectivity. This study aimed to develop a non-invasive, AI-driven, video-based framework for early Sarcopenia detection using functional movement analysis. Participants with and without Sarcopenia were recorded performing functional movements such as level walking, stair climbing, and ramp walking. Ten representative frames were extracted from each participant, resulting in 300 images (150 Sarcopenic, 150 non-Sarcopenic) utilised for the study. The DeepSarcAE model is an integrated framework of an autoencoder and a CNN-based classifier. Its performance was benchmarked against pretrained architectures such as EfficientNet, ResNet, MobileNet, Inception, VGG16 and four custom CNN models. Evaluation metrics such as sensitivity, specificity, precision, negative predictive value (NPV), accuracy, and AUC were used to analyse the models. DeepSarcAE outperformed all other models, attaining 100% sensitivity, 83.33% specificity, 85.71% precision, 100% NPV, 91.67% accuracy, and an AUC of 0.96. VGG16 and MobileNet followed the performance of DeepSarcAE closely, while the Inception network exhibited the weakest results due to poor generalisation. The DeepSarcAE framework offers a scalable, cost-effective, and non-invasive approach for Sarcopenia screening from the input gait image frames. Its promising preliminary performance highlights the potential of deep learning in early diagnosis and clinical decision support in preventive healthcare.
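The abstract's "autoencoder plus CNN-based classifier" design can be illustrated with a minimal numpy sketch: an encoder produces a latent code, a decoder reconstructs the input, and a classifier head reads the latent code. All names, layer sizes, and the use of simple linear layers are illustrative assumptions; the actual DeepSarcAE model is a trained deep network, not this untrained toy.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

class AutoencoderClassifier:
    """Toy autoencoder whose latent code feeds a small classifier head."""

    def __init__(self, n_in, n_latent, n_classes):
        s = 1.0 / np.sqrt(n_in)
        self.W_enc = rng.normal(0.0, s, (n_in, n_latent))   # encoder weights
        self.W_dec = rng.normal(0.0, s, (n_latent, n_in))   # decoder weights
        self.W_clf = rng.normal(0.0, s, (n_latent, n_classes))  # classifier head

    def encode(self, x):
        # Compress the input (e.g. a flattened gait frame) to a latent code
        return relu(x @ self.W_enc)

    def reconstruct(self, x):
        # Decoder output; reconstruction error drives autoencoder training
        return self.encode(x) @ self.W_dec

    def predict_proba(self, x):
        # Classifier head on the latent code (softmax over classes)
        logits = self.encode(x) @ self.W_clf
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
```

The design choice this shows: the same learned latent code serves both reconstruction (forcing it to capture gait structure) and classification (Sarcopenic vs. non-Sarcopenic).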