Search Results (174)

Search Parameters:
Keywords = iterative support detection

25 pages, 2957 KB  
Article
Automating the Detection of Evasive Windows Malware: An Evaluated YARA Rule Library for Anti-VM and Anti-Sandbox Techniques
by Sebastien Kanj, Gorka Vila and Josep Pegueroles
J. Cybersecur. Priv. 2026, 6(2), 69; https://doi.org/10.3390/jcp6020069 - 8 Apr 2026
Abstract
Anti-analysis techniques, also known as evasive techniques, enable Windows malware to detect and evade dynamic inspection environments, undermining the effectiveness of virtual-machine and sandbox-based inspection. Despite extensive prior research, no unified classification has been paired with a large-scale empirical evaluation of static detection capabilities for these behaviors. This paper addresses this gap by presenting a comprehensive classification and detection framework. We consolidate 94 anti-analysis techniques from academic, community, and threat-intelligence sources into nine mechanistic categories and derive corresponding YARA rules for static identification. In total, 82 YARA signatures were authored or refined and evaluated on 459,508 malware and 92,508 goodware samples. After iterative refinement using precision thresholds, 42 rules achieved high accuracy (≥75%), 16 showed moderate precision (50–75%), and 24 were discarded due to unreliability. The results indicate strong static detectability for firmware- and BIOS-based checks, but limited precision for timing-based evasions, which frequently overlap with benign behavior. Although YARA provides broad coverage of observable artifacts, its static nature limits detection under obfuscation or runtime mutation; our measurements therefore represent conservative estimates of technique prevalence. All validated rules are released in an open-source repository to support reproducibility, improve incident-response workflows, and strengthen prevention and mitigation against real-world threats. Future work will explore hybrid validation, container-evasion extensions, and forensic attribution based on signature co-occurrence patterns. Full article
(This article belongs to the Special Issue Intrusion/Malware Detection and Prevention in Networks—2nd Edition)
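The rule library itself is written in YARA; as a rough illustration of the kind of static artifact check such a rule encodes, here is a minimal Python sketch. The indicator strings are well-known VM/sandbox artifacts chosen for illustration only; they are not the rules evaluated in the paper.

```python
# Minimal sketch of a static anti-VM artifact scan, in the spirit of a YARA
# string rule. The indicator strings are well-known VM/sandbox artifacts used
# here for illustration; they are not the paper's evaluated rules.
ANTI_VM_ARTIFACTS = {
    b"VBoxGuest",    # VirtualBox guest driver name
    b"vmware",       # VMware identifier strings
    b"QEMU",         # QEMU/KVM device identifiers
    b"SbieDll.dll",  # Sandboxie injection DLL
}

def scan_for_vm_artifacts(sample: bytes) -> set:
    """Return the known VM-indicator strings found in the sample (case-insensitive)."""
    lowered = sample.lower()
    return {sig for sig in ANTI_VM_ARTIFACTS if sig.lower() in lowered}
```

A real YARA rule would list the same byte strings in a `strings:` section with a `condition:` clause; the Python version only mirrors the matching logic.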

19 pages, 5823 KB  
Article
A Human-Centric AI-Enabled Ecosystem for SME Cybersecurity: Cross-Sectoral Practices and Adaptation Framework for Maritime Defence
by Kitty Kioskli, Eleni Seralidou, Wissam Mallouli, Dimitrios Koutras, Pedro Tomás and Dimitrios Kallergis
Electronics 2026, 15(7), 1520; https://doi.org/10.3390/electronics15071520 - 4 Apr 2026
Viewed by 250
Abstract
Artificial intelligence (AI) is increasingly integrated into cybersecurity tools to improve threat detection, anomaly identification, and incident response. However, organisations, particularly small- and medium-sized enterprises (SMEs), often struggle to discover, evaluate, and effectively use AI-enabled cybersecurity solutions due to skills gaps, usability challenges, and fragmented tool ecosystems. This paper presents the advaNced cybErsecurity awaReness ecOsystem for SMEs (NERO), a human-centric cybersecurity ecosystem that combines a cybersecurity marketplace with a competency-based training and awareness platform to support the practical adoption of advanced cybersecurity technologies. The NERO Marketplace enables structured discovery, comparison, and assessment of cybersecurity tools based on usability, operational relevance, and competency alignment. Complementing this, the NERO Training Platform delivers modular, multi-modal training aligned with the European Cybersecurity Skills Framework (ECSF) to develop the human competencies required to operate advanced cybersecurity systems. This study contributes a socio-technical framework that addresses the gap between AI tool availability and organisational readiness through ECSF role-based competency mapping and iterative design-based evaluation. The platform targets technical roles like Cybersecurity Implementer to ensure training is aligned with the operational requirements of critical infrastructure protection. Results from cross-sector SME training activities show measurable improvements in cybersecurity awareness, knowledge, and user satisfaction, with knowledge gains exceeding 30% in some modules. Finally, the paper provides a structural mapping of these cross-sectoral results to the maritime defence domain, specifically addressing legacy OT systems and intermittent connectivity constraints. Full article

25 pages, 4125 KB  
Article
A Hybrid AVT-FVT Approach for Sensor Optimization in Structural Health Monitoring
by Michele Paoletti, Giovanni Paragliola and Carmelo Mineo
J. Sens. Actuator Netw. 2026, 15(2), 31; https://doi.org/10.3390/jsan15020031 - 1 Apr 2026
Viewed by 251
Abstract
This study presents a structured methodology for optimizing the placement and selection of accelerometer sensors for structural health monitoring in civil infrastructures. The approach integrates both ambient and forced vibration testing data, followed by a unified analysis of sensor energy distribution through singular value decomposition of the cross power spectral density. The energy associated with each sensor is normalized and decomposed into its vertical, longitudinal, and transversal components, allowing for detailed ranking and visualization across different structural elements such as the deck and supporting piers. A comparative analysis between the energy distributions obtained from ambient and forced vibrations is conducted to identify consistent sensor locations. The sensor configuration is then iteratively refined using a combination of global dynamic criteria and local spatial constraints to ensure both stability and optimal spatial distribution. The resulting framework enables the systematic design of sensor layouts that combine energy-based robustness with optimal spatial distribution across all three spatial components, while significantly reducing the number of required sensors, ensuring the preservation of damage detection capability and long-term structural health monitoring. Full article
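The core ranking step, singular value decomposition of the cross power spectral density followed by per-sensor energy normalization, can be sketched as below. This is a simplified single-block CPSD estimate without windowing or segment averaging; the function name and interface are illustrative, not the authors' code.

```python
import numpy as np

def sensor_energy_ranking(acc: np.ndarray) -> np.ndarray:
    """Normalized per-sensor energy from the dominant singular vector of the
    cross power spectral density, summed over frequencies.

    acc: array of shape (n_samples, n_sensors) with acceleration records.
    Simplified sketch: one FFT block, no windowing or segment averaging.
    """
    n, m = acc.shape
    spectra = np.fft.rfft(acc - acc.mean(axis=0), axis=0)  # (n_freq, m)
    energy = np.zeros(m)
    for row in spectra:
        x = row[:, None]                       # (m, 1) sensor spectra at this frequency
        csd = (x @ x.conj().T) / n             # rank-1 CPSD estimate
        u, s, _ = np.linalg.svd(csd)
        energy += s[0] * np.abs(u[:, 0]) ** 2  # dominant-mode sensor weights
    return energy / energy.sum()
```

Sensors with larger entries in the returned vector contribute more energy to the dominant mode and are kept; in the paper this global criterion is then combined with local spatial constraints.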

27 pages, 1147 KB  
Article
Reducing Information Asymmetry in Software Product Management: An LLM-Based Reverse Engineering Framework
by Emre Surk, Gonca Gokce Menekse Dalveren and Mohammad Derawi
Appl. Sci. 2026, 16(6), 2801; https://doi.org/10.3390/app16062801 - 14 Mar 2026
Viewed by 351
Abstract
Although the transition from the Waterfall model to Agile practices has accelerated software delivery, it has often weakened documentation practices, contributing to persistent information asymmetry between Product Managers and Developers. This study introduces an LLM-based reverse engineering framework designed to assist product management workflows by analyzing source code and generating enriched development tickets. The proposed Interactive Product Management Assistant leverages the long-context capabilities of Gemini 1.5 Pro together with a context-caching mechanism to analyze large codebases, identify ambiguities in product requests, highlight potential edge cases, detect possible cascading dependencies (“domino effects”), and generate code pointers that guide developers to relevant implementation areas. The framework was evaluated through case studies on several open-source projects, including WordPress, ERPNext, Ghost, and Odoo. The results suggest that the system can support requirement clarification, improve visibility of potential implementation impacts, and reduce exploratory effort during code analysis. In addition, the implemented preprocessing and caching mechanisms reduce analysis costs and improve operational efficiency during iterative interactions. Rather than providing a large-scale quantitative before-and-after comparison, this paper presents a qualitative case study and a proof-of-concept implementation to demonstrate the feasibility of the proposed approach. Overall, the findings demonstrate the feasibility of using LLM-assisted reverse engineering to support requirements analysis and product–developer collaboration, highlighting the potential of AI-based tools to complement traditional requirements engineering practices in complex software projects. Full article

16 pages, 3686 KB  
Article
Genome-Wide Association Study on Lodging Resistance-Related Traits in Oats
by Lijun Zhao, Rui Yang, Yantian Deng, Xiaopeng Zhang, Lijun Shi, Bai Du, Mengya Liu, Junmei Kang, Xiao Li and Tiejun Zhang
Plants 2026, 15(6), 861; https://doi.org/10.3390/plants15060861 - 11 Mar 2026
Viewed by 323
Abstract
Oat (Avena sativa L.) is an essential dual-purpose grain and forage crop, and lodging resistance is a key factor directly impacting its yield and quality. Therefore, breeding new oat varieties with lodging resistance is important for increasing crop productivity and economic benefits. Seven lodging resistance-related traits, namely plant height (PH), fresh weight of single stem (FWSS), length of the basal second internode (LBSI), diameter of the basal second internode (DBSI), wall thickness of the basal second internode (WTBSI), stem breaking strength (SBS), and stalk puncture strength (SPS), were investigated in 130 oat germplasm accessions at two experimental sites over one year. All seven traits showed continuous distributions typical of quantitative traits. A total of 36,928,068 high-quality single-nucleotide polymorphisms (SNPs) generated from whole-genome resequencing were used for a genome-wide association study (GWAS). Based on the BLINK (Bayesian-information and Linkage-disequilibrium Iteratively Nested Keyway) model threshold (−log10(P) ≥ 6), 379 quantitative trait nucleotides (QTNs) associated with lodging resistance-related traits were identified: 38, 34, 78, 66, 55, 18, and 94 QTNs were associated with PH, FWSS, SBS, SPS, LBSI, DBSI, and WTBSI, respectively. Notably, three QTNs associated with FWSS and one QTN associated with SBS were stably detected across both environments, representing valuable markers for molecular breeding. From these loci, 54 candidate genes were annotated; ranked by the number of candidate genes per trait, LBSI contained the most (14), followed by WTBSI (12), SPS (11), SBS (7), PH (5), and FWSS (5). Our findings provide critical support for analyzing the genetic mechanism of oat lodging resistance and offer a material and theoretical basis for the subsequent development of molecular markers and the breeding of new lodging-resistant oat varieties. Full article
(This article belongs to the Special Issue Cereal Crop Breeding, 2nd Edition)
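The BLINK significance cutoff described above reduces to a −log10 transform and a filter; a minimal sketch, with hypothetical SNP IDs and p-values:

```python
import math

def significant_qtns(pvalues: dict, threshold: float = 6.0) -> list:
    """Return SNP IDs whose -log10(p) meets the significance cutoff.

    Sketch of the -log10(P) >= 6 criterion used in the study; the SNP IDs
    and p-values passed in below are hypothetical.
    """
    return [snp for snp, p in pvalues.items() if -math.log10(p) >= threshold]

# A p-value of 2e-7 gives -log10(p) of about 6.7 and passes; 1e-5 gives 5.0 and does not.
hits = significant_qtns({"snp_chr1_1042": 2e-7, "snp_chr3_0887": 1e-5})
```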

37 pages, 984 KB  
Article
Co-Explainers: A Position on Interactive XAI for Human–AI Collaboration as a Harm-Mitigation Infrastructure
by Francisco Herrera, Salvador García, María José del Jesus, Luciano Sánchez and Marcos López de Prado
Mach. Learn. Knowl. Extr. 2026, 8(3), 69; https://doi.org/10.3390/make8030069 - 10 Mar 2026
Viewed by 733
Abstract
Human–AI collaboration (HAIC) increasingly mediates high-risk decisions in public and private sectors, yet many documented AI harms arise not only from model error but from breakdowns in joint human–AI work: miscalibrated reliance, impaired contestability, misallocated agency, and governance opacity. Conventional explainable AI (XAI) approaches, often delivered as static one-shot artifacts, are poorly matched to these sociotechnical dynamics. This paper is a position paper arguing that explainability should be reframed as a harm-mitigation infrastructure for HAIC: an interactive, iterative capability that supports ongoing sensemaking, safe handoffs of control, governance stakeholder roles and institutional accountability. We introduce co-explainers as a conceptual framework for interactive XAI, in which explanations are co-produced through structured dialogue, feedback, and governance-aware escalation (explain → feedback → update → govern). To ground this position, we synthesize prior harm taxonomies into six HAIC-oriented harm clusters and use them as heuristic design lenses to derive cluster-specific explainability requirements, including uncertainty communication, provenance and logging, contrastive “why/why-not” and counterfactual querying, role-sensitive justification, and recourse-oriented interaction protocols. We emphasize that co-explainers do not “mitigate” sociotechnical harms in isolation; rather, they provide an interface layer that makes harms more detectable, decisions more contestable, and accountability handoffs more operational under realistic constraints such as sealed models, dynamic updates, and value pluralism. We conclude with an agenda for evaluating co-explainers and aligning interactive XAI with governance frameworks in real-world HAIC deployments. Full article
(This article belongs to the Section Learning)

11 pages, 1219 KB  
Article
Application of the Novel Two-Compartmental Model to Quantify Coronary Artery Calcium: A Pilot Study
by Yu-Tai Shih, Zhe-Yu Lin and Jay Wu
J. Clin. Med. 2026, 15(5), 1997; https://doi.org/10.3390/jcm15051997 - 5 Mar 2026
Viewed by 291
Abstract
Background: Cardiovascular disease (CVD) remains a major global health concern and the leading cause of mortality and disability. Early detection and prevention strategies rely heavily on evaluating coronary artery calcification, traditionally assessed using the coronary artery calcium score (CACS). However, CACS is limited by its dependence on strictly fixed tube voltage and slice thickness, sensitivity to changes in scanning parameters, and the need for an additional dedicated coronary calcium scan that increases radiation exposure. Methods: To address these challenges, we developed a novel two-compartment coronary artery calcium score system (TACS) for quantitative calcium assessment. TACS was established and validated using a QRM Thorax phantom scanned on a GE Revolution CT at 70–140 kVp. Volumetric calcium density (VCD) derived from TACS was compared with conventional CACS under varying slice thickness, pitch, and iterative reconstruction algorithms. Additionally, coronary artery calcium scans from 15 patients were retrospectively analyzed to assess correlations between TACS and CACS. Results: TACS demonstrated stable performance across tube voltages, with VCD errors ranging from 3.8% to −19.0% and maintained consistency under different slice thicknesses (23.9% to −2.3%) and reconstruction algorithms, showing near-zero residual percentages. Patient analyses revealed a strong correlation between TACS and CACS (r = 0.932). Conclusions: These findings suggest that TACS provides robust and reliable quantification of coronary calcium, supporting its potential use for opportunistic coronary artery disease screening, particularly in routine CT imaging. Further studies with larger cohorts are warranted to confirm its clinical applicability. Full article
(This article belongs to the Special Issue Advances in Cardiovascular Computed Tomography (CT))
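The reported TACS-versus-CACS agreement is a standard Pearson correlation; as a worked illustration with hypothetical paired values (the study's patient data are not reproduced here):

```python
import numpy as np

# Hypothetical paired scores for illustration only; the study itself reports
# r = 0.932 between TACS-derived VCD and conventional CACS in 15 patients.
tacs = np.array([12.0, 40.5, 88.0, 150.2, 310.7])
cacs = np.array([10.0, 45.0, 92.0, 160.0, 300.0])
r = np.corrcoef(tacs, cacs)[0, 1]  # Pearson correlation coefficient
```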

20 pages, 3984 KB  
Article
Mispronunciation Detection and Diagnosis for Non-Native Korean Learners Using Iterative Pseudo-Label Refinement Based on Self-Supervised Learning
by Na Geng, Hee-Jung Na, Jiyu Won, Dong-Gyu Kim and Jeong-Sik Park
Appl. Sci. 2026, 16(5), 2426; https://doi.org/10.3390/app16052426 - 2 Mar 2026
Viewed by 331
Abstract
Accurate Mispronunciation Detection and Diagnosis (MDD) for non-native Korean learners is critical for effective pronunciation feedback, but it is hindered by the lack of training labels that reflect learners’ actual pronunciations. This paper presents a pseudo-label generation framework that fine-tunes Whisper to output pronunciation-oriented sequences, supported by data-quality management and iterative label refinement. We convert orthographic transcripts into pronunciation targets using an existing Grapheme-to-Phoneme (G2P) tool to reduce reliance on standard written forms, and apply multi-stage refinement with cross-model agreement validation under progressively adjusted thresholds to filter unreliable pseudo-labels. We further improve robustness by incorporating larger and more diverse non-native speech corpora and by applying dataset-specific preprocessing, including noise removal, duration-based selection, and duplicate control. Evaluation on a manually annotated test set of actual learner pronunciations shows that models trained with refined pseudo-labels achieved a lower Phoneme Error Rate (PER) and performed better than the baseline model on MDD. Overall, the proposed framework enables practical MDD for non-native Korean speech without requiring large-scale manual phoneme annotation. Full article
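The Phoneme Error Rate used for evaluation is the standard Levenshtein edit distance normalized by reference length; a generic sketch (not the authors' implementation), shown with illustrative romanized phoneme sequences:

```python
def phoneme_error_rate(ref: list, hyp: list) -> float:
    """PER = (substitutions + deletions + insertions) / len(ref), computed
    with the standard Levenshtein dynamic program. Generic sketch of the
    metric reported in the paper, not the authors' implementation."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # delete all remaining ref phonemes
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insert all remaining hyp phonemes
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, a single vowel substitution in a five-phoneme reference yields a PER of 0.2.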

27 pages, 16399 KB  
Article
HiFrAMes: A Framework for Hierarchical Fragmentation and Abstraction of Molecular Graphs
by Yuncheng Yu, Max A. Smith, Haidong Wang and Jyh-Charn Liu
AI 2026, 7(2), 71; https://doi.org/10.3390/ai7020071 - 13 Feb 2026
Cited by 1 | Viewed by 753
Abstract
Recent advances in computational chemistry, machine learning, and large-scale virtual screening have rapidly expanded the accessible chemical space, increasing the need for interpretable molecular representations that capture the hierarchical topological structure of molecules. Existing formats, such as Simplified Molecular Input Line Entry System (SMILES) strings and MOL files, effectively encode molecular graphs but provide limited support for representing the multi-level structural information needed for complex downstream tasks. To address these challenges, we introduce HiFrAMes, a novel graph–theoretic hierarchical molecular fragmentation framework that decomposes molecular graphs into chemically meaningful substructures and organizes them into hierarchical scaffold representations. HiFrAMes is implemented as a four-stage pipeline consisting of leaf and ring chain extraction, ring mesh reduction, ring enumeration, and linker detection, which iteratively transforms raw molecular graphs into interpretable abstract objects. The framework decomposes molecules into chains, rings, linkers, and scaffolds while retaining global topological relationships. We apply HiFrAMes to both complex and drug-like molecules to generate molecular fragments and scaffold representations that capture structural motifs at multiple levels of abstraction. The resulting fragments are evaluated using selection criteria established in the fragment-based drug discovery literature and qualitative case studies to demonstrate their suitability for downstream computational tasks. Full article

30 pages, 10747 KB  
Article
Digital Twin Framework for Cutterhead Design and Assembly Process Simulation Optimization for TBM
by Abubakar Sharafat, Waqas Arshad Tanoli, Sung-hoon Yoo and Jongwon Seo
Appl. Sci. 2026, 16(4), 1865; https://doi.org/10.3390/app16041865 - 13 Feb 2026
Viewed by 390
Abstract
With the rapid advancement of information technology, digital twins and smart assembly process simulation have become integral to the design and manufacturing of high-precision products. However, conventional Tunnel Boring Machine (TBM) cutterhead design and on-site assembly planning remain largely experience-driven and fragmented, with limited interoperability between geological characterization, structural verification, and constructability validation. This study proposes a digital twin-driven framework for TBM cutterhead design optimization and assembly process simulation that integrates geology-aware design inputs, BIM-based information modelling, FEM-based structural assessment, and immersive virtual environments within a unified virtual–physical workflow. To ensure consistent data exchange across platforms, an IFC4.3-compliant ontology is established using a non-intrusive property-set (Pset) extension strategy to represent cutterhead components, geological parameters, FEM load cases/results, and assembly tasks. Tunnel-scale stress analysis and cutter–rock interaction modelling are used to define project-representative cutter loading envelopes, which are mapped to a high-fidelity cutterhead FEM model for iterative structural refinement. The optimized configuration is then transferred to a game-engine/VR environment to support full-scale design inspection and assembly rehearsal, followed by manufacturing and field deployment with bidirectional feedback. To validate the proposed framework, an implementation case study of a deep hard-rock tunnelling project is presented in which five design iterations were tracked across BIM–FEM–VR and nine constructability issues were detected and resolved prior to assembly. The results indicate that the proposed digital twin approach strengthens traceability from geology to loading to structural response, reduces localized stress concentration at critical interfaces, and improves assembly readiness for complex tunnelling projects. Full article
(This article belongs to the Special Issue Surface and Underground Mining Technology and Sustainability)

14 pages, 1935 KB  
Article
The Cardiologist Driving Synthetic AI: The TIMA Method for Clinically Supervised Synthetic Data Generation
by Gianmarco Parise, Roberto Ceravolo, Fabiana Lucà, Michele Massimo Gulizia, Cecilia Tetta, Orlando Parise, Federico Nardi, Massimo Grimaldi and Sandro Gelsomino
J. Clin. Med. 2026, 15(4), 1351; https://doi.org/10.3390/jcm15041351 - 9 Feb 2026
Viewed by 334
Abstract
Background/Objectives: Synthetic artificial intelligence (AI) is increasingly used in cardiovascular medicine to generate realistic clinical data from limited samples while preserving patient privacy. Despite its promise, concerns remain regarding the clinical reliability of synthetic datasets, which hampers their integration into routine practice. This article introduces the TIMA method (Team-Implementation Multidisciplinary Approach), designed to involve clinicians directly in every phase of synthetic data development. The objective of this work is to describe the TIMA framework and to illustrate how structured clinician–data scientist collaboration can enhance the clinical robustness and plausibility of synthetic AI outputs. Methods: The TIMA approach structures the synthetic data generation workflow around continuous interaction between clinicians and data scientists. Cardiologists define clinical constraints, verify inter-variable relationships, and assess the coherence and plausibility of generated records. The framework is illustrated through multiple cardiology use cases, including atrial fibrillation risk prediction and surgical mortality estimation in infective endocarditis, to demonstrate its adaptability across different clinical contexts. Each phase includes iterative validation steps aimed at ensuring alignment with established clinical knowledge rather than reporting quantitative performance outcomes. Results: Application of the TIMA framework supported the development of synthetic datasets that adhered more closely to clinical logic and domain-specific constraints. Clinician–data scientist collaboration enabled early detection of implausible variable interactions, improved interpretability of synthetic data patterns, and enhanced internal consistency across different cardiology-oriented scenarios. Conclusions: TIMA represents a scalable and replicable methodological model for integrating synthetic AI into cardiology by embedding clinical expertise throughout the data generation process. Its structured, multidisciplinary workflow supports the production of synthetic data that is not only statistically coherent but also clinically meaningful, thereby strengthening trust and reliability in AI-assisted cardiovascular research. Full article

31 pages, 2905 KB  
Article
HIV Membrane-Proximal External Region Scaffolded Immunogen as Killed Whole-Cell Genome-Reduced Vaccines
by Juan Sebastian Quintero-Barbosa, Yufeng Song, Frances Mehl, Shubham Mathur, Lauren Livingston, Peter D. Kwong, Xiaoying Shen, David C. Montefiori and Steven L. Zeichner
Viruses 2026, 18(2), 209; https://doi.org/10.3390/v18020209 - 5 Feb 2026
Viewed by 937
Abstract
Background: Killed Whole Cell Genome-Reduced Bacteria (KWC/GRB), a versatile vaccine platform, can produce very low cost, thermostable, easily manufactured vaccines expressing complex immunogens that include potent immunomodulators. This system supports iterative optimization through a Design–Build–Test–Learn (DBTL) workflow aimed at enhancing immunogenicity. We applied this approach to developing HIV-1 gp41 Membrane-Proximal External Region (MPER) vaccines using the scaffolded MPER antigen, 3AGJ, a recombinant heterologous protein engineered to mimic MPER structures recognized by broadly neutralizing monoclonal antibodies (bNAbs). Methods: Five KWC/GRB vaccines expressing versions of 3AGJ were designed, including versions linked to immunomodulators and multimers of the immunogen. Display on the surface of the bacteria was evaluated by flow cytometry using the broadly neutralizing monoclonal antibody 2F5. Outbred HET3 mice were vaccinated intramuscularly, and MPER-specific antibody responses were assessed by ELISA and by the ability of the vaccines to induce neutralizing antibodies. Neutralization was measured against tier 1 and tier 2 HIV-1 pseudoviruses. Results: All five vaccines were strongly expressed on the bacterial surface and induced clear MPER-specific antibody responses in every mouse. About 33% of the animals showed detectable HIV-1 neutralization. Conclusions: These results demonstrate that a KWC/GRB-based scaffold-MPER (3AGJ) vaccine can elicit HIV-1 neutralizing antibodies in a subset of animals. Although further optimization will be required to improve the consistency and magnitude of neutralizing responses, the findings provide an initial validation of the concept. There are many strategies that can be used to enhance and extend immune responses induced by KWC/GRB vaccines that can be employed to yield improved anti-HIV-1 immune responses. Full article
(This article belongs to the Section Viral Immunology, Vaccines, and Antivirals)

22 pages, 8660 KB  
Article
Detection of Hidden Pest Rice Weevil (Sitophilus oryzae) in Wheat Kernels Using Hyperspectral Imaging
by Lei Yan, Taoying Luo, Chao Zhao, Honglin Ma, Yufei Wu, Chunqi Bai and Zibo Zhu
Foods 2026, 15(3), 566; https://doi.org/10.3390/foods15030566 - 5 Feb 2026
Viewed by 328
Abstract
The rice weevil (Sitophilus oryzae) is a major pest in stored wheat, and traditional detection methods face challenges in identifying its hidden life stages within kernels. This study develops a nondestructive method to detect S. oryzae infestation in wheat kernels using hyperspectral imaging, spectral preprocessing, feature extraction, and classification modeling. Hyperspectral data were collected from wheat kernels at different infestation stages (1, 11, 21, and 25 days (d)) and from healthy kernels. Spectral quality was optimized using Savitzky–Golay (SG) smoothing, multiplicative scatter correction (MSC), and standard normal variate (SNV) transformation. Feature extraction algorithms, including the Competitive Adaptive Re-weighting Algorithm (CARS), Successive Projection Algorithm (SPA), and Iterative Retention of Information Variables (IRIV), were used to reduce data dimensionality, while classification models such as Decision Tree (DT), K-nearest neighbors (KNN), and Support Vector Machine (SVM) were applied. The results show that MSC preprocessing provides the best performance among the models. After feature band selection, the MSC-CARS-SVM model achieved the highest accuracy for the 1 d and 25 d samples (95.48% and 96.61%, respectively). For the 11 d and 21 d samples, the MSC-IRIV-SPA-SVM model achieved the best performance, with accuracies of 94.35% and 94.92%, respectively. This study demonstrates that MSC effectively reduces spectral noise and improves classification performance, and that feature selection yields significant improvements in both accuracy and stability. The findings confirm the feasibility of using hyperspectral technology to distinguish healthy from S. oryzae-infested wheat kernels, providing theoretical support for early, nondestructive pest detection. Full article
(This article belongs to the Section Food Analytical Methods)
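The MSC preprocessing step named in the abstract can be sketched as follows. This is a minimal illustration on synthetic spectra, not the authors' implementation; the array shapes and the choice of the mean spectrum as reference are assumptions.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum
    against a reference (by default, the mean spectrum) and remove
    the fitted additive offset and multiplicative slope."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)  # slope first, then intercept
        corrected[i] = (s - intercept) / slope
    return corrected

# Synthetic check: two "spectra" that are scatter-distorted copies of the
# same underlying signal collapse back onto that signal after MSC.
ref = np.linspace(1.0, 10.0, 50)
spectra = np.vstack([2.0 * ref + 3.0, 0.5 * ref - 1.0])
corrected = msc(spectra, reference=ref)
print(np.allclose(corrected, np.vstack([ref, ref])))  # True
```

The corrected spectra would then feed a band-selection step (e.g., CARS or SPA) before classification with an SVM.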

28 pages, 5323 KB  
Article
Design and Simulation Analysis of a Temperature Control System for Real-Time Quantitative PCR Instruments Based on Key Hot Air Circulation and Temperature Field Regulation Technologies
by Zhe Wang, Yue Zhao, Yan Wang, Chunxiang Shi, Zizhao Zhao, Qimeng Chen, Lemin Shi, Xiangkai Meng, Hao Zhang and Yuanhua Yu
Micromachines 2026, 17(2), 169; https://doi.org/10.3390/mi17020169 - 28 Jan 2026
Abstract
To address the technical bottlenecks commonly encountered in real-time quantitative PCR instruments, such as insufficient ramp rates and uneven chamber temperature distribution, this study proposes an innovative design scheme for a temperature control system that incorporates key hot air circulation and temperature field regulation technologies. Drawing on the PCR instrument's working principle and structural characteristics, the failure mechanisms associated with the temperature control system are systematically analyzed, and a reliability-oriented thermodynamic analysis model is constructed to clarify the functional positioning of core components and to systematically test airflow uniformity, temperature dynamics, and nucleic acid amplification efficiency. An integrated fixture combining an airflow rectifier and cruciform frames is designed, enabling precise quantitative characterization of system temperature uniformity, ramp rates, and amplification efficiency on a multi-condition comparison platform. Through modeling analysis combined with experimental validation, the thermal performance differences among various heating chamber structures are compared, leading to a multidimensional optimization of the temperature control system. The test results demonstrate outstanding core performance metrics for the optimized system: the heating ramp rate reaches 7.5 ± 0.1 °C/s, the cooling ramp rate reaches 13.5 ± 0.1 °C/s, and the steady-state temperature deviation is only ±0.1 °C. The total duration of 35 PCR cycles is 16.3 ± 0.6 min, with a nucleic acid amplification efficiency of 98.9 ± 0.2%. These core performance metrics comprehensively surpass those of mainstream commercial counterparts. The developed temperature control system is well suited for practical applications such as rapid detection, providing critical technological support for the iterative upgrading of nucleic acid amplification techniques and laying a solid foundation for the engineering development of high-performance PCR instruments. Full article
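As a back-of-the-envelope check on the reported timings, the ramp rates above imply a per-cycle transition time that can be computed directly. The denaturation/annealing temperatures and hold times below are illustrative assumptions for a two-step protocol, not values from the paper; only the 7.5 and 13.5 °C/s ramp rates come from the abstract.

```python
def cycle_time(t_high, t_low, up_rate, down_rate, hold_high, hold_low):
    """Seconds per two-step PCR cycle: one heating ramp, one cooling
    ramp, plus the holds at the high and low temperatures."""
    span = t_high - t_low
    return span / up_rate + span / down_rate + hold_high + hold_low

# Ramp rates from the reported system; temperatures and hold times are
# hypothetical (95 °C denaturation, 60 °C annealing/extension).
per_cycle = cycle_time(t_high=95.0, t_low=60.0,
                       up_rate=7.5, down_rate=13.5,
                       hold_high=5.0, hold_low=15.0)
total_min = 35 * per_cycle / 60
print(f"{per_cycle:.1f} s/cycle, {total_min:.1f} min for 35 cycles")
```

With these assumed holds the estimate lands near 16 min for 35 cycles, consistent with the reported 16.3 ± 0.6 min.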

25 pages, 995 KB  
Article
Design Requirements of a Novel Wearable System for Safety and Performance Monitoring in Women’s Soccer
by Denise Bentivoglio, Giulia Maria Castiglioni, Cecilia Mazzola, Alice Viganò and Giuseppe Andreoni
Appl. Sci. 2026, 16(3), 1259; https://doi.org/10.3390/app16031259 - 26 Jan 2026
Abstract
Women's soccer is rapidly becoming a widely practiced sport at all levels, opening up a new demand for systems that protect athletes from head impacts or monitor their effects. The market offers some solutions in similar sports, but the specificity and high relevance of soccer encourage the development of a dedicated solution. From market analysis, technology scouting, and ethnographic research, a set of functional and technical requirements has been defined and proposed. The designed instrumented headband is equipped with one Inertial Measurement Unit (IMU) in the occipital area and four contact pressure sensors on the sides. The concept design is low-cost and open-architecture, prioritizing accessibility over complexity. Its modularity also ensures that each component (sensing, battery, communication) can be replaced or upgraded independently, enabling iterative refinement and integration into future sports safety systems. Beyond safety monitoring for injury prevention and detection of traumatic impacts, the system is relevant for performance monitoring, rehabilitation, post-injury recovery, and other important applications. System engineering has started, and the next step is building prototypes for testing and validation. Full article
(This article belongs to the Special Issue Wearable Devices: Design and Performance Evaluation)
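A threshold detector over the IMU's acceleration magnitude is one plausible building block for the impact-detection function described above. The threshold, refractory window, and synthetic trace below are illustrative assumptions, not the authors' design.

```python
import numpy as np

def detect_impacts(accel, fs, threshold_g=10.0, refractory_s=0.1):
    """Flag head-impact candidates where the acceleration magnitude
    crosses a threshold, suppressing re-triggers inside a refractory
    window. `accel` is an (N, 3) array in g; `fs` is the rate in Hz."""
    mag = np.linalg.norm(accel, axis=1)
    refractory = int(refractory_s * fs)
    events, last = [], -refractory
    for i in np.flatnonzero(mag > threshold_g):
        if i - last >= refractory:
            events.append(int(i))
            last = i
    return events

# Synthetic trace: 1 s at 1 kHz of 1 g gravity plus sensor noise,
# with one impact-like 25 g spike at the 0.5 s mark.
fs = 1000
accel = np.random.default_rng(1).normal(0, 0.05, size=(fs, 3)) + [0, 0, 1]
accel[500] = [0, 0, 25]
print(detect_impacts(accel, fs))  # [500]
```

The refractory window keeps one physical impact from being counted multiple times as the magnitude oscillates around the threshold.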
