Search Results (837)

Search Parameters:
Keywords = search query

24 pages, 1353 KB  
Article
SLTP: A Symbolic Travel-Planning Agent Framework with Decoupled Translation and Heuristic Tree Search
by Debin Tang, Qian Jiang, Jingpu Yang, Jingyu Zhao, Xiaofei Du, Miao Fang and Xiaofei Zhang
Electronics 2026, 15(2), 422; https://doi.org/10.3390/electronics15020422 (registering DOI) - 18 Jan 2026
Abstract
Large language models (LLMs) demonstrate outstanding capability in understanding natural language and show great potential in open-domain travel planning. However, when confronted with multi-constraint itineraries, personalized recommendations, and scenarios requiring rigorous external information validation, pure LLM-based approaches lack rigorous planning ability and fine-grained personalization. To address these gaps, we propose the Symbolic LoRA Travel Planner (SLTP) framework—an agent architecture that combines a two-stage symbol-rule LoRA fine-tuning pipeline with a user multi-option heuristic tree search (MHTS) planner. SLTP decomposes the entire process of transforming natural language into executable code into two specialized, sequential LoRA experts: the first maps natural-language queries to symbolic constraints with high fidelity; the second compiles symbolic constraints into executable Python planning code. After reflective verification, the generated code serves as constraints and heuristic rules for an MHTS planner that preserves diversified top-K candidate itineraries and uses pruning plus heuristic strategies to maintain search-time performance. To overcome the scarcity of high-quality intermediate symbolic data, we adopt a teacher–student distillation approach: a strong teacher model generates high-fidelity symbolic constraints and executable code, which we use as hard targets to distill knowledge into an 8B-parameter Qwen3-8B student model via two-stage LoRA. On the ChinaTravel benchmark, SLTP using an 8B student achieves performance comparable to or surpassing that of other methods built on DeepSeek-V3 or GPT-4o as a backbone. Full article
(This article belongs to the Special Issue AI-Powered Natural Language Processing Applications)
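For readers unfamiliar with the search component described in this abstract, the sketch below shows the general shape of a top-K heuristic tree search (beam-style expansion with constraint pruning). It is a generic illustration under our own naming (`mhts`, `heuristic`, and `satisfies_constraints` are placeholders), not the authors' SLTP/MHTS code.

```python
# Illustrative top-K heuristic tree search for itinerary planning.
# Not the authors' SLTP/MHTS implementation; a generic sketch of the idea.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Node:
    itinerary: List[str]            # activities chosen so far
    score: float = 0.0              # heuristic value of the partial plan

def mhts(
    candidates: List[str],
    heuristic: Callable[[List[str]], float],
    satisfies_constraints: Callable[[List[str]], bool],
    depth: int,
    top_k: int = 5,
) -> List[Node]:
    """Expand partial itineraries level by level, keeping only the top-K
    highest-scoring nodes at each level (beam-style pruning)."""
    frontier = [Node(itinerary=[])]
    for _ in range(depth):
        children = []
        for node in frontier:
            for act in candidates:
                plan = node.itinerary + [act]
                if not satisfies_constraints(plan):   # hard constraints prune early
                    continue
                children.append(Node(plan, heuristic(plan)))
        if not children:
            break
        # keep a diversified top-K candidate set instead of a single best plan
        frontier = sorted(children, key=lambda n: n.score, reverse=True)[:top_k]
    return frontier

# Toy usage: prefer plans without repeated activities.
plans = mhts(
    candidates=["museum", "park", "restaurant"],
    heuristic=lambda p: len(set(p)),
    satisfies_constraints=lambda p: len(p) == len(set(p)),
    depth=3,
)
print([p.itinerary for p in plans])
```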
27 pages, 613 KB  
Systematic Review
AI-Powered Vulnerability Detection and Patch Management in Cybersecurity: A Systematic Review of Techniques, Challenges, and Emerging Trends
by Malek Malkawi and Reda Alhajj
Mach. Learn. Knowl. Extr. 2026, 8(1), 19; https://doi.org/10.3390/make8010019 - 15 Jan 2026
Viewed by 212
Abstract
With the increasing complexity of cyber threats and the inefficiency of traditional vulnerability management, artificial intelligence has been increasingly integrated into cybersecurity. This review provides a comprehensive evaluation of AI-powered strategies, including machine learning, deep learning, and large language models, for identifying cybersecurity vulnerabilities and supporting automated patching. We conducted a synthesis and appraisal of 29 peer-reviewed studies published between 2019 and 2024. Our results indicate that AI methods substantially improve detection precision, scalability, and response speed compared with human-driven and rule-based approaches. We detail the transition from conventional ML categorization to using deep learning for source code analysis and dynamic network detection. Moreover, we identify advanced mitigation strategies such as AI-powered prioritization, neuro-symbolic AI, deep reinforcement learning, and the generative abilities of LLMs for automated patch suggestions. To strengthen methodological rigor, this review followed a registered protocol and PRISMA-based study selection, and it reports reproducible database searches (exact queries and search dates) and transparent screening decisions. We additionally assessed the quality and risk of bias of included studies using criteria tailored to AI-driven vulnerability research (dataset transparency, leakage control, evaluation rigor, reproducibility, and external validation), and we used these quality results to contextualize the synthesis. Our critical evaluation indicates that this area remains at an early stage and is characterized by significant gaps. The absence of standard benchmarks, the limited generalizability of models to other domains, and the lack of adversarial testing are obstacles that prevent adoption of these methods in real-world scenarios. Furthermore, the research suggests that the black-box nature of most models poses a serious problem in terms of trust; explainable AI (XAI) is therefore highly pertinent in this context. This paper serves as a thorough guide for the evolution of AI-driven vulnerability management and indicates that next-generation AI systems should not only be more accurate but also transparent, robust, and generalizable. Full article
(This article belongs to the Section Thematic Reviews)
15 pages, 3341 KB  
Article
Probabilistic Modeling and Pattern Discovery-Based Sindhi Information Retrieval System
by Dil Nawaz Hakro, Abdullah Abbasi, Anjum Zameer Bhat, Saleem Raza, Muhammad Babar and Osama Al Rahbi
Information 2026, 17(1), 82; https://doi.org/10.3390/info17010082 - 13 Jan 2026
Viewed by 108
Abstract
Natural language processing is the technology used to interact with computers in human languages. An overlapping technology is Information Retrieval (IR), in which a user searches for required documents among a collection of stored documents. Documents are retrieved according to their relevance to the user's query, and the results are presented in descending order of relevance. Many languages have their own IR systems, whereas a dedicated IR system for Sindhi still needs attention. Various approaches to effective information retrieval have been proposed. As Sindhi is an old language with a rich history and literature, it warrants a dedicated IR system. For the development of Sindhi IR, a document database is required so that documents can be retrieved accordingly. Many Sindhi documents were identified and collected from various sources, such as books, journals, magazines, and newspapers, and were selected for their suitability for indexing and further processing. Probabilistic modeling and pattern discovery were used to find patterns and to support effective retrieval and relevance ranking. The results of the Sindhi Information Retrieval system are promising, with more than 90% relevance. The elapsed time ranged from 0.2 to 4.8 s for a single-word query and from 0.2 to 4.6 s for a Sindhi sentence. The Sindhi IR system can be fine-tuned and applied to other languages with similar characteristics that adopt the Arabic script. Full article
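The abstract does not spell out its probabilistic model; as a point of reference, the probabilistic relevance framework is commonly illustrated with the Okapi BM25 ranking function, sketched below on whitespace-tokenised text (it works unchanged on Arabic-script Sindhi tokens). This is a textbook formulation, not the paper's implementation.

```python
# Standard Okapi BM25 probabilistic ranking, shown only to illustrate the
# family of models the abstract refers to; not the paper's code.
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Return a BM25 relevance score per document; higher = more relevant."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = {t: sum(1 for d in docs_tokens if t in d) for t in set(query_tokens)}
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if df.get(t, 0) == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = ["سنڌي ٻولي قديم آهي".split(), "information retrieval system".split()]
print(bm25_scores("سنڌي ٻولي".split(), docs))   # first document ranks higher
```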
44 pages, 9272 KB  
Systematic Review
Toward a Unified Smart Point Cloud Framework: A Systematic Review of Definitions, Methods, and a Modular Knowledge-Integrated Pipeline
by Mohamed H. Salaheldin, Ahmed Shaker and Songnian Li
Buildings 2026, 16(2), 293; https://doi.org/10.3390/buildings16020293 - 10 Jan 2026
Viewed by 277
Abstract
Reality-capture has made point clouds a primary spatial data source, yet processing and integration limits hinder their potential. Prior reviews focus on isolated phases; by contrast, Smart Point Clouds (SPCs)—augmenting points with semantics, relations, and query interfaces to enable reasoning—received limited attention. This systematic review synthesizes the state-of-the-art SPC terminology and methods to propose a modular pipeline. Following PRISMA, we searched Scopus, Web of Science, and Google Scholar up to June 2025. We included English-language studies in geomatics and engineering presenting novel SPC methods. Fifty-eight publications met eligibility criteria: Direct (n = 22), Indirect (n = 22), and New Use (n = 14). We formalize an operative SPC definition—queryable, ontology-linked, provenance-aware—and map contributions across traditional point cloud processing stages (from acquisition to modeling). Evidence shows practical value in cultural heritage, urban planning, and AEC/FM via semantic queries, rule checks, and auditable updates. Comparative qualitative analysis reveals cross-study trends: higher and more uniform density stabilizes features but increases computation, and hybrid neuro-symbolic classification improves long-tail consistency; however, methodological heterogeneity precluded quantitative synthesis. We distill a configurable eight-module pipeline and identify open challenges in data at scale, domain transfer, temporal (4D) updates, surface exports, query usability, and sensor fusion. Finally, we recommend lightweight reporting standards to improve discoverability and reuse. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
17 pages, 4473 KB  
Article
RAG-Based Natural Language Interface for Goal-Oriented Knowledge Graphs and Its Evaluation
by Kosuke Yano, Yoshinobu Kitamura and Kazuhiro Kuwabara
Information 2026, 17(1), 55; https://doi.org/10.3390/info17010055 - 7 Jan 2026
Viewed by 251
Abstract
Procedural knowledge is essential in specialized domains, and natural language tools for retrieving procedural knowledge are necessary for non-expert users to facilitate their understanding and learning. In this study, we focus on function decomposition trees, a framework for representing procedural knowledge, and propose a natural language interface leveraging Retrieval-Augmented Generation (RAG). The natural language interface converts the user’s inputs into SPARQL queries, retrieving relevant data and subsequently presenting them in an accessible and chat-based format. Such a flexible and purpose-driven search facilitates users’ understanding of functions of artifacts or human actions and their performance of these actions. We demonstrate that the tool effectively retrieves actions, goals, and dependencies using an illustrative real-world example of a function decomposition tree. In addition, we evaluated the system by comparing it with ChatGPT 4o and Microsoft GraphRAG. The results suggest that the system can deliver responses that are both necessary and sufficient for users’ needs, while the outputs of other systems lack the key elements and return redundant information. Full article
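A minimal sketch of the query path this abstract describes: an LLM turns the user's question into SPARQL, the query runs against the knowledge graph, and the bindings are verbalised for the chat reply. The prompt wording, the endpoint URL, and the graph predicates are placeholders invented for illustration; only the SPARQLWrapper calls are real library API.

```python
# Sketch of an LLM-to-SPARQL retrieval loop over a function decomposition tree.
# The LLM is abstracted behind a callable; endpoint, prompt, and predicates are placeholders.
from typing import Callable
from SPARQLWrapper import SPARQLWrapper, JSON   # pip install sparqlwrapper

def answer(question: str,
           llm: Callable[[str], str],
           endpoint: str = "http://localhost:3030/fdt/sparql") -> str:
    # 1. Retrieval-augmented generation: give the LLM a schema hint and ask
    #    for a SPARQL query only (hypothetical predicates for illustration).
    prompt = (
        "Schema: actions are linked by :achievedBy and :hasSubFunction.\n"
        f"Write one SPARQL SELECT query answering: {question}\nSPARQL:"
    )
    query = llm(prompt)

    # 2. Execute the generated query against the knowledge graph endpoint.
    client = SPARQLWrapper(endpoint)
    client.setQuery(query)
    client.setReturnFormat(JSON)
    rows = client.query().convert()["results"]["bindings"]

    # 3. Let the LLM turn the raw bindings into a chat-style answer.
    table = "\n".join(str({k: v["value"] for k, v in r.items()}) for r in rows)
    return llm(f"Question: {question}\nQuery results:\n{table}\nAnswer concisely:")
```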
28 pages, 4228 KB  
Article
Optimizing Access to Interoperability Resources in Mobility Through Context-Aware Large Language Models (LLMs)
by Sudarsana Varma Mandapati, Vishal C. Kummetha, Sisinnio Concas and Lisa Staes
Electronics 2026, 15(1), 152; https://doi.org/10.3390/electronics15010152 - 29 Dec 2025
Viewed by 406
Abstract
This study presents the development and implementation of a functional system that utilizes large language models (LLMs) to improve the identification, organization, and retrieval of mobility interoperability resources. The established framework assists novice and experienced implementers of mobility services such as planning organizations and multimodal transportation agencies to efficiently access interoperability resources, such as standards and case studies, which are often dispersed and difficult to navigate. The web-based system includes a backend that generates abstracts and tags and a frontend that supports manual or chatbot-based search. A prompt-refinement mechanism suggests improved queries within the context of mobility interoperability when no matches are found. To validate the quality of LLM-generated abstracts and tags, subject matter experts reviewed outputs from multiple prompt iterations to assess accuracy and clarity. Of the 82 resources evaluated, 72% of abstracts met expert expectations for relevance, while 91% of the tags were considered appropriate. A comprehensive case study of 330 representative user queries was also conducted to evaluate the chatbot’s output. Overall, the presented framework aims to reduce cataloging effort, improve classification consistency, and improve accessibility to relevant information. With minimal setup costs, the system offers a scalable and cost-effective solution for managing large, uncatalogued repositories. Full article
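The prompt-refinement mechanism mentioned above reduces to a small control loop: search, and if nothing matches, ask the LLM for a domain-scoped reformulation and retry. The sketch below assumes abstract `search` and `llm` callables rather than the system's actual backend.

```python
# Prompt-refinement fallback: if a catalogue search returns no resources,
# ask an LLM to rephrase the query within the mobility-interoperability domain.
# `search` and `llm` are abstractions, not the system's real interfaces.
from typing import Callable, List, Tuple

def search_with_refinement(query: str,
                           search: Callable[[str], List[dict]],
                           llm: Callable[[str], str],
                           max_refinements: int = 2) -> Tuple[str, List[dict]]:
    current = query
    for _ in range(max_refinements + 1):
        hits = search(current)
        if hits:                       # matched tagged resources: done
            return current, hits
        # no match: request a domain-scoped reformulation and retry
        current = llm(
            "Rewrite this question as a search query about mobility "
            f"interoperability standards and case studies: {current}"
        )
    return current, []
```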
22 pages, 1923 KB  
Article
DS-CKDSE: A Dual-Server Conjunctive Keyword Dynamic Searchable Encryption with Forward and Backward Security
by Haiyan Sun, Yihua Liu, Yanhua Zhang and Chaoyang Li
Entropy 2026, 28(1), 25; https://doi.org/10.3390/e28010025 - 24 Dec 2025
Viewed by 226
Abstract
Dynamic Searchable Encryption (DSE) is essential for enabling confidential search operations over encrypted data in cloud computing. However, all existing single-server DSE schemes are vulnerable to Keyword Pair Result Pattern (KPRP) leakage and fail to simultaneously achieve forward and backward security. To address these challenges, this paper proposes a conjunctive keyword DSE scheme based on a dual-server architecture (DS-CKDSE). By integrating a full binary tree with an Indistinguishable Bloom Filter (IBF), the proposed scheme adopts a secure index: The leaf nodes store the keywords and the associated file identifier, while the information of non-leaf nodes is encoded within the IBF. A random state update mechanism, a dual-state array for each keyword and the timestamp trapdoor designs jointly enable robust forward and backward security while supporting efficient conjunctive queries. The dual-server architecture mitigates KPRP leakage by separating secure index storage from trapdoor verification. The security analysis shows that the new scheme satisfies adaptive security under a defined leakage function. Finally, the performance of the proposed scheme is evaluated through experiments, and the results demonstrate that the new scheme enjoys high efficiency in both update and search operations. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
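The secure index above encodes non-leaf information in an Indistinguishable Bloom Filter. The paper's IBF adds keyed, randomized hashing on top of the ordinary Bloom filter; the sketch below shows only that underlying membership structure, as a plain, non-cryptographic illustration.

```python
# Plain Bloom filter for keyword membership tests. The paper's IBF variant adds
# keyed hash functions and randomization; this sketch covers only the basic
# probabilistic membership structure it builds on.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item: str):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}|{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: str) -> bool:
        # May return false positives, never false negatives.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("keyword:cloud")
print("keyword:cloud" in bf, "keyword:edge" in bf)   # True, (almost surely) False
```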
18 pages, 428 KB  
Article
Enhancing Education Through Generative AI: A Multimodal Approach to Semantic Search and Authentic Learning
by Ahmad Raza, Amina Jameel and Freeha Azmat
Educ. Sci. 2026, 16(1), 22; https://doi.org/10.3390/educsci16010022 - 24 Dec 2025
Viewed by 221
Abstract
In contemporary education, learners face the challenge of navigating an overwhelming abundance of information. Traditional search methods, often limited to keyword matching, fail to capture the nuanced meaning and relationships within educational materials. Our multimodal approach combines Sentence Transformer for text and Inception V3 for images to generate vector embeddings for textbooks, which are stored in an Elasticsearch database. Learners’ queries are likewise converted to vector embeddings, which are matched through cosine similarity against the stored embeddings; the retrieved material is ranked and then synthesized using large language model (LLM) APIs. The approach retrieves answers based on semantic search rather than keywords. The system also integrates GenAI capabilities separately, specifically leveraging LLM APIs, to generate context-aware answers to user-posed questions at varying levels of complexity, e.g., beginner, intermediate, and advanced. Through comprehensive evaluation, we demonstrate the system’s ability to retrieve coherent answers across multiple sources, offering significant advancements in cross-text and cross-modal retrieval tasks. This work also contributes to the international discourse on ethical GenAI integration in curricula and fosters a collaborative human–AI learning ecosystem. Full article
(This article belongs to the Special Issue Generative-AI-Enhanced Learning Environments and Applications)
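The retrieval step described above (embed, then rank by cosine similarity) can be reproduced in a few lines with the sentence-transformers library. The sketch below keeps the vectors in memory instead of Elasticsearch, omits the Inception V3 image branch, and uses an illustrative model name rather than whatever the authors deployed.

```python
# Semantic retrieval by cosine similarity over sentence embeddings.
# In-memory stand-in for the Elasticsearch-backed pipeline in the abstract.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative model choice

passages = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Newton's second law relates force, mass, and acceleration.",
    "The French Revolution began in 1789.",
]
doc_vecs = model.encode(passages, normalize_embeddings=True)

def retrieve(query: str, k: int = 2):
    q = model.encode([query], normalize_embeddings=True)[0]
    sims = doc_vecs @ q                    # cosine similarity (unit-norm vectors)
    top = np.argsort(-sims)[:k]
    return [(passages[i], float(sims[i])) for i in top]

print(retrieve("How do plants turn sunlight into energy?"))
```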
20 pages, 2188 KB  
Article
SAQ-YOLO: An Efficient Small Object Detection Model for Unmanned Aerial Vehicle in Maritime Search and Rescue
by Sichen Li, Hao Yi, Shengyi Chen, Xinmin Chen, Mao Xu and Feifan Yu
Appl. Sci. 2026, 16(1), 131; https://doi.org/10.3390/app16010131 - 22 Dec 2025
Viewed by 299
Abstract
In Search and Rescue (SAR) missions, UAVs must be capable of detecting small objects from complex and noise-prone maritime images. Existing small object detection methods typically rely on super-resolution techniques or complex structural designs, which often demand significant computational resources and fail to meet the real-time requirements for small mobile devices in SAR tasks. To address this challenge, we propose SAQ-YOLO, an efficient small object detection model based on the YOLO framework. We design a Small Object Auxiliary Query branch, which uses deep semantic information to guide the fusion of shallow features, thereby improving small object capture efficiency. Additionally, SAQ-YOLO incorporates a series of lightweight channel, spatial, and group (large kernel) gated attention mechanisms to suppress background clutter in complex maritime environments, enhancing feature extraction at a low computational cost. Experiments on the SeaDronesSee dataset demonstrate that, compared to YOLOv11s, SAQ-YOLO reduces the number of parameters by approximately 70% while increasing mAP@50 by 2.1 percentage points. Compared to YOLOv11n, SAQ-YOLO improves mAP@50 by 8.7 percentage points. When deployed on embedded platforms, SAQ-YOLO achieves an inference latency of only 35 milliseconds per frame, meeting the real-time requirements of maritime SAR applications. These results suggest that SAQ-YOLO provides an efficient and deployable solution for UAV SAR operations in vast and highly dynamic marine environments. Future work will focus on enhancing the robustness of the detection model. Full article
31 pages, 697 KB  
Article
An LLM–MCDM Framework with Lin’s Concordance Correlation Coefficient for Recommendation Systems: A Case Study in Food Preference
by Thanathorn Phoka, Thanwa Wathahong and Pornpimon Boriwan
Appl. Sci. 2026, 16(1), 117; https://doi.org/10.3390/app16010117 - 22 Dec 2025
Viewed by 310
Abstract
Food recommender systems are pivotal in helping people make optimal dietary choices based on tremendous amounts of data. Extant studies offer different methods and techniques, but the combination of similarity search, large language models (LLMs), and multi-criteria decision-making (MCDM) remains underexplored. This study proposes a new system that leverages all three. First, we utilize an LLM to suggest queries from the same domain as the dish database. Then, the queries are vectorized and used for similarity search to generate a preliminary list of suggested menu items. Next, multiple LLMs provide scores for each item, which become the MCDM inputs, where Lin’s concordance correlation coefficient (LCCC) enhances the weighted sum scalarization technique. We evaluated the prototype on three publicly available dish datasets and at classification thresholds of 0.25, 0.50, and 0.75, and the proposed domain-adaptation approach consistently outperformed the baseline query. For example, at the 0.50 threshold, precision ranged from 49.11% to 56.60%, compared with 35.40% for the baseline. Furthermore, aggregating multiple LLMs mitigates single-model bias in recommendations. To substantiate this, a bootstrap evaluation of the proposed LCCC-based consensus weighting confirms that both the estimated weights and the induced rankings are numerically stable under sampling perturbations. To further ensure the robustness and reliability of the proposed system, we validate the results against other established weighting schemes and state-of-the-art MCDM methods. Moreover, Kendall’s τ-based comparisons across weighting schemes and multiple MCDM methods confirm that the proposed LCCC-based framework produces highly consistent and statistically significant rankings, demonstrating strong robustness to methodological choices. This paper contributes a system architecture and design that can be adopted for other domains of recommender systems where the capability of multiple LLMs can benefit complex and multifaceted decision-making processes. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
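Lin's concordance correlation coefficient between two score vectors x and y is ρ_c = 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²). The sketch below computes pairwise CCCs among several LLM scorers and turns average agreement into weights for a weighted-sum scalarization; the normalisation step is our simplification, not necessarily the authors' exact scheme.

```python
# Lin's concordance correlation coefficient (CCC) between two raters, and a
# simple way to turn pairwise CCCs into consensus weights for a weighted sum.
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Scores given by three LLMs to the same five candidate dishes (toy numbers).
scores = np.array([
    [4.0, 3.5, 2.0, 5.0, 1.0],
    [4.2, 3.0, 2.5, 4.8, 1.2],
    [3.0, 3.8, 2.2, 4.5, 2.0],
])

# Weight each LLM by its average concordance with the other LLMs.
n = len(scores)
agreement = np.array([
    np.mean([lins_ccc(scores[i], scores[j]) for j in range(n) if j != i])
    for i in range(n)
])
weights = agreement / agreement.sum()
consensus = weights @ scores          # weighted-sum consensus score per dish
print(weights, consensus)
```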
52 pages, 782 KB  
Article
Single-Stage Causal Incentive Design via Optimal Interventions
by Sebastián Bejos, Eduardo F. Morales, Luis Enrique Sucar and Enrique Munoz de Cote
Entropy 2026, 28(1), 4; https://doi.org/10.3390/e28010004 - 19 Dec 2025
Viewed by 303
Abstract
We introduce Causal Incentive Design (CID), a framework that applies causal inference to canonical single-stage principal–agent problems (PAPs) characterized by bilateral private information. Within CID, the operating rules of PAPs are formalized using an additive-noise causal graphical model (CGM). Incentives are modeled as interventions on a function space variable, Γ, which correspond to policy interventions in the principal–follower causal relation. The causal inference target estimand V(Γ) is defined as the expected value of the principal’s utility variable under a specified policy intervention in the post-intervention distribution. In the context of additive-Gaussian independent noise, the estimand V(Γ) decomposes into a two-layer expectation: (i) an inner Gaussian smoothing of the principal’s utility regression; and (ii) an outer averaging over the conditional probability of the follower’s action given the incentive policy. A Gauss–Hermite quadrature method is employed to efficiently estimate the first layer, while a policy-local kernel reweighting approach is used for the second. For offline selection of a single incentive policy, a Functional Causal Bayesian Optimization (FCBO) algorithm is introduced. This algorithm models the objective functional γ ↦ V(γ) using a functional Gaussian process surrogate defined on a Reproducing Kernel Hilbert Space (RKHS) domain and utilizes an Upper Confidence Bound (UCB) acquisition functional. Consequently, the policy value V(γ) becomes an interventional query that can be answered using offline observational data under standard identifiability assumptions. High-probability cumulative-regret bounds are established in terms of differential information gain for the proposed FCBO algorithm. Collectively, these elements constitute the central contributions of the CID framework, which integrates causal inference through identification and estimation with policy search in principal–agent problems under private information. This approach establishes a causal decision-making pipeline that enables commitment to a high-performing incentive in a single-shot game, supported by regret guarantees. Provided that the data used for estimation is sufficient, the resulting offline pipeline is appropriate for scenarios where adaptive deployment is impractical or costly. Beyond the methodological contribution, this work introduces a novel application of causal graphical models and causal reasoning to incentive design and principal–agent problems, which are central to economics and multi-agent systems. Full article
(This article belongs to the Special Issue Causal Graphical Models and Their Applications)
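The "inner Gaussian smoothing" layer is a one-dimensional expectation under Gaussian noise, which Gauss–Hermite quadrature evaluates as E[f(X)] ≈ (1/√π) Σᵢ wᵢ f(μ + √2·σ·xᵢ) for X ~ N(μ, σ²). The snippet below is a generic numerical illustration of that rule, checked against a closed-form moment; it is not the paper's estimator.

```python
# Gauss-Hermite quadrature for E[f(X)] with X ~ N(mu, sigma^2):
#   E[f(X)] ≈ (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*x_i)
import numpy as np

def gauss_hermite_expectation(f, mu, sigma, n_nodes=20):
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    return weights @ f(mu + np.sqrt(2.0) * sigma * nodes) / np.sqrt(np.pi)

# Sanity check against a known moment: E[X^2] = mu^2 + sigma^2.
mu, sigma = 1.5, 0.7
approx = gauss_hermite_expectation(lambda x: x**2, mu, sigma)
print(approx, mu**2 + sigma**2)   # both ≈ 2.74
```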
19 pages, 623 KB  
Article
Early-Stage Graph Fusion with Refined Graph Neural Networks for Semantic Code Search
by Longhao Ao and Rongzhi Qi
Appl. Sci. 2026, 16(1), 12; https://doi.org/10.3390/app16010012 - 19 Dec 2025
Viewed by 331
Abstract
Code search has received significant attention in the field of computer science research. Its core objective is to retrieve the most semantically relevant code snippets by aligning the semantics of natural language queries with those of programming languages, thereby contributing to improvements in software development quality and efficiency. As the scale of public code repositories continues to expand rapidly, the ability to accurately understand and efficiently match relevant code has become a critical challenge. Furthermore, while numerous studies have demonstrated the efficacy of deep learning in code-related tasks, the mapping and semantic correlations are often inadequately addressed, leading to the disruption of structural integrity and insufficient representational capacity during semantic matching. To overcome these limitations, we propose the Functional Program Graph for Code Search (called FPGraphCS), a novel code search method that leverages the construction of functional program graphs and an early fusion strategy. By incorporating abstract syntax tree (AST), data dependency graph (DDG), and control flow graph (CFG), the method constructs a comprehensive multigraph representation, enriched with contextual information. Additionally, we propose an improved metapath aggregation graph neural network (IMAGNN) model for the extraction of code features with complex semantic correlations from heterogeneous graphs. Through the use of metapath-associated subgraphs and dynamic metapath selection via a graph attention mechanism, FPGraphCS significantly enhances its search capability. The experimental results demonstrate that FPGraphCS outperforms existing baseline methods, achieving an MRR of 0.65 and ACC@10 of 0.842, showing a significant improvement over previous approaches. Full article
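The two reported metrics are simple functions of the rank at which the first relevant snippet is returned: MRR averages reciprocal ranks, and ACC@k is the fraction of queries answered within the top k. A minimal reference implementation:

```python
# Mean Reciprocal Rank (MRR) and ACC@k from the 1-based rank of the first
# relevant code snippet for each query (None = not retrieved at all).
def mrr(ranks):
    return sum(1.0 / r for r in ranks if r) / len(ranks)

def acc_at_k(ranks, k=10):
    return sum(1 for r in ranks if r and r <= k) / len(ranks)

ranks = [1, 3, None, 2, 1, 10]            # toy retrieval outcomes for six queries
print(mrr(ranks), acc_at_k(ranks, k=10))  # ≈ 0.489, ≈ 0.833
```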
13 pages, 5548 KB  
Article
Evolution Landscape of PiggyBac (PB) Transposon in Beetles (Coleoptera)
by Quan Wang, Shasha Shi, Bingqing Wang, Xin Chen, Naisu Yang, Bo Gao and Chengyi Song
Genes 2025, 16(12), 1521; https://doi.org/10.3390/genes16121521 - 18 Dec 2025
Viewed by 389
Abstract
Background/Objectives: The PB family of “cut-and-paste” DNA transposons shows great promise as genetic manipulation tools while significantly impacting eukaryotic genome evolution. However, their evolutionary profile in beetles (Coleoptera), the most species-rich animal order, remains poorly characterized. Methods: A local tBLASTN search was conducted to mine PiggyBac (PB) transposons across 136 coleopteran insect genomes, using the DDE domain of the PB transposase as the query. Multiple sequence alignment was performed with MAFFT, and a maximum likelihood phylogenetic tree of the transposase DDE domains was constructed using IQ-TREE. Evolutionary dynamics were analyzed by means of K-divergence. Results: Our study reveals PB transposons are widely distributed, highly diverse, and remarkably active across beetles. We detected PB elements in 62 of 136 examined species (45%), classifying them into six distinct clades. A total of 62 PB-containing species harbored intact copies, with most showing recent insertions (K divergence ≈ 0), indicating ongoing transpositional activity. Notably, PB elements from Harmonia axyridis, Apoderus coryli, and Diabrotica balteata exhibit exceptional potential for genetic tool development. Structurally, intact PB elements ranged from 2074 to 3465 bp, each containing a single transposase ORF (500–725 aa). All were flanked by terminal inverted repeats and generated TTAA target site duplications. Conclusions: These findings demonstrate PB transposons have not only shaped historical beetle genome evolution but continue to drive genomic diversification, underscoring their dual significance as natural genome architects and promising biotechnological tools. Full article
(This article belongs to the Section Bioinformatics)
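As a rough sketch of the mining workflow in the Methods (tBLASTN over a genome assembly, alignment with MAFFT, and a maximum-likelihood tree with IQ-TREE), the snippet below chains the command-line tools via subprocess. It assumes BLAST+, MAFFT, and IQ-TREE 2 are installed and on PATH; file names, the E-value cut-off, and the model setting are placeholders, not the authors' settings.

```python
# Rough sketch of the homology-mining pipeline, under the assumption that
# BLAST+, MAFFT, and IQ-TREE 2 are installed. Paths and thresholds are placeholders.
import subprocess

def run(cmd, **kw):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True, **kw)

# 1. Build a nucleotide BLAST database for one beetle genome assembly.
run(["makeblastdb", "-in", "genome.fna", "-dbtype", "nucl", "-out", "beetle_db"])

# 2. tBLASTN: search the genome with the PB transposase DDE domain as protein query.
run(["tblastn", "-query", "pb_dde_domain.faa", "-db", "beetle_db",
     "-evalue", "1e-5", "-outfmt", "6", "-out", "pb_hits.tsv"])

# 3. Align candidate transposase sequences (extracted from the hits beforehand).
with open("aligned.faa", "w") as out:
    run(["mafft", "--auto", "pb_candidates.faa"], stdout=out)

# 4. Maximum-likelihood tree of the DDE domains with automatic model selection.
run(["iqtree2", "-s", "aligned.faa", "-m", "MFP"])
```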
19 pages, 1292 KB  
Review
Status Epilepsy Syndromes Made Easy: Pediatric Perspectives
by Kam Lun Ellis Hon, Alexander K. C. Leung, Karen K. Y. Leung and Alcy R. Torres
Children 2025, 12(12), 1709; https://doi.org/10.3390/children12121709 - 17 Dec 2025
Viewed by 523
Abstract
Introduction: Refractory Status Epilepsy Syndrome is a heterogeneous group of diseases associated with status epilepticus. The literature and definitions have been conflicting and confusing in terms of nomenclature. New-onset refractory status epilepticus (NORSE) is a syndrome characterized by new-onset refractory seizures in a previously healthy child. Febrile infection-related epilepsy syndrome (FIRES) is a similar syndrome, now considered a variant of NORSE, defined by a febrile event taking place between twenty-four hours and two weeks prior to the commencement of refractory status epilepticus. An autoimmune or inflammatory etiology is often implied in both conditions because infection is rarely identified. Aim: This review provides an update on hypotheses, etiology, pathophysiology, clinical features, diagnosis, laboratory evaluation, treatment, and perspectives for NORSE/FIRES. Methods: A PubMed Clinical Queries search was performed using the keywords NORSE and FIRES, restricted to human subjects, up to May 2025. All reviews, systematic reviews, case series, and case reports were included. Results: Seizures are typically recalcitrant in NORSE/FIRES. Treatments include anti-seizure medications (ASMs), the ketogenic diet, and immunotherapy (intravenous immunoglobulin ± plasmapheresis ± corticosteroid). The prognosis is usually poor. Most children who survive suffer refractory epilepsy and associated cognitive impairment. Guidelines and a new consensus on NORSE/FIRES terminology have aided clinicians in managing status epilepticus that occurs in a previously healthy child, with or without a minor febrile episode. When an autoimmune or paraneoplastic condition is subsequently identified, the condition is named accordingly. Conclusions: NORSE and FIRES are similar conditions, except that vagus nerve stimulation appears to be more efficacious in NORSE than in FIRES. We propose defining these heterogeneous and confusing conditions as “NOSES”, a two-criteria syndrome: New Onset + Status Epilepticus Syndrome, lasting for over 24 h despite the use of two standard ASMs. Autoimmune, paraneoplastic, and infectious encephalitis are specific diagnoses of NOSES in which the etiology is subsequently identified. Full article
(This article belongs to the Special Issue Addressing Challenges in Pediatric Critical Care Medicine)
15 pages, 1886 KB  
Systematic Review
PerClot for Use in Surgical Hemostasis: A Systematic Review and Meta-Analysis of Clinical Data
by Terri Siebert, Stephen Dierks, Piotr Maniak and Torben Colberg
Surgeries 2025, 6(4), 111; https://doi.org/10.3390/surgeries6040111 - 16 Dec 2025
Viewed by 418
Abstract
Objective: To demonstrate that PerClot’s efficacy is non-inferior to other hemostatic treatments and its safety is non-inferior to the standard of care (SoC) during surgery. Methods: Applying keywords and inclusion criteria, we conducted a systematic search of electronic databases (e.g., Embase and the Cochrane Library) and a manual search (e.g., Google Scholar) for studies from 1 January 2008 (the date of first CE marking) to 30 March 2024. Results: Five published studies were included in this systematic review. Across the included studies, 691 patients received either PerClot (n = 315) or other hemostatic agents/SoC/control (n = 376) in different surgical specialties. All five studies had comparable outcome measures, interventions, and control groups, allowing for the pooling of the study data. The primary outcomes were the achievement of hemostasis and time to hemostasis. At 7 min post-application, PerClot demonstrated non-inferior hemostasis performance compared with Arista (absolute difference: −1.4%; 95% CI: −7.54, 4.74; p = 0.65). The time to achieve hemostasis was comparable between PerClot and other hemostatic agents (mean difference: 0.00 min; 95% CI: 0.00, 0.00; p = 1.00). No statistically significant difference in adverse event occurrence was observed between PerClot and other hemostatic agents/SoC groups (absolute difference: 0.02; 95% CI: −0.30, 0.35; p = 0.2691), and the absence of new, previously unknown adverse events supports the safety profile of PerClot. None of the differences in the outcome measures were statistically significant. Conclusions: Our systematic review demonstrated that PerClot achieved comparable hemostasis with no new safety concerns and a statistically significant reduction in postoperative drainage volume, indicating its safety, efficacy, and performance as an alternative for hemostasis across multiple surgical specialties. Full article
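The absolute differences with 95% CIs quoted above are standard two-proportion comparisons; a Wald-type interval for a risk difference is sketched below. The counts are invented for illustration and are not the pooled numbers from the review.

```python
# Wald 95% confidence interval for a difference in proportions (risk difference).
# The counts below are hypothetical and not taken from the included studies.
import math

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, (rd - z * se, rd + z * se)

rd, (lo, hi) = risk_difference_ci(280, 315, 340, 376)   # hypothetical event counts
print(f"risk difference = {rd:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```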