Search Results (430)

Search Parameters:
Keywords = multiple queries

15 pages, 780 KB  
Systematic Review
PerClot for Use in Surgical Hemostasis: A Systemic Review and Meta-Analysis of Clinical Data
by Terri Siebert, Stephen Dierks, Piotr Maniak and Torben Colberg
Surgeries 2025, 6(4), 111; https://doi.org/10.3390/surgeries6040111 - 16 Dec 2025
Abstract
Objective: To demonstrate that PerClot’s efficacy is non-inferior to other hemostatic treatments and its safety is non-inferior to the standard of care (SoC) during surgery. Methods: Applying keywords and inclusion criteria, we conducted a systematic search of electronic databases (e.g., Embase and the Cochrane Library) and a manual search (e.g., Google Scholar) for studies from 1 January 2008 (first CE marked date) to 30 March 2024. Results: Five published studies were included in this systematic review. From the included studies, 691 patients received either PerClot (n = 315) or other hemostatic agents/SoC/control (n = 376) in different surgical specialties. All five studies had comparable outcome measures, interventions, and control groups, allowing for the pooling of the study data. The primary outcomes were the achievement of hemostasis and time to hemostasis. At 7 min post-application, PerClot demonstrated non-inferior hemostasis performance as compared to Arista (absolute difference: −1.4%; 95% CI: −7.54, 4.74; p = 0.65). The time to achieve hemostasis was comparable between PerClot and other hemostatic agents (mean difference: 0.00 min; 95% CI: 0.00, 0.00; p = 1.00). No statistically significant difference in adverse event occurrence was observed between the PerClot and other hemostatic agents/SoC groups (absolute difference: 0.02; 95% CI: −0.30, 0.35; p = 0.2691), and the absence of new unknown adverse events supports the safety profile of PerClot. None of these outcome comparisons reached statistical significance. Conclusions: Our systematic review demonstrated that PerClot achieved comparable hemostasis with no new safety concerns and a statistically significant reduction in postoperative drainage volume, indicating its safety, efficacy, and performance as an alternative for hemostasis across multiple surgical specialties. Full article
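
The absolute-difference figures quoted in this abstract are risk differences with normal-approximation confidence intervals. A minimal sketch of that calculation follows; the counts are hypothetical placeholders, not data from the review.

```python
import math

def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
    """Risk difference between two proportions with a Wald 95% CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts for illustration only (not taken from the review).
diff, ci = risk_difference(events_a=290, n_a=315, events_b=350, n_b=376)
print(f"absolute difference = {diff:+.1%}, 95% CI = ({ci[0]:+.1%}, {ci[1]:+.1%})")
```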
21 pages, 1505 KB  
Article
WaveletHSI: Direct HSI Classification from Compressed Wavelet Coefficients via Sub-Band Feature Extraction and Fusion
by Xin Li and Baile Sun
J. Imaging 2025, 11(12), 441; https://doi.org/10.3390/jimaging11120441 - 10 Dec 2025
Viewed by 159
Abstract
A major computational bottleneck in classifying large-scale hyperspectral images (HSI) is the mandatory data decompression prior to processing. Compressed-domain computing offers a solution by enabling deep learning on partially compressed data. However, existing compressed-domain methods are predominantly tailored for the Discrete Cosine Transform (DCT) used in natural images, while HSIs are typically compressed using the Discrete Wavelet Transform (DWT). The fundamental structural mismatch between the block-based DCT and the hierarchical DWT sub-bands presents two core challenges: how to extract features from multiple wavelet sub-bands, and how to fuse these features effectively? To address these issues, we propose a novel framework that extracts and fuses features from different DWT sub-bands directly. We design a multi-branch feature extractor with sub-band feature alignment loss that processes functionally different sub-bands in parallel, preserving the independence of each frequency feature. We then employ a sub-band cross-attention mechanism that inverts the typical attention paradigm by using the sparse, high-frequency detail sub-bands as queries to adaptively select and enhance salient features from the dense, information-rich low-frequency sub-bands. This enables a targeted fusion of global context and fine-grained structural information without data reconstruction. Experiments on three benchmark datasets demonstrate that our method achieves classification accuracy comparable to state-of-the-art spatial-domain approaches while eliminating at least 56% of the decompression overhead. Full article
(This article belongs to the Special Issue Multispectral and Hyperspectral Imaging: Progress and Challenges)
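
A minimal sketch of the two ingredients named above, a 2-D wavelet decomposition and a cross-attention step in which high-frequency detail sub-bands act as queries over the low-frequency approximation, using PyWavelets and NumPy. This is an illustration of the idea only; the projections, token shapes, and fusion head of WaveletHSI are not specified here and are assumed.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
band = rng.standard_normal((64, 64))          # one spectral band of a toy HSI cube

# Single-level 2-D DWT: low-frequency approximation + three high-frequency details.
cA, (cH, cV, cD) = pywt.dwt2(band, "haar")    # each sub-band is 32x32 here

def tokens(sub_band):
    """Flatten a sub-band into a sequence of scalar 'tokens' (illustrative)."""
    return sub_band.reshape(-1, 1)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: high-frequency queries attend to low-frequency values."""
    scores = queries @ keys.T / np.sqrt(queries.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ values

q = tokens(cH)                                 # sparse detail sub-band as queries
k = v = tokens(cA)                             # dense approximation sub-band as keys/values
fused = cross_attention(q, k, v)
print(fused.shape)                             # one fused feature per high-frequency token
```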

19 pages, 1279 KB  
Article
Fusing a Slimming Network and Large Language Models for Intelligent Decision Support in Industrial Safety and Preventive Monitoring
by Weijun Tian, Jia Yin, Wei Wang, Zhonghua Guo, Liqiang Zhu and Jianbo Li
Electronics 2025, 14(23), 4773; https://doi.org/10.3390/electronics14234773 - 4 Dec 2025
Viewed by 221
Abstract
Intelligent personnel safety management is a critical component of smart manufacturing infrastructure. This paper presents an integrated framework combining a structurally optimized neural network (enhanced with spatial and channel feature fusion mechanisms for multi-scale detection) with an agent-based large language model (LLM) enhanced with retrieval-augmented generation (RAG) capabilities for factory safety monitoring. The visual detection component employs the Similarity-Aware Channel Pruning (SACP) method for automated, performance-preserving compression by identifying and suppressing redundant channels based on similarity and norm regularization, while the agent-based LLM with RAG capabilities dynamically integrates real-time violation data with established safety management protocols to generate precise diagnostic reports and operational recommendations. The optimized network achieves real-time violation detection in parallel video streams, and the LLM-powered assistant facilitates intelligent decision-making through natural language querying. Extensive evaluations on multiple benchmark datasets and a real-world safety helmet detection dataset demonstrate the scheme’s superior performance in both accuracy and practical applicability for industrial deployment. Full article
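
The channel-pruning idea, dropping filters that are near-duplicates of ones already kept, can be sketched as follows. The cosine-similarity criterion and the 0.9 threshold are assumptions for illustration; the paper's SACP method also uses norm regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy conv layer: 16 output channels, each a 3x3x8 filter, flattened per channel.
filters = rng.standard_normal((16, 3 * 3 * 8))

# Cosine similarity between every pair of channel filters.
normed = filters / np.linalg.norm(filters, axis=1, keepdims=True)
similarity = normed @ normed.T

# Mark a channel redundant if it is highly similar to an earlier, kept channel.
threshold = 0.9          # assumed value for illustration
keep = []
for c in range(filters.shape[0]):
    if all(similarity[c, k] < threshold for k in keep):
        keep.append(c)

print(f"kept {len(keep)} of {filters.shape[0]} channels:", keep)
```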

16 pages, 5273 KB  
Article
Fog Computing and Graph-Based Databases for Remote Health Monitoring in IoMT Settings
by Karrar A. Yousif, Jorge Calvillo-Arbizu and Agustín W. Lara-Romero
IoT 2025, 6(4), 76; https://doi.org/10.3390/iot6040076 - 3 Dec 2025
Viewed by 204
Abstract
Remote patient monitoring is a promising and transformative pillar of healthcare. However, deploying such systems at a scale—across thousands of patients and Internet of Medical Things (IoMT) devices—demands robust, low-latency, and scalable storage systems. This research examines the application of Fog Computing for remote patient monitoring in IoMT settings, where a large volume of data, low latency, and secure management of confidential healthcare information are essential. We propose a four-layer IoMT–Fog–Cloud architecture in which Fog nodes, equipped with graph-based databases (Neo4j), conduct local processing, filtering, and integration of heterogeneous health data before transmitting it to cloud servers. To assess the viability of our approach, we implemented a containerised Fog node and simulated multiple patient-device networks using a real-world dataset. System performance was evaluated using 11 scenarios with varying numbers of devices and data transmission frequencies. Performance metrics include CPU load, memory footprint, and query latency. The results demonstrate that Neo4j can efficiently ingest and query millions of health observations with an acceptable latency of less than 500 ms, even in extreme scenarios involving more than 12,000 devices transmitting data every 50 ms. The resource consumption remained well below the critical thresholds, highlighting the suitability of the proposed approach for Fog nodes. Combining Fog computing and Neo4j is a novel approach that meets the latency and real-time data ingestion requirements of IoMT environments. Therefore, it is suitable for supporting delay-sensitive monitoring programmes, where rapid detection of anomalies is critical (e.g., a prompt response to cardiac emergencies or early detection of respiratory deterioration in patients with chronic obstructive pulmonary disease), even at a large scale. Full article
(This article belongs to the Special Issue IoT-Based Assistive Technologies and Platforms for Healthcare)
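
A minimal sketch of the ingestion path described above, using the official neo4j Python driver against a Fog-node instance. The node labels, properties, and connection details are assumptions, not the schema used in the paper.

```python
from neo4j import GraphDatabase

# Connection details are placeholders; the labels and properties below are
# assumptions for illustration, not the paper's schema.
driver = GraphDatabase.driver("bolt://fog-node:7687", auth=("neo4j", "password"))

INGEST = """
MERGE (p:Patient {id: $patient_id})
MERGE (d:Device  {id: $device_id})-[:ASSIGNED_TO]->(p)
CREATE (o:Observation {type: $type, value: $value, ts: datetime($ts)})
CREATE (d)-[:REPORTED]->(o)
"""

def ingest(observation):
    """Write one IoMT reading into the Fog node's graph store."""
    with driver.session() as session:
        session.run(INGEST, **observation)

ingest({
    "patient_id": "p-001",
    "device_id": "spo2-17",
    "type": "SpO2",
    "value": 96,
    "ts": "2025-01-01T12:00:00Z",
})
driver.close()
```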

17 pages, 11839 KB  
Article
Cylindrical Scan Context: A Multi-Channel Descriptor for Vertical-Structure-Aware LiDAR Localization
by Chulhee Bae, Gun Rae Cho, Jongho Bae, Sungho Park, Mangi Lee, Shin Kim and Jung Hyeun Park
Sensors 2025, 25(23), 7223; https://doi.org/10.3390/s25237223 - 26 Nov 2025
Viewed by 366
Abstract
This study introduces Cylindrical Scan Context (CSC), a novel LiDAR descriptor designed to improve robustness and efficiency in GPS-denied or degraded outdoor environments. Unlike the conventional Scan Context (SC), which relies on azimuth–range projection, CSC employs an azimuth–height representation that preserves vertical structural information and incorporates multiple physical channels—range, point density, and reflectance intensity—to capture both geometric and radiometric characteristics of the environment. This multi-channel cylindrical formulation enhances descriptor distinctiveness and robustness against viewpoint, elevation, and trajectory variations. To validate the effectiveness of CSC, real-world experiments were conducted using both self-collected coastal–forest datasets and the public MulRan–KAIST dataset. Mapping was performed using LIO-SAM with LiDAR, IMU, and GPS measurements, after which LiDAR-only localization was evaluated independently. A total of approximately 700 query scenes (1 m ground-truth threshold) were used in the self-collected experiments, and about 1200 scenes (3 m threshold) were evaluated in the MulRan–KAIST experiments. Comparative analyses between SC and CSC were performed using Precision–Recall (PR) curves, Detection Recall (DR) curves, Root Mean Square Error (RMSE), and Top-K retrieval accuracy. The results show that CSC consistently yields lower RMSE—particularly in the vertical and lateral directions—and demonstrates faster recall growth and higher stability in global retrieval. Across datasets, CSC maintains superior DR performance in high-confidence regions and achieves up to 45% reduction in distance RMSE in large-scale campus environments. These findings confirm that the cylindrical multi-channel formulation of CSC significantly improves geometric consistency and localization reliability, offering a practical and robust LiDAR-only localization framework for challenging unstructured outdoor environments. Full article
(This article belongs to the Section Navigation and Positioning)
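
The azimuth-height binning with multiple physical channels can be sketched as follows; the bin counts, height range, and per-bin statistics (max range, point count, mean intensity) are assumptions chosen for illustration rather than the authors' exact parameters.

```python
import numpy as np

def cylindrical_scan_context(points, intensities,
                             n_azimuth=60, n_height=20,
                             z_min=-2.0, z_max=18.0):
    """Build an azimuth x height descriptor with range, density and intensity channels."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    azimuth = np.mod(np.arctan2(y, x), 2 * np.pi)
    rng_xy = np.hypot(x, y)

    a_idx = np.minimum((azimuth / (2 * np.pi) * n_azimuth).astype(int), n_azimuth - 1)
    h_idx = np.clip(((z - z_min) / (z_max - z_min) * n_height).astype(int), 0, n_height - 1)

    desc = np.zeros((n_azimuth, n_height, 3))   # channels: max range, count, mean intensity
    counts = np.zeros((n_azimuth, n_height))
    for a, h, r, i in zip(a_idx, h_idx, rng_xy, intensities):
        desc[a, h, 0] = max(desc[a, h, 0], r)
        counts[a, h] += 1
        desc[a, h, 2] += i
    desc[:, :, 1] = counts
    desc[:, :, 2] = np.divide(desc[:, :, 2], counts,
                              out=np.zeros_like(counts), where=counts > 0)
    return desc

pts = np.random.default_rng(0).uniform([-50, -50, -2], [50, 50, 18], size=(10000, 3))
inten = np.random.default_rng(1).uniform(0, 1, size=10000)
print(cylindrical_scan_context(pts, inten).shape)   # (60, 20, 3)
```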

15 pages, 2311 KB  
Article
A New Gilliam Genotypic Variant of Orientia tsutsugamushi in Human Scrub Typhus Cases from South India
by Steny Vallomkottu Joseph, Krishnamoorthy Nallan, Gopinathan Rajan, Amudhan Murugesan, Renu Govindarajan, Raju Sivadoss, Ramkumar Ramalingam, Rajarathinam Kannan Madhumitha, Sucila Thangam Ganesan, Suria Kumar Jayakumar, Manju Rahi and Paramasivan Rajaiah
Microorganisms 2025, 13(12), 2670; https://doi.org/10.3390/microorganisms13122670 - 24 Nov 2025
Viewed by 333
Abstract
Scrub typhus, caused by Orientia tsutsugamushi (Ot), is a re-emerging public health concern across Southeast Asia. Although multiple Ot strains have been identified in endemic regions, their genetic characterization in India remains limited. We analyzed Ot strains from humans by targeting the GroEL and 56-kDa TSA genes. A total of 105 serum samples were subjected to PCR amplification and phylogenetic analysis for the GroEL gene, of which 33 (31.4%) were positive. Phylogenetic reconstruction revealed four major clades: Karp, Kato, Ot-TJTN (novel Ot-Thanjavur-Tamil Nadu), and the Gilliam group. Among the 33 PCR positives, 11 sequences clustered into a distinct monophyletic clade within the Gilliam group but diverged significantly from known classical Gilliam strains. The overall mean nucleotide diversity (π) was 0.02 (2%), while the divergence between these 11 sequences and the Gilliam strain was 0.039 (3.9%). The observed divergence indicates that these sequences represent the first identified Indian Gilliam variant (IG-v), showing marked genetic distinction from classical Gilliam and other related strains. Further analysis of the 56-kDa gene from the 11 IG-v samples revealed phylogenetic incongruence between the GroEL and 56-kDa genes, indicating antigenic reassortment involving three clades: Karp-like (n = 7), Ot-TJTN-like (n = 3), and Gilliam (n = 1). Similarity plot and recombination analyses, using 56-kDa Ot-TJTN and Karp-like clades as queries, against Ot reference strains revealed preliminary evidence of genetic exchange. These findings highlight the possible role of recombination and antigenic shift in driving the evolutionary dynamics and genetic diversity of Ot in this region. Notably, the identification of an IG-v marks a significant advancement in our understanding of the circulating Ot strains. This finding holds important implications for refining molecular diagnostics, enhancing serological assays, and developing broadly protective vaccines targeting region-specific variants. Full article
(This article belongs to the Special Issue The Molecular Epidemiology of Infectious Diseases)
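
The nucleotide diversity reported above (π) is the mean pairwise proportion of differing sites across aligned sequences. A minimal sketch with toy fragments (not GroEL data):

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Mean pairwise proportion of differing sites (pi) over aligned sequences."""
    pairs = list(combinations(seqs, 2))
    diffs = [sum(a != b for a, b in zip(s1, s2)) / len(s1) for s1, s2 in pairs]
    return sum(diffs) / len(pairs)

# Toy aligned fragments for illustration only.
seqs = ["ATGGCTAAGT", "ATGGCTAAGT", "ATGACTAAGC", "ATGACTTAGC"]
print(f"pi = {nucleotide_diversity(seqs):.3f}")
```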

22 pages, 1143 KB  
Article
Comparative Analysis of SQL Injection Defense Mechanisms Based on Three Approaches: PDO, PVT, and ART
by Jiho Choi, Young-Ae Jung and Hoon Ko
Appl. Sci. 2025, 15(23), 12351; https://doi.org/10.3390/app152312351 - 21 Nov 2025
Viewed by 643
Abstract
This study presents a comprehensive examination of the risks associated with SQL Injection attacks, with a particular focus on the Union Select technique. This method is frequently exploited by attackers to retrieve unauthorized data by appending malicious queries to legitimate database calls. We analyzed multiple real-world cases where personal information was leaked through such attacks, underscoring the urgent need for robust countermeasures in modern web applications. To address these threats, we developed and implemented a multi-layered defense strategy. This strategy includes using PHP Data Objects (PDO) with Prepared Statements to safely handle user inputs, rigorous input pattern validation to detect and reject suspicious payloads, and a redirection-based filtering mechanism to disrupt abnormal access attempts. Through controlled experiments, we validated the effectiveness of these techniques in mitigating SQL Injection attacks. The results demonstrate that our approach successfully blocked malicious queries and prevented unauthorized data access or manipulation. These findings represent a significant contribution to enhancing the security, stability, and trustworthiness of web-based systems, especially those handling sensitive user information. Finally, this work is presented as an educational comparative study, not as a proposal of new defense mechanisms, aiming to provide a clear and reproducible evaluation of standard SQL injection countermeasures. The contributions of this work are threefold: (i) it provides a unified comparative evaluation of three representative SQL injection defense methods—PDO, pattern validation, and attacker redirection—under identical experimental conditions; (ii) it analyzes their strengths, weaknesses, and practical applicability in PHP–MySQL environments; and (iii) it serves as an educational reference that bridges theoretical understanding and practical implementation. The study also suggests directions for extending this work through machine-learning-based anomaly detection and runtime self-protection (RASP) frameworks. Full article
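
The paper's first defense layer is PHP PDO with prepared statements. The same principle, binding user input as a parameter rather than concatenating it into SQL, combined with a pattern-validation check, can be illustrated in Python with sqlite3; the rejection pattern and table are assumptions, and this is not the authors' PHP implementation.

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, secret TEXT)")
conn.execute("INSERT INTO users (name, secret) VALUES ('alice', 's3cr3t')")

# Assumed pattern check: reject inputs containing typical injection tokens.
SUSPICIOUS = re.compile(r"(union\s+select|--|;|')", re.IGNORECASE)

def lookup_user(user_input):
    if SUSPICIOUS.search(user_input):
        raise ValueError("suspicious input rejected")           # validation layer
    # Parameter binding: the driver treats user_input strictly as data, never as SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (user_input,))
    return cur.fetchall()

print(lookup_user("alice"))                                     # normal query
try:
    lookup_user("x' UNION SELECT secret, name FROM users --")   # blocked payload
except ValueError as e:
    print(e)
```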

28 pages, 5539 KB  
Article
Design of a Blockchain-Enabled Traceability System for Pleurotus ostreatus Supply Chains
by Hongyan Guo, Wei Xu, Mingxia Lin, Xingguo Zhang and Pingzeng Liu
Foods 2025, 14(22), 3959; https://doi.org/10.3390/foods14223959 - 19 Nov 2025
Viewed by 527
Abstract
Pleurotus ostreatus is valued for its nutritional, medicinal, economic, and ecological benefits and is widely used in the food, pharmaceutical, and environmental protection industries. Pleurotus ostreatus, as a highly perishable edible fungus, faces significant challenges in supply chain quality control and food safety due to its short shelf life. As consumer demand for food freshness and full traceability increases, there is an urgent need to establish a reliable traceability system that enables real-time monitoring, spoilage prevention, and quality assurance. This study focuses on the Pleurotus ostreatus supply chain and designs and implements a multi-role flexible traceability system that integrates blockchain and the Internet of Things. The system collects key production and storage environment parameters in real time through sensor networks and enhances data accuracy and robustness using an improved adaptive weighted fusion algorithm, enabling precise monitoring of the growth environment and quality risks. The system adopts a “link-chain” mapping mechanism for multi-chain storage and dynamic reorganization of business processes. It incorporates attribute-based encryption strategies and smart contracts to support tiered data access and secure sharing among multiple parties. Key information is stored on the blockchain to prevent tampering, while auxiliary data is stored in off-chain databases and the Interplanetary File System to ensure efficient and verifiable data queries. Deployed at Shandong Qihe Ecological Agriculture Co., Ltd., No. 517, Xilou Village, Kunlun Town, Zichuan District, 255000, Zibo City, Shandong Province, China, the system covers 12 cultivation units and 60 sensor nodes, recording over 50,000 traceable data points. Experimental results demonstrate that the system outperforms baseline methods in query latency, data consistency, and environmental monitoring accuracy. The improved fusion algorithm reduced the total variance of environmental data by 20%. In practical application, the system reduced the spoilage rate of Pleurotus ostreatus by approximately 12.3% and increased the quality inspection pass rate by approximately 15.4%, significantly enhancing the supply chain’s quality control and food safety capabilities. The results show that the framework is feasible and scalable in terms of information credibility and operational efficiency and significantly improves food quality and safety monitoring throughout the production, storage, and distribution of Pleurotus ostreatus. This study provides a viable technological path for spoilage prevention, quality tracking, and digital food safety supervision, offering valuable insights for both food science research and practical applications. Full article
(This article belongs to the Section Food Security and Sustainability)
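
The improved adaptive weighted fusion algorithm is not detailed in the abstract; a common baseline that the description suggests, weighting each redundant sensor inversely to its observed variance, is sketched below purely for illustration.

```python
import numpy as np

def adaptive_weighted_fusion(readings):
    """Fuse redundant sensor readings; weights are inversely proportional to variance."""
    readings = np.asarray(readings, dtype=float)        # shape: (n_sensors, n_samples)
    variances = readings.var(axis=1, ddof=1)
    weights = 1.0 / variances
    weights /= weights.sum()
    fused = weights @ readings.mean(axis=1)
    return fused, weights

# Three temperature sensors monitoring one cultivation unit (toy data).
sensors = [
    [18.1, 18.0, 18.2, 18.1],    # stable sensor -> high weight
    [18.4, 17.6, 18.9, 17.8],    # noisy sensor  -> low weight
    [18.2, 18.1, 18.0, 18.2],
]
fused, w = adaptive_weighted_fusion(sensors)
print(f"fused = {fused:.2f} C, weights = {np.round(w, 2)}")
```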

12 pages, 439 KB  
Article
Advancing Conversational Text-to-SQL: Context Strategies and Model Integration with Large Language Models
by Benjamin G. Ascoli and Jinho D. Choi
Future Internet 2025, 17(11), 527; https://doi.org/10.3390/fi17110527 - 18 Nov 2025
Viewed by 566
Abstract
Conversational text-to-SQL extends the traditional single-turn SQL generation paradigm to multi-turn, dialogue-based scenarios, enabling users to pose and refine database queries interactively, and requiring models to track dialogue context over multiple user queries and system responses. Despite extensive progress in single-turn benchmarks such as Spider and BIRD, and the recent rise of large language models, conversational datasets continue to pose challenges. In this paper, we spotlight model merging as a key strategy for boosting ESM performance on CoSQL and SParC. We present a new state-of-the-art system on the CoSQL benchmark, achieved by fine-tuning CodeS-7b under two paradigms for handling conversational history: (1) full history concatenation, and (2) question rewriting via GPT-based summarization. While each paradigm alone obtains competitive results, we observe that averaging the weights of these fine-tuned models can outperform both individual variants. Our findings highlight the promise of LLM-driven multi-turn SQL generation, offering a lightweight yet powerful avenue for improving conversational text-to-SQL. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
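
The model-merging step, averaging the weights of the two fine-tuned variants, reduces to an element-wise mean over matching parameters. A minimal sketch with plain NumPy arrays standing in for checkpoints (not tied to CodeS-7b or any framework):

```python
import numpy as np

def average_weights(state_dict_a, state_dict_b):
    """Element-wise average of two checkpoints with identical parameter names/shapes."""
    assert state_dict_a.keys() == state_dict_b.keys()
    return {name: (state_dict_a[name] + state_dict_b[name]) / 2.0
            for name in state_dict_a}

# Toy 'checkpoints' from the two fine-tuning paradigms (history concatenation vs. rewriting).
rng = np.random.default_rng(0)
ckpt_history   = {"layer.weight": rng.standard_normal((4, 4)), "layer.bias": rng.standard_normal(4)}
ckpt_rewriting = {"layer.weight": rng.standard_normal((4, 4)), "layer.bias": rng.standard_normal(4)}

merged = average_weights(ckpt_history, ckpt_rewriting)
print({k: v.shape for k, v in merged.items()})
```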

28 pages, 12813 KB  
Article
Training-Free Few-Shot Image Classification via Kernel Density Estimation with CLIP Embeddings
by Marcos Sergio Pacheco dos Santos Lima Junior, Juan Miguel Ortiz-de-Lazcano-Lobato and Ezequiel López-Rubio
Mathematics 2025, 13(22), 3615; https://doi.org/10.3390/math13223615 - 11 Nov 2025
Viewed by 660
Abstract
Few-shot image classification aims to recognize novel classes from only a handful of labeled examples, a challenge in domains where data collection is costly or impractical. Existing solutions often rely on meta-learning, fine-tuning, or data augmentation, introducing computational overhead, a risk of overfitting, or limited efficiency. This paper introduces ProbaCLIP, a simple training-free approach that leverages Kernel Density Estimation (KDE) within the embedding space of Contrastive Language-Image Pre-training (CLIP). Unlike other CLIP-based methods, the proposed approach operates solely on visual embeddings and does not require text labels. Class-conditional probability densities were estimated from few-shot support examples, and queries were classified by likelihood evaluation, where Principal Component Analysis (PCA) was used for dimensionality reduction, compressing the dissimilarities between classes on each episode. We further introduced an optional bandwidth optimization strategy and a consensus decision mechanism through cross-validation, while addressing the special case of one-shot classification with distance-based measures. Extensive experiments on multiple datasets demonstrated that our method achieved competitive or superior accuracy compared to state-of-the-art few-shot classifiers, reaching up to 98.37% accuracy in five-shot tasks and up to 99.80% in a 16-shot framework with ViT-L/14@336px. We validated our methodology by achieving high performance without gradient-based training, text supervision, or auxiliary meta-training datasets, emphasizing the effectiveness of combining pre-trained embeddings with statistical density estimation for data-scarce classification. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
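
The classification rule, fit a class-conditional density to each class's support embeddings after PCA and assign a query to the class with the highest log-likelihood, can be sketched with scikit-learn. The embeddings below are random placeholders standing in for CLIP features, and the bandwidth is fixed rather than optimized as in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
n_way, k_shot, dim = 5, 5, 512                       # 5-way 5-shot, CLIP-like dimensionality
support = {c: rng.normal(loc=c, size=(k_shot, dim)) for c in range(n_way)}
query = rng.normal(loc=2, size=(1, dim))             # should land in class 2

# PCA fitted on all support embeddings (dimensionality reduction step).
pca = PCA(n_components=8).fit(np.vstack(list(support.values())))

# One KDE per class on the reduced support embeddings; fixed bandwidth for illustration.
kdes = {c: KernelDensity(bandwidth=1.0).fit(pca.transform(x)) for c, x in support.items()}

log_likelihoods = {c: kde.score_samples(pca.transform(query))[0] for c, kde in kdes.items()}
print("predicted class:", max(log_likelihoods, key=log_likelihoods.get))
```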

26 pages, 3024 KB  
Article
GraderAssist: A Graph-Based Multi-LLM Framework for Transparent and Reproducible Automated Evaluation
by Catalin Anghel, Andreea Alexandra Anghel, Emilia Pecheanu, Adina Cocu, Marian Viorel Craciun, Paul Iacobescu, Antonio Stefan Balau and Constantin Adrian Andrei
Informatics 2025, 12(4), 123; https://doi.org/10.3390/informatics12040123 - 9 Nov 2025
Viewed by 1061
Abstract
Background and objectives: Automated evaluation of open-ended responses remains a persistent challenge, particularly when consistency, transparency, and reproducibility are required. While large language models (LLMs) have shown promise in rubric-based evaluation, their reliability across multiple evaluators is still uncertain. Variability in scoring, feedback, and rubric adherence raises concerns about interpretability and system robustness. This study introduces GraderAssist, a graph-based, rubric-guided, multi-LLM framework designed to ensure transparent and reproducible automated evaluation. Methods: GraderAssist evaluates a dataset of 220 responses to both technical and argumentative questions, collected from undergraduate computer science courses. Six open-source LLMs and GPT-4 (as expert reference) independently scored each response using two predefined rubrics. All outputs—including scores, feedback, and metadata—were parsed, validated, and stored in a Neo4j graph database, enabling structured querying, traceability, and longitudinal analysis. Results: Cross-model analysis revealed systematic differences in scoring behavior and feedback generation. Some models produced more generous evaluations, while others aligned closely with GPT-4. Semantic analysis using Sentence-BERT embeddings highlighted distinctive feedback styles and variable rubric adherence. Inter-model agreement was stronger for technical criteria but diverged substantially for argumentative tasks. Originality: GraderAssist integrates rubric-guided evaluation, multi-model comparison, and graph-based storage into a unified pipeline. By emphasizing reproducibility, transparency, and fine-grained analysis of evaluator behavior, it advances the design of interpretable automated evaluation systems with applications in education and beyond. Full article
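
The semantic comparison of feedback styles can be reproduced in outline with Sentence-BERT embeddings and cosine similarity; the model name and feedback strings below are placeholders, not outputs from the GraderAssist pipeline.

```python
from sentence_transformers import SentenceTransformer, util

# Model choice and feedback texts are placeholders for illustration.
model = SentenceTransformer("all-MiniLM-L6-v2")

feedback_model_a = "The answer defines the concept correctly but omits the complexity analysis."
feedback_model_b = "Correct definition; however, no discussion of time complexity is given."

emb = model.encode([feedback_model_a, feedback_model_b], convert_to_tensor=True)
print(f"cosine similarity: {util.cos_sim(emb[0], emb[1]).item():.3f}")
```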

22 pages, 38803 KB  
Article
VG-SAM: Visual In-Context Guided SAM for Universal Medical Image Segmentation
by Gang Dai, Qingfeng Wang, Yutao Qin, Gang Wei and Shuangping Huang
Fractal Fract. 2025, 9(11), 722; https://doi.org/10.3390/fractalfract9110722 - 8 Nov 2025
Viewed by 1039
Abstract
Medical image segmentation, driven by the intrinsic fractal characteristics of biological patterns, plays a crucial role in medical image analysis. Recently, universal image segmentation, which aims to build models that generalize robustly to unseen anatomical structures and imaging modalities, has emerged as a promising research direction. To achieve this, previous solutions typically follow the in-context learning (ICL) framework, leveraging segmentation priors from a few labeled in-context references to improve prediction performance on out-of-distribution samples. However, these ICL-based methods often overlook the quality of the in-context set and struggle with capturing intricate anatomical details, thus limiting their segmentation accuracy. To address these issues, we propose VG-SAM, which employs a multi-scale in-context retrieval phase and a visual in-context guided segmentation phase. Specifically, inspired by the hierarchical and self-similar properties in fractal structures, we introduce a multi-level feature similarity strategy to select in-context samples that closely match the query image, thereby ensuring the quality of the in-context samples. In the segmentation phase, we propose to generate multi-granularity visual prompts based on the high-quality priors from the selected in-context set. Following this, these visual prompts, along with the semantic guidance signal derived from the in-context set, are seamlessly integrated into an adaptive fusion module, which effectively guides the Segment Anything Model (SAM) with powerful segmentation capabilities to achieve accurate predictions on out-of-distribution query images. Extensive experiments across multiple datasets demonstrate the effectiveness and superiority of our VG-SAM over the state-of-the-art (SOTA) methods. Notably, under the challenging one-shot reference setting, our VG-SAM surpasses SOTA methods by an average of 6.61% in DSC across all datasets. Full article
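
The in-context retrieval phase, scoring candidate references by feature similarity at several levels and keeping the closest ones, can be sketched as follows. Feature extraction is mocked with random vectors, and the number of levels and the averaging rule are assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_in_context(query_feats, candidate_feats, k=3):
    """Rank candidates by feature similarity averaged over multiple levels."""
    scores = []
    for cand in candidate_feats:
        per_level = [cosine(q, c) for q, c in zip(query_feats, cand)]
        scores.append(np.mean(per_level))
    order = np.argsort(scores)[::-1]
    return order[:k], [scores[i] for i in order[:k]]

rng = np.random.default_rng(0)
levels, dim, n_candidates = 3, 64, 10                 # assumed multi-level feature setup
query = [rng.standard_normal(dim) for _ in range(levels)]
candidates = [[rng.standard_normal(dim) for _ in range(levels)] for _ in range(n_candidates)]

top_idx, top_scores = retrieve_in_context(query, candidates, k=3)
print("selected in-context samples:", top_idx, np.round(top_scores, 3))
```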

26 pages, 5753 KB  
Article
An Optimized Few-Shot Learning Framework for Fault Diagnosis in Milling Machines
by Faisal Saleem, Muhammad Umar and Jong-Myon Kim
Machines 2025, 13(11), 1010; https://doi.org/10.3390/machines13111010 - 2 Nov 2025
Viewed by 687
Abstract
Reliable fault diagnosis of milling machines is essential for maintaining operational stability and cost-effective maintenance; however, it remains challenging due to limited labeled data and the highly non-stationary nature of acoustic emission (AE) signals. This study introduces an optimized Few-Shot Learning framework (FSL) that integrates time–frequency analysis with attention-guided representation learning and distribution-aware classification for data-efficient fault detection. The framework converts AE signals into Continuous Wavelet Transform (CWT) scalograms, which are processed using a self-attention-enhanced ResNet-50 backbone to capture both local texture features and long-range dependencies in the signal. Adaptive prototype computation with learnable importance weighting refines class representations, while Mahalanobis distance-based matching ensures robust alignment between query and prototype embeddings under limited sample conditions. To further strengthen discriminability, contrastive loss with hard negative mining enforces compact intra-class clustering and clear inter-class separation. Comprehensive experiments under 7-way 5-shot settings and 5-fold stratified cross-validation demonstrate consistent and reliable performance, achieving a mean accuracy of 98.86% ± 0.97% (95% CI: [98.01%, 99.71%]). Additional evaluations across multiple spindle speeds (660 rpm and 1440 rpm) confirm that the model generalizes effectively under varying operating conditions. Grad-CAM++ activation maps further illustrate that the network focuses on physically meaningful fault-related regions, enhancing interpretability. The results verify that the proposed framework achieves robust, scalable, and interpretable fault diagnosis using minimal labeled data, offering a practical solution for predictive maintenance in modern intelligent manufacturing environments. Full article
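
Two building blocks named in the abstract, CWT scalograms of an AE signal and Mahalanobis-distance matching of a query embedding against class prototypes, can be sketched with PyWavelets, NumPy, and SciPy. The embeddings and prototypes below are random placeholders; the real pipeline uses a self-attention-enhanced ResNet-50 and learnable prototype weighting.

```python
import numpy as np
import pywt
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)

# 1) CWT scalogram of a toy acoustic-emission signal.
signal = rng.standard_normal(1024)
scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(signal, scales, "morl")
scalogram = np.abs(coeffs)                    # 64 x 1024 time-frequency image
print("scalogram shape:", scalogram.shape)

# 2) Mahalanobis matching of a query embedding against class prototypes.
dim, n_classes, shots = 16, 7, 5              # 7-way 5-shot, toy embedding size
support = {c: rng.normal(loc=c, scale=1.0, size=(shots, dim)) for c in range(n_classes)}
prototypes = {c: x.mean(axis=0) for c, x in support.items()}

pooled = np.vstack(list(support.values()))
cov_inv = np.linalg.inv(np.cov(pooled, rowvar=False) + 1e-3 * np.eye(dim))

query = rng.normal(loc=3, scale=1.0, size=dim)   # should match class 3
distances = {c: mahalanobis(query, p, cov_inv) for c, p in prototypes.items()}
print("predicted class:", min(distances, key=distances.get))
```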

26 pages, 3558 KB  
Article
Avocado: An Interpretable Fine-Grained Intrusion Detection Model for Advanced Industrial Control Network Attacks
by Xin Liu, Tao Liu and Ning Hu
Electronics 2025, 14(21), 4233; https://doi.org/10.3390/electronics14214233 - 29 Oct 2025
Viewed by 408
Abstract
Industrial control systems (ICS), as critical infrastructure supporting national operations, are increasingly threatened by sophisticated stealthy network attacks. These attacks often break malicious behaviors into multiple highly camouflaged packets, which are embedded into large-scale background traffic with low frequency, making them semantically and temporally indistinguishable from normal traffic and thus evading traditional detection. Existing methods largely rely on flow-level statistics or long-sequence modeling, resulting in coarse detection granularity, high latency, and poor byte-level interpretability, falling short of industrial demands for real-time and actionable detection. To address these challenges, we propose Avocado, a fine-grained, multi-level intrusion detection model. Avocado’s core innovation lies in contextual flow-feature fusion: it models each packet jointly with its surrounding packet sequence, enabling independent abnormality detection and precise localization. Moreover, a shared-query multi-head self-attention mechanism is designed to quantify byte-level importance within packets. Experimental results show that Avocado significantly outperforms state-of-the-art flow-level methods on NGAS and CLIA-M221 datasets, improving packet-level detection ACC by 1.55% on average, and reducing FPR and FNR to 3.2%, 3.6% (NGAS), and 3.7%, 4.3% (CLIA-M221), respectively, demonstrating its superior performance in both detection and interpretability. Full article
(This article belongs to the Special Issue Novel Approaches for Deep Learning in Cybersecurity)

18 pages, 1707 KB  
Article
DefAn: Definitive Answer Dataset for LLM Hallucination Evaluation
by A. B. M. Ashikur Rahman, Saeed Anwar, Muhammad Usman, Irfan Ahmad and Ajmal Mian
Information 2025, 16(11), 937; https://doi.org/10.3390/info16110937 - 28 Oct 2025
Viewed by 2660
Abstract
Large Language Models (LLMs) represent a major step in AI development and are increasingly used in daily applications. However, they are prone to hallucinations, generating claims that contradict established facts, deviating from prompts, and producing inconsistent responses when the same prompt is presented multiple times. Addressing these issues is challenging due to the lack of comprehensive and easily assessable benchmark datasets. Most existing datasets are limited in scale and scope and rely on multiple-choice questions, which are insufficient for evaluating the generative capabilities of LLMs. To assess hallucination in LLMs, this paper introduces a comprehensive benchmark dataset consisting of over 20,000 unique prompts (more than 75,000 prompts in total) across eight domains. These prompts are designed to elicit definitive, concise, and informative answers. The dataset is divided into two segments: one publicly available for testing and assessing LLM performance, and a hidden segment for benchmarking various LLMs. In our experiments, we tested nine State-of-The-Art (SoTA) models, GPT-4o, GPT-3.5, LLama 2 7B, LLama 3 8B, Gemini 1.0 Pro, Mixtral 8x7B, Zephyr 7B, Deepseek-r1-7b, and Qwen2.5-14B, revealing that overall factual hallucination ranges from 48% to 82% on the public dataset and 31% to 76% on the hidden benchmark. Prompt Misalignment Hallucination ranges up to 95% in the public dataset and up to 94% in the hidden counterpart. Average consistency ranges from 21% to 61% and 44% to 63%, respectively. Domain-wise analysis reveals that LLM performance significantly deteriorates when asked for specific numeric information, whereas it performs moderately with queries involving persons, locations, and dates. Our dataset demonstrates its efficacy and serves as a comprehensive benchmark for evaluating LLM performance. Full article
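
The headline metrics, factual hallucination rate and response consistency across repeats of the same prompt, reduce to simple counts once responses are matched against the definitive answer. The sketch below uses exact string matching, a simplification of the paper's evaluation, with toy prompts.

```python
from collections import Counter

def evaluate(prompt_runs, gold):
    """prompt_runs: {prompt: [response per repeat]}, gold: {prompt: definitive answer}."""
    total, hallucinated, consistent = 0, 0, 0.0
    for prompt, responses in prompt_runs.items():
        total += len(responses)
        hallucinated += sum(r.strip().lower() != gold[prompt].lower() for r in responses)
        # Consistency: share of responses agreeing with the most common response.
        most_common = Counter(r.strip().lower() for r in responses).most_common(1)[0][1]
        consistent += most_common / len(responses)
    return hallucinated / total, consistent / len(prompt_runs)

runs = {  # toy repeated prompts; answers are placeholders
    "Capital of France?": ["Paris", "Paris", "Lyon"],
    "Year of the first Moon landing?": ["1969", "1969", "1969"],
}
gold = {"Capital of France?": "Paris", "Year of the first Moon landing?": "1969"}
h_rate, consistency = evaluate(runs, gold)
print(f"factual hallucination rate = {h_rate:.2f}, average consistency = {consistency:.2f}")
```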