Search Results (396)

Search Parameters:
Keywords = intelligent check

16 pages, 2717 KB  
Article
Research on Dynamic Characteristics and Parameter Optimization of Hydro-Pneumatic Suspension of Mine Wide-Body Dump Truck
by Chuanxu Wan, Lu Xiao, Guolei Chen, Qingwei Kang, Peng Zhou, Gang Zhou and Guocong Lin
Processes 2026, 14(8), 1215; https://doi.org/10.3390/pr14081215 - 10 Apr 2026
Abstract
Wide-body dump trucks in open-pit mines frequently operate under high loads and severe road conditions, demanding superior dynamic performance from their suspension systems. Existing studies tend to focus only on the influence of individual parameters on the dynamic characteristics of hydro-pneumatic suspensions, lacking systematic analysis of parameter coupling effects and optimal parameter combinations. Taking the two-stage pressure hydro-pneumatic suspension of a wide-body dump truck as the research object, this paper theoretically analyzes its working characteristics and establishes an AMESim model under multiple excitation conditions to reveal how parameter interactions affect the dynamic performance of the suspension. With peak liquid pressure, maximum liquid pressure fluctuation, and maximum vehicle body vertical acceleration as optimization objectives, a multi-objective optimization algorithm is employed to determine the optimal suspension parameters. The results indicate that the interactive responses of damping orifice diameter and check valve diameter with respect to peak pressure and body vertical acceleration exhibit strong nonlinearity. Compared with the original parameter scheme, the optimized design reduces peak liquid pressure, maximum pressure fluctuation, and peak body vertical acceleration by 8.76%, 29.1%, and 11.7%, respectively, significantly improving vehicle ride comfort and mitigating pressure oscillations in the hydro-pneumatic suspension. The findings provide theoretical support and an engineering reference for the intelligent operation and maintenance of heavy mining equipment, the design optimization of suspension systems, and their efficient, reliable operation. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
25 pages, 2957 KB  
Article
Automating the Detection of Evasive Windows Malware: An Evaluated YARA Rule Library for Anti-VM and Anti-Sandbox Techniques
by Sebastien Kanj, Gorka Vila and Josep Pegueroles
J. Cybersecur. Priv. 2026, 6(2), 69; https://doi.org/10.3390/jcp6020069 - 8 Apr 2026
Abstract
Anti-analysis techniques, also known as evasive techniques, enable Windows malware to detect and evade dynamic inspection environments, undermining the effectiveness of virtual-machine and sandbox-based inspection. Despite extensive prior research, no unified classification has been paired with a large-scale empirical evaluation of static detection capabilities for these behaviors. This paper addresses this gap by presenting a comprehensive classification and detection framework. We consolidate 94 anti-analysis techniques from academic, community, and threat-intelligence sources into nine mechanistic categories and derive corresponding YARA rules for static identification. In total, 82 YARA signatures were authored or refined and evaluated on 459,508 malware and 92,508 goodware samples. After iterative refinement using precision thresholds, 42 rules achieved high accuracy (≥75%), 16 showed moderate precision (50–75%), and 24 were discarded due to unreliability. The results indicate strong static detectability for firmware- and BIOS-based checks, but limited precision for timing-based evasions, which frequently overlap with benign behavior. Although YARA provides broad coverage of observable artifacts, its static nature limits detection under obfuscation or runtime mutation; our measurements therefore represent conservative estimates of technique prevalence. All validated rules are released in an open-source repository to support reproducibility, improve incident-response workflows, and strengthen prevention and mitigation against real-world threats. Future work will explore hybrid validation, container-evasion extensions, and forensic attribution based on signature co-occurrence patterns. Full article
(This article belongs to the Special Issue Intrusion/Malware Detection and Prevention in Networks—2nd Edition)

19 pages, 479 KB  
Article
Educating for Complexity: A Learning Architecture for Systems Thinking in Professional Education and Generative AI Governance
by Liliana Pedraja-Rejas, Katherine Acosta-García, Emilio Rodríguez-Ponce and Camila Muñoz-Fritis
Systems 2026, 14(4), 403; https://doi.org/10.3390/systems14040403 - 7 Apr 2026
Abstract
Professional education increasingly requires graduates to make decisions in complex systems marked by multiple stakeholders, feedback, delays, uncertainty, and unintended consequences, yet systems thinking is still often taught as a set of disconnected tools rather than as an integrated professional practice. This conceptual paper adopts an integrative theory-building approach to develop a unified architecture for systems thinking in professional education, drawing purposively on systems traditions, practice-based learning, assessment scholarship, and emerging work on generative artificial intelligence (GenAI). The paper proposes four iterative practices (sensemaking and boundary setting, co-modelling and causal representation, intervention reasoning, and meta-learning) as the core architecture for learning systems thinking in professional contexts. It further translates this architecture into indicative implications for curriculum sequencing, authentic tasks, and assessment, while positioning GenAI as a cross-cutting support/risk layer that can assist iteration and critique but also introduce predictable risks such as fabricated causal links, overreliance, and false mastery. To address these risks, the paper outlines governance conditions based on traceability, uncertainty checks, stakeholder validation, and process-based assessment. Overall, the framework provides a design-oriented basis for teaching, assessing, and governing systems thinking in contemporary professional education and a foundation for future empirical testing. Full article
(This article belongs to the Special Issue Systems Thinking in Education: Learning, Design and Technology)

26 pages, 1113 KB  
Article
Unlocking Green Growth: How Artificial Intelligence Policies Enhance Green Economic Efficiency—Evidence from China
by Shangqing Jiang, Da Gao and Xinyu Zhang
Sustainability 2026, 18(7), 3581; https://doi.org/10.3390/su18073581 - 6 Apr 2026
Abstract
With growing environmental pressure and tightening resource constraints, artificial intelligence has become a key technical path for urban low-carbon transformation. This study aims to empirically examine whether and how AI-oriented pilot policies affect green economic efficiency (GEE) and identify its underlying mechanisms and boundary conditions. Taking China’s National New-Generation Artificial Intelligence Innovation Development Pilot Zone (NAIDPZ) as a quasi-natural experiment, we use a staggered difference-in-differences model to test the policy effect based on panel data of 267 Chinese prefecture-level cities from 2007 to 2023, with a series of robustness checks to ensure the reliability of the conclusion. We find that the NAIDPZ policy significantly improves urban GEE, with a stronger effect in inland, central, and non-resource-based cities. The composite NAIDPZ policy effect is associated with higher GEE, mainly through green technological innovation and industrial structure optimisation, while its impact is positively moderated by government attention and public environmental attention. These conclusions provide empirical reference for global governments to optimise artificial intelligence policies for low-carbon development. Full article
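The staggered difference-in-differences design described above rests on comparing outcome changes in treated versus untreated cities. A minimal two-group, two-period sketch in Python illustrates the core estimator; the function name and toy numbers are invented for illustration and are not the authors' staggered specification:

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Two-group, two-period difference-in-differences on group means:
    (treated post - treated pre) - (control post - control pre)."""
    def mean(xs):
        return sum(xs) / len(xs)
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change

# Toy panel: treated cities improve by 8 on average, controls by 3,
# so the estimated policy effect is 5.
effect = did_estimate([10, 12], [18, 20], [9, 9], [12, 12])
```

Subtracting the control group's change nets out the common time trend, which is why the parallel-trend assumption the paper tests is essential.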

33 pages, 3024 KB  
Article
Design and Implementation of a Sustainable Engineering Education Model Based on the Integration of Lean Management Within Outcome-Based Engineering Education (OBEE): A Performance-Driven Approach
by Fatima-Ezzahra Afif and Fatima Bouyahia
Sustainability 2026, 18(7), 3515; https://doi.org/10.3390/su18073515 - 3 Apr 2026
Abstract
Outcome-Based Engineering Education (OBEE), a performance-driven approach at the forefront of curriculum design, offers a reliable and scalable framework for reforming engineering education. This research examines the industrial and logistics engineering major at the National School of Applied Sciences of Marrakesh as a case study to develop and implement a new hybrid model that merges the OBEE approach and Lean Management principles and methods through five layers. This paper presents the second and third layers of the Lean-OBEE architecture: the Target layer and Assessment layer, respectively. The target layer employs Hoshin Kanri’s X-Matrix in the OBEE process as a Lean strategic planning tool for visual and efficient management of the educational outcomes. Teachers and academic staff used the X-Matrix to monitor the unfolding of strategic educational objectives and progress throughout the course and curriculum. The assessment layer integrates a set of Lean principles, including PDCA (Plan-Do-Check-Act) cycles, Poka-Yoke, Flow, Muri, Standard Work, Takt Time, and Collective Intelligence, to design and assess the course session. The findings of this study provide preliminary evidence that the proposed Lean-OBEE model supports the development of sustainable engineering education by continuously improving the relevance and efficiency of the curriculum and teaching practices to meet the dynamic needs of industry and all stakeholders. This study serves as a practical reference for achieving the stated outcomes. Full article

31 pages, 1954 KB  
Article
HASCom: A Heterogeneous Affective-Semantic Communication Framework for Speech Transmission
by Zhenjia Yu, Taojie Zhu, Md Arman Hossain, Zineb Zbarna and Lei Wang
Sensors 2026, 26(7), 2158; https://doi.org/10.3390/s26072158 - 31 Mar 2026
Abstract
Driven by the development of next-generation wireless networks and the widespread adoption of sensing, communication is shifting from traditional bit-level transmission to intelligent, rich interactions within our digital social system. However, existing speech semantic communication frameworks predominantly focus on textual accuracy, neglecting the critical affective information (e.g., tone and emotion) that is essential for natural human-centric interactions in the real world. To address this limitation, we propose the Heterogeneous Affective Speech Semantic Communication (HASCom) framework, designed for the robust transmission of highly expressive speech over complex wireless channels. Specifically, we design a heterogeneous dual-stream transmission architecture that decouples discrete phoneme-level linguistic content from continuous emotional embeddings. For discrete semantic information, we use reliable digital coding protected by Low-Density Parity-Check (LDPC) to guarantee strict recoverability. Conversely, for emotional features, we employ Deep Joint Source-Channel Coding (JSCC) analog transmission to prevent irreversible quantization errors and the cliff effect. Additionally, we develop a prior-guided diffusion reconstruction module at the receiving end. This module leverages a structural prior network to align the decoded semantics, which then steers the reverse diffusion process conditioned on the recovered affective features. Extensive experiments under both AWGN and Rayleigh fading channels demonstrate that HASCom significantly outperforms state-of-the-art baselines. Specifically, it achieves superior objective semantic similarity and subjective Mean Opinion Score (MOS) at low Signal-to-Noise Ratios (SNRs), while the JSCC transmission modules maintain an ultra-low inference latency of less than 0.1 ms, validating its high efficiency and robustness for practical deployments. Full article

18 pages, 1570 KB  
Article
A Study on Broker-Assisted Blockchain Trust Chains for Provenance and Integrity Verification of Generative Media Using Watermarking, Semantic Fingerprinting, and C2PA
by Chaelin Yang and Minchul Kim
Appl. Sci. 2026, 16(7), 3391; https://doi.org/10.3390/app16073391 - 31 Mar 2026
Abstract
The widespread availability of generative artificial intelligence has increased the volume of images and videos shared online, while making it difficult to verify origin and integrity after routine post-processing such as re-encoding, resizing, and transcoding. This research proposes a broker-assisted trust chain architecture that treats authenticity verification as an evidence registration and validation workflow rather than a single-signal decision. A trust chain broker seals submitted media by embedding a robust hidden watermark, deriving an embedding-based semantic fingerprint, and producing standardized provenance metadata, then stores the sealed media off-chain using content-addressed storage and anchors only compact evidence on an immutable ledger. The anchored evidence binds the content identifier of the sealed artifact with semantic and provenance hashes, timestamps, and the broker signature, while scalable candidate discovery is supported through an off-chain Facebook AI Similarity Search (FAISS)-based nearest-neighbor similarity index. We evaluate the retrieval stage on a COCO 2017 validation subset (N = 200) under representative post-processing transformations (JPEG compression, resizing, and center cropping), and observe near-perfect candidate identification performance with Recall@1 = 0.9988 and Recall@5/10 = 1.000. During verification, the broker retrieves candidates by embedding similarity, validates ledger inclusion and broker signatures, applies consistency checks across evidence fields, and issues an operational verdict with a signed verification report that is independently checkable. We also implement an EVM-based proof-of-concept for on-chain anchoring and report low ledger-side overhead for a representative registration transaction (gasUsed = 25,380) when recording fixed-size compact evidence fields. 
The proposed architecture does not prevent copying itself, but improves traceability and auditability under realistic transformation and redistribution conditions by combining watermarking, semantic association, provenance binding, and tamper-evident evidence anchoring within a clear service accountability boundary. Full article
(This article belongs to the Special Issue Advanced Blockchain Technologies and Their Applications)
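The candidate-discovery step in this abstract ranks stored semantic fingerprints by embedding similarity. A brute-force cosine search in plain Python can stand in for the FAISS index as a rough illustration; the function names, content identifiers, and toy vectors are invented, not the paper's code:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query_emb, index, k=5):
    """Return the k stored (content_id, similarity) pairs closest to the query."""
    ranked = sorted(index.items(),
                    key=lambda item: cosine(query_emb, item[1]),
                    reverse=True)
    return [(cid, cosine(query_emb, emb)) for cid, emb in ranked[:k]]

# Tiny toy index of semantic fingerprints keyed by content identifier.
index = {"cid-a": [1.0, 0.0], "cid-b": [0.0, 1.0], "cid-c": [0.9, 0.1]}
candidates = top_k([1.0, 0.0], index, k=2)
```

FAISS exists to make this lookup sublinear at scale; the ranking logic is the same, which is what the Recall@k numbers in the abstract measure.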

31 pages, 3515 KB  
Article
Improving Deep Learning Based Lung Nodule Classification Through Optimized Adaptive Intensity Correction
by Saba Khan, Muhammad Nouman Noor, Haya Mesfer Alshahrani, Wided Bouchelligua and Imran Ashraf
Bioengineering 2026, 13(4), 396; https://doi.org/10.3390/bioengineering13040396 - 29 Mar 2026
Abstract
Lung cancer is one of the most common causes of death from cancer around the world, and catching it early through computed tomography (CT) scans can drastically improve survival. However, automated classification of pulmonary nodule candidates is hard because images do not all have the same intensity across scanners and protocols, resulting in inconsistent performance, more false positives (FPs), and a ceiling on how well deep learning models work in an average clinic. In this work, we tackle this by introducing a preprocessing step that corrects intensity differences before feeding images into classification models. We use Contrast-Limited Adaptive Histogram Equalization (CLAHE), but with its key parameters tuned automatically via a modified version of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). This helps to boost local contrast adaptively, keeps important anatomical details intact, and cuts down on noise. We tested the approach on the public LUNA16 dataset, first checking image quality (Peak Signal-to-Noise Ratio (PSNR) around 53 dB and Structural Similarity Index (SSIM) of 0.9, better than standard methods), then training three popular deep models—namely, ResNet-50, EfficientNet-B0, and InceptionV3—with CutMix augmentation for better generalization. On the enhanced images, ResNet-50 achieved up to 99.0% classification accuracy with substantially fewer FPs than when using the raw scans. Taken together, these results demonstrate that intelligent, optimized preprocessing can effectively mitigate intensity variations in deep learning-based lung nodule detection, bringing the practical toolbox of computer-aided diagnosis closer to routine clinical practice. Full article
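For context on the quality figure quoted above, PSNR for 8-bit images is a simple function of mean squared error; a minimal definition in Python (an illustrative textbook formula, not the authors' evaluation code):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between two equal-length pixel sequences:
    10 * log10(max_val^2 / MSE)."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Off-by-one noise on two of three pixels gives roughly 49.9 dB.
value = psnr([0, 128, 255], [1, 128, 254])
```

Higher is better: the ~53 dB reported corresponds to a very small mean squared error relative to the 0-255 intensity range.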

42 pages, 1314 KB  
Review
Ginger Bioactives as Multi-Target Therapeutics: Mechanisms, Delivery Innovation, and Human Health Impact
by Pasquale Simeone, Francesca Martina Filannino, Antonia Cianciulli, Maria Ida de Stefano, Melania Ruggiero, Teresa Trotta, Antonella Compierchio, Tarek Benameur, Rosa Calvello, Amal Ferchichi, Chiara Porro and Maria Antonietta Panaro
Nutrients 2026, 18(7), 1079; https://doi.org/10.3390/nu18071079 - 27 Mar 2026
Abstract
Background/Objectives: Ginger has a long history as both a culinary and medicinal plant and is widely recognized in traditional medicine for its ability to promote health and well-being. The principal bioactive compounds of ginger are present in fresh and dried forms and have been largely studied for their therapeutic potential. These compounds exhibit a wide range of biological activities mediated through various mechanisms. Advances in nanotechnology have enabled the development of innovative delivery systems, thereby enhancing the bioavailability and therapeutic efficacy of ginger-derived compounds in modern medical applications. Methods: A comprehensive literature review was conducted to evaluate the characteristics of ginger and its potential role in disease prevention. Relevant studies were identified through the main research databases, publication screening, manual reference checks, and author consensus. Results: This narrative review provides an overview of the therapeutic potential of bioactive compounds in ginger for the management and prevention of cardiovascular, neurodegenerative, and gastrointestinal diseases, as well as arthritis, with particular emphasis on the molecular mechanisms. In addition, their potential anti-aging properties are extensively discussed. The evidence reported is predominantly preclinical (in vitro and in vivo models), with more limited and heterogeneous clinical data. Recent studies have also highlighted the role of artificial intelligence (AI) in accelerating the discovery and evaluation of bioactive agents with therapeutic relevance across diverse biological systems. Conclusions: This review highlights the emerging applications of ginger extracts in human health and supports their use in both traditional medicine and contemporary drug discovery. Full article
(This article belongs to the Special Issue Bioactive Ingredients in Plants Related to Human Health—2nd Edition)

10 pages, 232 KB  
Entry
Artificial Intelligence Literacy and Competency in Pre-Service Teacher Education
by Hsiao-Ping Hsu
Encyclopedia 2026, 6(4), 76; https://doi.org/10.3390/encyclopedia6040076 - 27 Mar 2026
Definition
Artificial Intelligence (AI) literacy and competency in pre-service teacher education refer to a programme-level implementation that enables teachers to work with AI systems effectively, critically, and ethically across university coursework, school placements, and early-career practice. This includes not only capability, but also professional enactment, where teachers apply AI-related knowledge in context-sensitive and pedagogically grounded ways. AI literacy refers to a shared knowledge base for understanding how AI systems generate outputs, how to evaluate and verify AI-supported information, and how to reason about task–tool fit in relation to fairness, privacy, transparency, accountability, academic integrity, equity, and environmental sustainability. AI competency refers to the application of this literacy in routine professional tasks, such as designing and justifying AI-informed teaching, learning, and assessment, protecting students’ and school data, documenting decisions, and revising AI-supported materials after checking for reliability, transparency, accountability, and equity. Together, literacy and competency extend beyond personal use of AI by preparing future teachers to support students’ creative, critical, and ethical engagement with AI, while keeping classroom practice aligned with educational goals, objectives, and values. Full article
(This article belongs to the Collection Encyclopedia of Social Sciences)
28 pages, 8905 KB  
Article
A Deep Recurrent Learning Framework for Multi-Class Microgrid Fault Classification Using LSTM and Bi-LSTM Models
by Rakesh Sahu, Pratap Kumar Panigrahi, Deepak Kumar Lal, Rudranarayan Pradhan and Chandrakanta Mahanty
Eng 2026, 7(3), 143; https://doi.org/10.3390/eng7030143 - 23 Mar 2026
Abstract
Fault detection in microgrids is a critical element of system stability and uninterrupted power delivery. Herein, a comparative study using LSTM and bidirectional LSTM networks is performed based on three-phase current data for multi-class fault classification. Five major fault types, namely LG, LL, LLG, LLL, and LLLG, were simulated using a Real-Time Digital Simulator (RTDS) under grid-connected and islanded modes. Collected current signals were preprocessed, normalized, and segmented for sequence learning. Both models were then trained with the best hyperparameter settings to classify faults. Evaluation metrics such as accuracy, precision, recall, F1-score, and ROC-AUC were calculated to measure how well the models identified faults. The results revealed that the Bi-LSTM consistently outperformed the LSTM and classical machine learning models, with more than 99% accuracy for most fault types. More importantly, the proposed framework also checked classification performance for LLLG faults, with the Bi-LSTM model achieving a test accuracy of 98.8%. These results confirm that the Bi-LSTM model can robustly and precisely classify and detect faults in real time within specific phases of microgrids; therefore, it provides a scalable foundation for the development of intelligent protection in smart power systems. Full article
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications, 2nd Edition)

21 pages, 333 KB  
Article
Artificial Truth: Algorithmic Power, Epistemic Authority, and the Crisis of Democratic Knowledge
by Rosario Palese
Societies 2026, 16(3), 102; https://doi.org/10.3390/soc16030102 - 23 Mar 2026
Abstract
This article examines how artificial intelligence and algorithmic systems are reconfiguring truth regimes in digital societies, introducing the concept of “Artificial Truth” to describe an emerging form of epistemic governance where knowledge production and validation become infrastructural functions of sociotechnical systems. The study develops an integrated theoretical framework combining Foucault’s notion of truth regimes, Bourdieu’s theory of symbolic capital and fields, and Actor-Network Theory’s constructivist approach. Through conceptual analysis, the article investigates how algorithmic recommendation systems, generative AI, and automated fact-checking operate as epistemic devices that actively shape what is recognized as credible, authoritative, and true in public discourse. The analysis reveals three fundamental transformations: (1) the restructuring of trust economies, with epistemic authority shifting from institutional expertise to platform-native capital based on engagement metrics and affective proximity; (2) the emergence of generative AI as an epistemic actor producing “synthetic truth” through linguistic fluency rather than propositional understanding; (3) the institutionalization of computational veridiction in algorithmic fact-checking systems that translate situated epistemic judgments into probabilistic classifications presented as neutral. These dynamics configure a regime where truth is evaluated less by correspondence with reality and more by computational plausibility and platform integration. The article’s primary contribution lies in providing a unified theoretical framework for understanding contemporary transformations of epistemic authority, moving beyond disinformation studies to analyze AI as an epistemic actor. 
By integrating classical sociological perspectives with Science and Technology Studies, it conceptualizes algorithmic systems as epistemic infrastructures that embody specific power relations, restructure symbolic capital economies, and distribute epistemic authority asymmetrically, with profound implications for democratic knowledge, citizen epistemic agency, and public sphere pluralism. Full article
37 pages, 2886 KB  
Article
A Zero-Touch Vulnerability Remediation Framework Based on OpenVAS, Threat Intelligence, and RAG-Enhanced Large Language Models
by Cheng-Hui Hsieh, Chen-Yi Cheng and Yung-Chung Wang
Mathematics 2026, 14(6), 1072; https://doi.org/10.3390/math14061072 - 22 Mar 2026
Abstract
Vulnerability disclosures are outpacing manual remediation capacity. We present a Zero-Touch Vulnerability Remediation Framework combining OpenVAS scanning, multi-source threat intelligence, and Large Language Models (LLMs) enhanced through Retrieval-Augmented Generation (RAG). The Scanning Layer normalizes findings into structured JSON; the AI Decision Layer applies hybrid FAISS + BM25 retrieval, dual-LLM verification (a primary generator checked by a gpt-4o auxiliary verifier), and confidence-based routing; the Orchestration Layer executes validated patches via CI/CD pipelines with automated rollback. On 350 real-world vulnerability cases across five GPT-family models, the full Prompt + RAG pipeline raised accuracy from 52.0% to 76.7–82.6% (all p < 0.001, Cohen’s h = 0.51–0.68) and reduced hallucination from 23.4% to 7.8%. Confidence routing routed 34.9% of cases to the high-confidence auto-execution tier, yielding a 4.1% rollback rate and zero service outages. The framework addresses the most relevant categories of the OWASP LLM Top 10 and lays groundwork for enterprise-scale, Zero-Touch vulnerability management. Full article
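The hybrid retrieval this abstract mentions pairs dense (FAISS) vectors with sparse (BM25) keyword scoring. The sparse side can be sketched with a plain Okapi BM25 scorer; this is a textbook formulation with invented toy documents, not the framework's code:

```python
import math

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document against the query terms."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n  # average document length
    scores = []
    for doc in docs:
        score = 0.0
        for term in set(query_terms):
            df = sum(1 for d in docs if term in d)  # document frequency
            if df == 0:
                continue  # term absent from the corpus
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            tf = doc.count(term)
            norm = tf + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf * (k1 + 1) / norm
        scores.append(score)
    return scores

# Toy corpus of tokenized scan findings.
docs = [["cve", "openssl", "patch"], ["kernel", "update"], ["cve", "cve", "patch"]]
scores = bm25_scores(["cve", "patch"], docs)
```

The k1 and b parameters control term-frequency saturation and length normalization; combining these lexical scores with embedding similarity is what makes the retrieval "hybrid".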
(This article belongs to the Section E1: Mathematics and Computer Science)
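The hybrid FAISS + BM25 retrieval described in the abstract is commonly realized by fusing the two ranked candidate lists. A minimal sketch of one such fusion scheme, reciprocal rank fusion (RRF), is shown below; the document IDs and the choice of RRF (rather than the paper's exact fusion method, which the abstract does not specify) are illustrative assumptions.

```python
# Hypothetical sketch of hybrid retrieval score fusion via reciprocal
# rank fusion (RRF). Assumes two ranked, best-first candidate lists:
# one from a dense (FAISS-style) vector index, one from BM25.
# The CVE-style IDs are placeholders, not from the paper.

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of doc IDs into one ranking.

    A document's fused score is the sum of 1 / (k + rank) over every
    list it appears in; higher fused score ranks first.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense_hits = ["cve-2024-001", "cve-2023-777", "cve-2022-123"]
bm25_hits = ["cve-2024-001", "cve-2021-999", "cve-2023-777"]
fused = reciprocal_rank_fusion([dense_hits, bm25_hits])
```

A document ranked highly by both retrievers (here `cve-2024-001`) dominates the fused list, which is the property that makes hybrid retrieval useful for grounding the downstream LLM.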
23 pages, 1290 KB  
Article
Artificial Intelligence and Corporate Sustainability: Evidence from China’s National Artificial Intelligence Innovation and Development Pilot Zone Policy
by Yu Sang, Kannan Loganathan and Lu Lin
Sustainability 2026, 18(6), 3113; https://doi.org/10.3390/su18063113 - 22 Mar 2026
Viewed by 320
Abstract
Artificial intelligence (AI) is increasingly reshaping corporate production and governance, raising the question of how policy can steer corporations toward sustainable development. This study treats the staggered implementation of China’s National Artificial Intelligence Innovation and Development Pilot Zone policy (AI Pilot Zone policy) as a quasi-natural experiment. Using data from Chinese listed companies from 2014 to 2024, we employ a multi-period difference-in-differences approach to identify the impact of the policy on corporate sustainable development performance (SDP) and to explore the underlying mechanisms. The results show that the AI Pilot Zone policy significantly improves corporate SDP, and this finding remains robust to a series of checks, including parallel trend tests, placebo tests, PSM-DID estimations, and tests addressing potential biases under staggered policy adoption. Heterogeneity analysis based on the TOE framework indicates that the policy effect is more pronounced among firms with higher R&D intensity, stronger internal control, and those located in regions with higher levels of digital inclusive finance. Mechanism analysis further suggests that dynamic capabilities, including innovation capability, adaptation capability, and absorptive capability, play important mediating roles in the relationship between the policy and corporate SDP. Overall, this study provides micro-level evidence on the sustainability effects of AI-oriented public policies and offers insights for improving policy design and corporate capability development. Full article
(This article belongs to the Topic Artificial Intelligence and Sustainable Development)
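The multi-period difference-in-differences design builds on a simple 2x2 comparison: the treated group's change over time minus the control group's change, which nets out the common trend. A toy sketch with made-up outcome values (not the paper's data) illustrates the building block:

```python
# A minimal sketch of the difference-in-differences building block.
# The outcome values below are invented for illustration; the paper's
# staggered multi-period estimator generalizes this 2x2 comparison
# across firms treated at different times.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DID = (treated change over time) - (control change over time)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical SDP scores before/after the pilot-zone rollout:
effect = did_estimate(treated_pre=0.50, treated_post=0.62,
                      control_pre=0.48, control_post=0.53)
# The control group's 0.05 gain proxies the common time trend, so
# the remaining 0.07 is attributed to the policy.
```

The robustness checks the abstract lists (parallel trends, placebo tests, PSM-DID) probe exactly the assumptions this subtraction relies on.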
15 pages, 671 KB  
Article
Model Checking in Federated Learning-Based Smart Advertising
by Rasool Seyghaly, Jordi Garcia and Xavi Masip-Bruin
J. Sens. Actuator Netw. 2026, 15(2), 29; https://doi.org/10.3390/jsan15020029 - 20 Mar 2026
Viewed by 336
Abstract
As social networks continue to expand, smart advertising increasingly depends on machine learning to deliver personalized and effective advertisements. Federated Learning (FL) is a distributed learning paradigm that supports privacy-preserving advertising by training models locally while avoiding direct sharing of raw user data. However, ensuring the correctness, reliability, and operational robustness of FL-driven smart advertising systems remains a significant challenge, particularly in distributed and user-facing environments. In this study, we investigate the use of model checking as a formal verification technique for validating key properties of an FL-based smart advertising workflow in social networks. We combine a structured finite-state modeling approach with Linear Temporal Logic (LTL) specifications and model-checking tools to assess correctness, availability, and baseline privacy requirements. Using controlled simulation-based configurations, we show that, for a setup with 100 users and 20 edge servers, the system delivers advertisements to all users and the global model successfully processes 200 out of 200 requests. We further analyze verification overhead through detection-time measurements, observing an increase in average detection time from 10.05 s to 11.98 s as the number of users rises from 20 to 100. These results indicate that the proposed framework can provide practical assurance for FL-enabled smart advertising workflows, support more reliable deployment in distributed intelligent systems, and improve trustworthiness in real advertising applications. Full article
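The model-checking approach in the abstract rests on exhaustively exploring a finite-state model and checking properties over every reachable state. A toy explicit-state checker for a safety invariant gives the flavor; the state space and transition relation below are illustrative, not the paper's actual FL advertising model, and full LTL checking (with liveness properties like "every request is eventually served") requires the dedicated tools the authors use.

```python
# Toy explicit-state safety checker: BFS over all reachable states,
# returning the first state that violates an invariant, or None.
# The advertising-workflow model here is an invented illustration.
from collections import deque

def check_invariant(initial, transitions, invariant):
    """Explore reachable states breadth-first; report a violation."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# State: (requests_received, ads_served), with a small request bound
def transitions(state):
    received, served = state
    succ = []
    if received < 3:
        succ.append((received + 1, served))   # new ad request arrives
    if served < received:
        succ.append((received, served + 1))   # system serves a request
    return succ

# Safety invariant: the system never serves more ads than requested
violation = check_invariant((0, 0), transitions, lambda s: s[1] <= s[0])
```

If `violation` is `None`, the invariant holds on every reachable state; a real checker such as SPIN additionally handles temporal (LTL) properties over infinite executions.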