Search Results (1,958)

Search Parameters:
Keywords = language environment

25 pages, 4324 KB  
Review
2000–2025: A Quarter of a Century of Studies on Pet Ownership in the Amazon—Epidemiological Implications for Public Health
by Coline J. Vanderhooft, Eduardo A. Díaz, Carolina Sáenz and Victor Lizana
Pathogens 2026, 15(1), 77; https://doi.org/10.3390/pathogens15010077 - 10 Jan 2026
Abstract
Anthropogenic pressures in the Amazon Basin are reshaping human–animal–environment interactions and increasing zoonotic disease risk. Within this One Health context, domestic dogs and cats are underrecognized contributors to pathogen circulation at the human–wildlife interface. We conducted a PRISMA-compliant systematic review of zoonotic pathogens reported in companion animals across Amazonian territories in nine countries, including literature published between 2000 and 2025 in four languages. Zoonotic pathogens showed a heterogeneous yet widespread distribution, with parasitic infections, particularly Leishmania spp., Toxoplasma gondii, and vector-borne protozoa, being the most frequently reported. A pronounced geographic bias was evident, with studies concentrated in Brazil and selected areas of the western Amazon, while large portions of the Basin remain understudied. Methodological limitations included reliance on cross-sectional designs and heterogeneous diagnostic approaches, often based solely on serology. These findings highlight the need to strengthen One Health-oriented governance frameworks that integrate animal health surveillance into environmental and public health policies. Priority actions include expanding surveillance to underrepresented regions, harmonizing diagnostic protocols, investing in regional laboratory capacity, and promoting community-based monitoring. Strengthened cross-sectoral and transboundary coordination is essential to reduce zoonotic risk and support evidence-based disease prevention in Amazonian ecosystems. Full article
28 pages, 1344 KB  
Article
Tiny Language Model Guided Flow Q Learning for Optimal Task Scheduling in Fog Computing
by Bhargavi K and Sajjan G. Shiva
Algorithms 2026, 19(1), 60; https://doi.org/10.3390/a19010060 - 10 Jan 2026
Abstract
Fog computing is a rapidly growing platform facing exponentially increasing demand for real-time data processing. The fog computing market is expected to reach USD 8358 million by 2030, at a compound annual growth rate of 50%. The wide adoption of fog computing by industries worldwide is due to advantages such as reduced latency, high operational efficiency, and strong data privacy. At the same time, the highly distributed and heterogeneous nature of fog computing leads to significant challenges in resource management, data security, task scheduling, data privacy, and interoperability. A task typically represents a job generated by an IoT device, and an action indicates the way a task is executed, a decision taken by the scheduler. Task scheduling is one of the prominent issues in fog computing: tasks must be distributed among fog devices effectively so that resources are well utilized and the Quality of Service (QoS) requirements of applications are met. Improper task scheduling leads to increased execution time, overutilization of resources, data loss, and poor scalability. Proper task scheduling is therefore needed to make optimal task distribution decisions in a highly dynamic, resource-constrained, heterogeneous fog computing environment. Flow Q learning (FQL) is a promising reinforcement learning algorithm that uses a flow-matching policy for the action distribution. It can handle complex forms of data and multimodal action distributions, which makes it suitable for the highly volatile fog computing environment. However, flow Q learning struggles to achieve a proper trade-off between an expressive flow model and reduced Q-function bias, as it relies on a one-step optimization policy that introduces bias into the estimated Q-function value.
The Tiny Language Model (TLM) is a significantly smaller form of a Large Language Model (LLM), designed to operate in device-constrained environments. It can provide fair and systematic guidance to disproportionately biased deep learning models. In this paper, a novel TLM-guided flow Q learning framework is designed to address the task scheduling problem in fog computing. The neutrality and fine-tuning capability of the TLM are combined with the fast generation ability of the FQL algorithm. The framework is simulated using the Simcan2Fog simulator, considering the dynamic nature of the fog environment under finite and infinite resources. Performance is good with respect to parameters such as execution time, accuracy, response time, and latency. Further, the results are validated using the expected value analysis method and found to be satisfactory. Full article
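The scheduling problem this abstract describes can be illustrated with a much simpler stand-in: plain tabular Q-learning that learns a task-to-node assignment from execution-time rewards. The task sizes, node speeds, reward definition, and hyperparameters below are invented for illustration; the paper's actual method (TLM-guided flow Q learning evaluated in Simcan2Fog) is far more elaborate.

```python
import random

def schedule_tasks_q_learning(task_sizes, node_speeds, episodes=2000,
                              alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Learn a task -> node assignment from execution-time rewards.

    State = index of the task being scheduled; action = node to run it on;
    reward = negative execution time (size / speed). Deliberately plain
    tabular Q-learning, a simplified stand-in for flow Q learning (FQL),
    not a reimplementation of it.
    """
    rng = random.Random(seed)
    n_tasks, n_nodes = len(task_sizes), len(node_speeds)
    q = [[0.0] * n_nodes for _ in range(n_tasks)]
    for _ in range(episodes):
        for t in range(n_tasks):
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(n_nodes)
            else:
                a = max(range(n_nodes), key=lambda k: q[t][k])
            reward = -task_sizes[t] / node_speeds[a]
            nxt = max(q[t + 1]) if t + 1 < n_tasks else 0.0
            q[t][a] += alpha * (reward + gamma * nxt - q[t][a])
    # greedy policy after training: best node per task
    return [max(range(n_nodes), key=lambda k: q[t][k]) for t in range(n_tasks)]
```

With two nodes where the second is twice as fast, the learned policy routes every task to the faster node; a realistic scheduler would add queueing delay and load penalties to the reward so that tasks spread across nodes.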
22 pages, 2421 KB  
Article
Application of Large Language Models in the Protection of Industrial IoT Systems for Critical Infrastructure
by Anna Manowska and Jakub Syta
Appl. Sci. 2026, 16(2), 730; https://doi.org/10.3390/app16020730 - 10 Jan 2026
Abstract
The increasing digitization of critical infrastructure and the increasing use of Industrial Internet of Things (IIoT) systems are leading to a significant increase in the exposure of operating systems to cyber threats. The integration of information (IT) and operational (OT) layers, characteristic of today’s industrial environments, results in an increase in the complexity of system architecture and the number of security events that require ongoing analysis. Under such conditions, classic approaches to monitoring and responding to incidents prove insufficient, especially in the context of systems with high reliability and business continuity requirements. The aim of this article is to analyze the possibilities of using Large Language Models (LLMs) in the protection of industrial IoT systems operating in critical infrastructure. The paper analyzes the architecture of industrial automation systems and identifies classes of cyber threat scenarios characteristic of IIoT environments, including availability disruptions, degradation of system operation, manipulation of process data, and supply-chain-based attacks. On this basis, the potential roles of large language models in security monitoring processes are examined, particularly with respect to incident interpretation, correlation of heterogeneous data sources, and contextual analysis under operational constraints. The experimental evaluation demonstrates that, when compared to a rule-based baseline, the LLM-based approach provides consistently improved classification of incident impact and attack vectors across IT, DMZ, and OT segments, while maintaining a low rate of unsupported responses. These results indicate that large language models can complement existing industrial IoT security mechanisms by enhancing context-aware analysis and decision support rather than replacing established detection and monitoring systems. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in the IoT)
17 pages, 1588 KB  
Article
Integrating Contextual Causal Deep Networks and LLM-Guided Policies for Sequential Decision-Making
by Jong-Min Kim
Mathematics 2026, 14(2), 269; https://doi.org/10.3390/math14020269 - 7 Jan 2026
Abstract
Sequential decision-making is critical for applications ranging from personalized recommendations to resource allocation. This study evaluates three decision policies—Greedy, Thompson Sampling (via Monte Carlo Dropout), and a zero-shot Large Language Model (LLM)-guided policy (Gemini-1.5-Pro)—within a contextual bandit framework. To address covariate shift and assess subpopulation performance, we utilize a Collective Conditional Diffusion Network (CCDN) where covariates are partitioned into B=10 homogeneous blocks. Evaluating these policies across a high-dimensional treatment space (K=5, resulting in 2^5 = 32 actions), we tested performance in a simulated environment and three benchmark datasets: Boston Housing, Wine Quality, and Adult Income. Our results demonstrate that the Greedy strategy achieves the highest Model-Relative Optimal (MRO) coverage, reaching 1.00 in the Wine Quality and Adult Income datasets, though performance drops significantly to 0.05 in the Boston Housing environment. Thompson Sampling maintains competitive regret and, in the Boston Housing dataset, marginally outperforms Greedy in action selection precision. Conversely, the zero-shot LLM-guided policy consistently underperforms in numerical tabular settings, exhibiting the highest median regret and near-zero MRO coverage across most tasks. Furthermore, Wilcoxon tests reveal that differences in empirical outcomes between policies are often not statistically significant (ns), suggesting an optimization ceiling in zero-shot tabular settings. These findings indicate that while traditional model-driven policies are robust, LLM-guided approaches currently lack the numerical precision required for high-dimensional sequential decision-making without further calibration or hybrid integration. Full article
(This article belongs to the Special Issue Computational Methods and Machine Learning for Causal Inference)
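The Greedy-versus-Thompson comparison in the abstract can be illustrated with a minimal Bernoulli bandit. Beta posteriors stand in for the paper's Monte Carlo Dropout uncertainty estimates, and the arm means are invented; this is a sketch of the two policies, not of the CCDN framework.

```python
import numpy as np

def run_bandit(true_means, rounds=3000, policy="thompson", seed=0):
    """Return pull counts per arm after `rounds` steps of the chosen policy.

    Thompson Sampling draws one sample per arm from a Beta posterior and
    plays the argmax; Greedy plays the argmax of the posterior means.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    wins = np.ones(k)    # Beta alpha: successes + 1
    losses = np.ones(k)  # Beta beta:  failures + 1
    picks = np.zeros(k, dtype=int)
    for _ in range(rounds):
        if policy == "thompson":
            arm = int(np.argmax(rng.beta(wins, losses)))
        else:  # greedy on the posterior means
            arm = int(np.argmax(wins / (wins + losses)))
        reward = rng.random() < true_means[arm]  # Bernoulli outcome
        wins[arm] += reward
        losses[arm] += 1 - reward
        picks[arm] += 1
    return picks
```

Thompson Sampling concentrates its pulls on the best arm while still exploring, whereas Greedy can lock onto an early lucky arm; this exploration gap is what the regret comparison in the study probes.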
40 pages, 2316 KB  
Article
Hybrid Usability Evaluation of an Automotive REM Tool: Human and LLM-Based Heuristic Assessment of IBM Doors Next
by Oana Rotaru, Ciprian Orhei and Radu Vasiu
Appl. Sci. 2026, 16(2), 723; https://doi.org/10.3390/app16020723 - 9 Jan 2026
Abstract
Requirements Engineering and Management (REM) tools play a significant role in ensuring project compliance and efficiency. Automotive engineering must comply with regulatory standards, requiring detailed documentation, rigorous testing, and solid traceability. Despite their importance, REM tools are underexplored from the usability and user experience (UX) perspective, even though poor usability can hinder development workflows across stakeholder teams. This study presents a case study of a heuristic usability evaluation of IBM DOORS Next Generation, conducted with expert evaluators using Nielsen’s 10 Usability Heuristics as the evaluation framework. The identified issues were analyzed in terms of impacted heuristics and severity ratings. Additionally, we conducted a Large Language Model (LLM)-based heuristic evaluation, using ChatGPT-5, prompted with the same heuristic set and static screenshots. The LLM detected several issues overlapping with human findings (32%), as well as new ones (23%); therefore, 55% of its outputs are considered valid and 45% unconfirmed. This highlights both the potential and the limitations of AI-driven usability assessment. Overall, the findings underscore the usability challenges of REM tools and suggest that LLMs may serve as complementary evaluators, accelerating early-stage heuristic inspections in safety-critical engineering environments. Full article
(This article belongs to the Special Issue Enhancing User Experience in Automation and Control Systems)
29 pages, 2980 KB  
Article
Integrating NLP and Ensemble Learning into Next-Generation Firewalls for Robust Malware Detection in Edge Computing
by Ramahlapane Lerato Moila and Mthulisi Velempini
Sensors 2026, 26(2), 424; https://doi.org/10.3390/s26020424 - 9 Jan 2026
Abstract
As edge computing becomes increasingly central to modern digital infrastructure, it also creates opportunities for sophisticated malware attacks that traditional security systems struggle to address. This study proposes a natural language processing (NLP) framework integrated with ensemble learning into next-generation firewalls (NGFWs) to detect and mitigate malware attacks in edge computing environments. The approach leverages unstructured threat intelligence (e.g., cybersecurity reports, logs) by applying NLP techniques, such as TF-IDF vectorization, to convert textual data into structured insights. This process uncovers hidden patterns and entity relationships within system logs. By combining Random Forest (RF) and Logistic Regression (LR) in a soft voting ensemble, the proposed model achieves 95% accuracy on a cyber threat intelligence dataset augmented with synthetic data to address class imbalance, and 98% accuracy on the CSE-CIC-IDS2018 dataset. The study was validated using ANOVA to assess statistical robustness and confusion matrix analysis, both of which confirmed low error rates. The system enhances detection rates and adaptability, providing a scalable defense layer optimized for resource-constrained, latency-sensitive edge environments. Full article
(This article belongs to the Section Internet of Things)
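A minimal version of the described pipeline, TF-IDF features feeding a Random Forest + Logistic Regression soft-voting ensemble, can be sketched with scikit-learn. The toy log lines, labels, and hyperparameters are invented stand-ins; the paper trains on a CTI dataset and CSE-CIC-IDS2018.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_detector():
    """TF-IDF text features -> soft-voting RF + LR ensemble."""
    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="soft",  # average the two models' class probabilities
    )
    return make_pipeline(TfidfVectorizer(), ensemble)

# Invented threat-intelligence snippets, not from the paper's dataset.
logs = [
    "outbound beacon to known c2 domain detected",
    "ransomware note dropped in user home directory",
    "powershell download cradle spawned from office macro",
    "routine nightly backup job completed successfully",
    "user logged in from office workstation",
    "scheduled certificate renewal finished without errors",
]
labels = ["malware", "malware", "malware", "benign", "benign", "benign"]
model = build_detector().fit(logs, labels)
```

Soft voting averages the probability estimates of the two models, so a confident Logistic Regression score can outvote an uncertain forest and vice versa, which is the complementarity the paper exploits.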
24 pages, 790 KB  
Systematic Review
A Scoping Review of the Barriers to Self-Advocacy for People with Intellectual Disability in Bronfenbrenner’s Process–Person–Context–Time (PPCT) Model
by Christina DeCostanza Eagle, Grace L. Francis, Kelly Conn-Reda, Kristen Haynor, Sarah H. Espanol, Jodi Duke, Jill A. Hunt, Emil Majetich and Timothy J. Eagle
Educ. Sci. 2026, 16(1), 97; https://doi.org/10.3390/educsci16010097 - 8 Jan 2026
Abstract
Self-determination and self-advocacy are critical components of quality of life, and the instruction of these skills continues to emerge as an important outcome for Disabled people, specifically people with intellectual disability (ID). This scoping review examined the perspectives of adults with ID and the barriers they experience when self-advocating, making choices, and setting goals. Multiple databases were searched for empirical research that collected the perspectives of people with intellectual disability on what they identified as barriers to self-advocacy. The results comprised 30 English-language articles offering international perspectives. The authors utilized Bronfenbrenner’s process–person–context–time (PPCT) model to identify how these barriers are experienced in various relationships and environments and over time. The barriers identified fell into the various aspects of the PPCT model. Understanding these barriers provides insights into ways to begin to dismantle them, and this review details recommendations for research, policy, and practice. Full article
(This article belongs to the Special Issue Collaborative and Resilience-Oriented Practices and Teacher Wellbeing)
16 pages, 521 KB  
Article
Email Communication in English-Medium Instruction: Cultural and Gender Differences in Student Requests to Professors
by Seung-eun Sung, Robert O. Davis, Joseph Vincent and Yong-Jik Lee
Educ. Sci. 2026, 16(1), 96; https://doi.org/10.3390/educsci16010096 - 8 Jan 2026
Abstract
This study examined how cultural background and self-reported gender influence student–faculty email communication in English-Medium Instruction (EMI) settings. Advanced international language learners (N = 113) wrote emails in English to either Korean or international professors without prior instruction. The emails were analyzed for framing elements and request strategies using holistic assessment. The findings revealed significant patterns in formality and strategy use based on professor nationality and student gender. Emails to Korean professors exhibited higher formality levels, especially among students with better framing appropriateness scores. Cultural differences emerged in request strategies: international students favored performative requests, while Korean students preferred disarmers. Self-reported gender also correlated with different framing strategies, particularly when communicating with Korean professors. These findings highlight the complex interaction among culture, gender, and pragmatic awareness in EMI academic correspondence. The study underscores the importance of understanding cross-cultural communication patterns in diverse educational environments and suggests the need for further research into multilingual communication practices in higher education to better support international student populations. Full article
(This article belongs to the Special Issue Critical Issues of English for Academic Purposes in Higher Education)
20 pages, 4726 KB  
Article
Enhancing SeeGround with Relational Depth Text for 3D Visual Grounding
by Hyun-Sik Jeon, Seong-Hui Kang and Jong-Eun Ha
Appl. Sci. 2026, 16(2), 652; https://doi.org/10.3390/app16020652 - 8 Jan 2026
Abstract
Three-dimensional visual grounding is a core technology that identifies specific objects within complex 3D scenes based on natural language instructions, enhancing human–machine interactions in robotics and augmented reality domains. Traditional approaches have focused on supervised learning, which relies on annotated data; however, zero-shot methodologies are emerging due to the high costs of data construction and limitations in generalization. SeeGround achieves state-of-the-art performance by integrating 2D rendered images and spatial text descriptions. Nevertheless, SeeGround exhibits vulnerabilities in clearly discerning relative depth relationships owing to its implicit depth representations in 2D views. This study proposes the relational depth text (RDT) technique to overcome these limitations, utilizing a Monocular Depth Estimation model to extract depth maps from rendered 2D images and applying the K-Nearest Neighbors algorithm to convert inter-object relative depth relations into natural language descriptions, thereby incorporating them into Vision–Language Model (VLM) prompts. This method distinguishes itself by augmenting spatial reasoning capabilities while preserving SeeGround’s existing pipeline, demonstrating a 3.54% improvement in the Acc@0.25 metric on the Nr3D dataset in a 7B VLM environment that is approximately 10.3 times lighter than the original model, along with a 6.74% increase in Unique cases on the ScanRefer dataset, albeit with a 1.70% decline in Multiple cases. The proposed technique enhances the robustness of grounding through viewpoint anchoring and candidate discrimination in complex query scenarios, and is expected to improve efficiency in practical applications through future multi-view fusion and conditional execution optimizations. Full article
(This article belongs to the Special Issue Advances in Computer Graphics and 3D Technologies)
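The core RDT step, turning per-object depth estimates into sentences a Vision–Language Model can read, can be sketched in a few lines. The object labels and depth values below are invented, and the real method works from a monocular depth map rather than precomputed mean depths.

```python
import numpy as np

def relational_depth_text(objects, k=2):
    """Turn per-object depth estimates into natural-language relations.

    `objects` maps a label to its mean depth (metres); for each object,
    the k nearest neighbours in depth are described as nearer/farther.
    A simplified sketch of the relational depth text (RDT) idea.
    """
    labels = list(objects)
    depths = np.array([objects[l] for l in labels])
    sentences = []
    for i, label in enumerate(labels):
        dist = np.abs(depths - depths[i])  # depth gap to every object
        dist[i] = np.inf                   # never pair an object with itself
        for j in np.argsort(dist)[:k]:
            # ties in depth fall into the "farther" branch of this sketch
            rel = ("closer to the camera than" if depths[i] < depths[j]
                   else "farther from the camera than")
            sentences.append(f"The {label} is {rel} the {labels[j]}.")
    return sentences
```

Sentences like these are appended to the VLM prompt so that relative depth, which is only implicit in a rendered 2D view, becomes explicit text the model can reason over.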
28 pages, 515 KB  
Review
From Cues to Engagement: A Comprehensive Survey and Holistic Architecture for Computer Vision-Based Audience Analysis in Live Events
by Marco Lemos, Pedro J. S. Cardoso and João M. F. Rodrigues
Multimodal Technol. Interact. 2026, 10(1), 8; https://doi.org/10.3390/mti10010008 - 8 Jan 2026
Abstract
The accurate measurement of audience engagement in real-world live events remains a significant challenge, with the majority of existing research confined to controlled environments like classrooms. This paper presents a comprehensive survey of Computer Vision AI-driven methods for real-time audience engagement monitoring and proposes a novel, holistic architecture to address this gap, with this architecture being the main contribution of the paper. The paper identifies and defines five core constructs essential for a robust analysis: Attention, Emotion and Sentiment, Body Language, Scene Dynamics, and Behaviours. Through a selective review of state-of-the-art techniques for each construct, the necessity of a multimodal approach that surpasses the limitations of isolated indicators is highlighted. The work synthesises a fragmented field into a unified taxonomy and introduces a modular architecture that integrates these constructs with practical, business-oriented metrics such as Commitment, Conversion, and Retention. Finally, by integrating cognitive, affective, and behavioural signals, this work provides a roadmap for developing operational systems that can transform live event experience and management through data-driven, real-time analytics. Full article
20 pages, 945 KB  
Article
A Pilot Study on Multilingual Detection of Irregular Migration Discourse on X and Telegram Using Transformer-Based Models
by Dimitrios Taranis, Gerasimos Razis and Ioannis Anagnostopoulos
Electronics 2026, 15(2), 281; https://doi.org/10.3390/electronics15020281 - 8 Jan 2026
Abstract
The rise of Online Social Networks has reshaped global discourse, enabling real-time conversations on complex issues such as irregular migration. Yet the informal, multilingual, and often noisy nature of content on platforms like X (formerly Twitter) and Telegram presents significant challenges for reliable automated analysis. This study presents an exploratory multilingual natural language processing (NLP) framework for detecting irregular migration discourse across five languages. Conceived as a pilot study addressing extreme data scarcity in sensitive migration contexts, this work evaluates transformer-based models on a curated multilingual corpus. It provides an initial baseline for monitoring informal migration narratives on X and Telegram. We evaluate a broad range of approaches, including traditional machine learning classifiers, SetFit sentence-embedding models, fine-tuned multilingual BERT (mBERT) transformers, and a Large Language Model (GPT-4o). The results show that GPT-4o achieves the highest performance overall (F1-score: 0.84), with scores reaching 0.89 in French and 0.88 in Greek. While mBERT excels in English, SetFit outperforms mBERT in low-resource settings, specifically in Arabic (0.79 vs. 0.70) and Greek (0.88 vs. 0.81). The findings highlight the effectiveness of transformer-based and large-language-model approaches, particularly in low-resource or linguistically heterogeneous environments. Overall, the proposed framework provides an initial, compact benchmark for multilingual detection of irregular migration discourse under extreme, low-resource conditions. The results should be viewed as exploratory indicators of model behavior on this synthetic, small-scale corpus, not as statistically generalizable evidence or deployment-ready tools. 
In this context, “multilingual” refers to robustness across different linguistic realizations of identical migration narratives under translation, rather than coverage of organically diverse multilingual public discourse. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)
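The per-language comparisons reported above (e.g., F1 in Greek vs. Arabic) come down to computing F1 separately on each language's slice of the test set, which can be sketched with the standard library. The toy records below are invented, not the paper's data.

```python
from collections import defaultdict

def f1_per_language(records):
    """F1 for the positive ('migration discourse') class, per language.

    `records` holds (language, gold, predicted) triples with labels 0/1.
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for lang, gold, pred in records:
        c = counts[lang]
        if pred == 1 and gold == 1:
            c["tp"] += 1
        elif pred == 1 and gold == 0:
            c["fp"] += 1
        elif pred == 0 and gold == 1:
            c["fn"] += 1
    scores = {}
    for lang, c in counts.items():
        p = c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else 0.0
        r = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else 0.0
        scores[lang] = 2 * p * r / (p + r) if p + r else 0.0
    return scores
```

Slicing the evaluation this way is what exposes the pattern the authors report, such as SetFit beating mBERT on the low-resource Arabic and Greek slices.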
46 pages, 4883 KB  
Article
Mapping the Role of Artificial Intelligence and Machine Learning in Advancing Sustainable Banking
by Alina Georgiana Manta, Claudia Gherțescu, Roxana Maria Bădîrcea, Liviu Florin Manta, Jenica Popescu and Mihail Olaru
Sustainability 2026, 18(2), 618; https://doi.org/10.3390/su18020618 - 7 Jan 2026
Abstract
The convergence of artificial intelligence (AI), machine learning (ML), blockchain, and big data analytics is transforming the governance, sustainability, and resilience of modern banking ecosystems. This study provides a multivariate bibliometric analysis using Principal Component Analysis (PCA) of research indexed in Scopus and Web of Science to explore how decentralized digital infrastructures and AI-driven analytical capabilities contribute to sustainable financial development, transparent governance, and climate-resilient digital societies. Findings indicate a rapid increase in interdisciplinary work integrating Distributed Ledger Technology (DLT) with large-scale data processing, federated learning, privacy-preserving computation, and intelligent automation—tools that can enhance financial inclusion, regulatory integrity, and environmental risk management. Keyword network analyses reveal blockchain’s growing role in improving data provenance, security, and trust—key governance dimensions for sustainable and resilient financial systems—while AI/ML and big data analytics dominate research on predictive intelligence, ESG-related risk modeling, customer well-being analytics, and real-time decision support for sustainable finance. Comparative analyses show distinct emphases: Web of Science highlights decentralized architectures, consensus mechanisms, and smart contracts relevant to transparent financial governance, whereas Scopus emphasizes customer-centered analytics, natural language processing, and high-throughput data environments supporting inclusive and equitable financial services. Patterns of global collaboration demonstrate strong internationalization, with Europe, China, and the United States emerging as key hubs in shaping sustainable and digitally resilient banking infrastructures. 
By mapping intellectual, technological, and collaborative structures, this study clarifies how decentralized intelligence—enabled by the fusion of AI/ML, blockchain, and big data—supports secure, scalable, and sustainability-driven financial ecosystems. The results identify critical research pathways for strengthening financial governance, enhancing climate and social resilience, and advancing digital transformation, which contributes to more inclusive, equitable, and sustainable societies. Full article
32 pages, 1534 KB  
Article
Causal Reasoning and Large Language Models for Military Decision-Making: Rethinking the Command Structures in the Era of Generative AI
by Dimitrios Doumanas, Andreas Soularidis and Konstantinos Kotis
AI 2026, 7(1), 14; https://doi.org/10.3390/ai7010014 - 7 Jan 2026
Abstract
Military decision-making is inherently complex and highly critical, requiring commanders to assess multiple variables in real-time, anticipate second-order effects, and adapt strategies based on continuously evolving battlefield conditions. Traditional approaches rely on domain expertise, experience, and intuition, often supported by decision-support systems designed by military experts. With the rapid advancement of Large Language Models (LLMs) such as ChatGPT, Claude, and DeepSeek, a new research question emerges: can LLMs perform causal reasoning at a level that could meaningfully replace human decision-makers, or should they remain human-led decision-support tools in high-stakes environments? This paper explores the causal reasoning capabilities of LLMs for operational and strategic military decisions. Unlike conventional AI models that rely primarily on correlation-based predictions, LLMs are now able to engage in multi-perspective reasoning, intervention analysis, and scenario-based assessments. We introduce a structured empirical evaluation framework to assess LLM performance through 10 de-identified real-world-inspired battle scenarios, ensuring models reason over provided inputs rather than memorized data. Critically, LLM outputs are systematically compared against a human expert baseline, composed of military officers across multiple ranks and years of operational experience. The evaluation focuses on precision, recall, causal reasoning depth, adaptability, and decision soundness. Our findings provide a rigorous comparative assessment of whether carefully prompted LLMs can assist, complement, or approach expert-level performance in military planning. While fully autonomous AI-led command remains premature, the results suggest that LLMs can offer valuable support in complex decision processes when integrated as part of hybrid human-AI decision-support frameworks. 
Since our evaluation directly tests this capability, this paradigm shift raises a fundamental question: can high-ranking officers and commanders be fully replaced in leading critical military operations, or should AI-driven tools remain decision-support systems that enhance human-driven battlefield strategies? Full article
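The abstract above names precision and recall among its evaluation criteria. As a generic illustration only (the paper's actual scoring rubric is not reproduced here, and the factor names below are hypothetical), set-based precision and recall of a model's output against an expert baseline can be computed as:

```python
# Minimal sketch of precision/recall scoring for model outputs against
# an expert-annotated reference set. Purely illustrative; not the
# paper's evaluation framework.

def precision_recall(predicted: set, expert: set) -> tuple:
    """Score the causal factors a model identified against the
    factors the expert baseline identified."""
    true_positives = len(predicted & expert)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(expert) if expert else 0.0
    return precision, recall

# Hypothetical example: the model flags 3 factors, 2 of which appear
# in an expert baseline of 4 factors.
p, r = precision_recall({"supply", "terrain", "morale"},
                        {"supply", "terrain", "weather", "logistics"})
# p = 2/3, r = 2/4
```

In practice such set-based scoring would be one component alongside the qualitative criteria the abstract lists (causal reasoning depth, adaptability, decision soundness), which require human judgment rather than a formula.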
24 pages, 2088 KB  
Systematic Review
Natural Language Processing (NLP)-Based Frameworks for Cyber Threat Intelligence and Early Prediction of Cyberattacks in Industry 4.0: A Systematic Literature Review
by Majed Albarrak, Konstantinos Salonitis and Sandeep Jagtap
Appl. Sci. 2026, 16(2), 619; https://doi.org/10.3390/app16020619 - 7 Jan 2026
Abstract
This study provides a systematic overview of Natural Language Processing (NLP)-based frameworks for Cyber Threat Intelligence (CTI) and the early prediction of cyberattacks in Industry 4.0. As digital transformation accelerates through the integration of IoT, SCADA, and cyber-physical systems, manufacturing environments face an expanding and complex cyber threat landscape. Following the PRISMA 2020 systematic review protocol, 80 peer-reviewed studies published between 2015 and 2025 were analyzed across IEEE Xplore, Scopus, and Web of Science to identify methods that employ NLP for CTI extraction, reasoning, and predictive modelling. The review finds that transformer-based architectures, knowledge graph reasoning, and social media mining are increasingly used to convert unstructured data into actionable intelligence, thereby enabling earlier detection and forecasting of cyber threats. Large Language Models (LLMs) demonstrate strong potential for anticipating attack sequences, while domain-specific models enhance industrial relevance. Persistent challenges include data scarcity, domain adaptation, explainability, and real-time scalability in operational-technology environments. The review concludes that NLP is reshaping Industry 4.0 cybersecurity from reactive defense toward predictive, adaptive, and intelligence-driven protection, and it highlights the need for interpretable, domain-specific, and resource-efficient frameworks to secure Industry 4.0 ecosystems. Full article
(This article belongs to the Special Issue Advances in Cyber Security)
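The review above surveys NLP methods that convert unstructured threat reports into actionable Cyber Threat Intelligence. As a toy illustration of the extraction step only (real frameworks in the review use NER and transformer models; the patterns and sample text below are hypothetical), indicators of compromise can be pulled from free text like this:

```python
import re

# Toy sketch: regex-based extraction of indicators of compromise
# (IOCs) from an unstructured threat report. Illustrative only.

IOC_PATTERNS = {
    "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",   # dotted-quad addresses
    "cve":  r"\bCVE-\d{4}-\d{4,7}\b",          # CVE identifiers
    "md5":  r"\b[a-fA-F0-9]{32}\b",            # 32-hex-char hashes
}

def extract_iocs(report: str) -> dict:
    """Return all matches for each pattern, keyed by indicator type."""
    return {kind: re.findall(pat, report)
            for kind, pat in IOC_PATTERNS.items()}

sample = "Actor exploited CVE-2024-21762, beaconing to 203.0.113.7."
iocs = extract_iocs(sample)
# iocs["cve"] == ["CVE-2024-21762"], iocs["ipv4"] == ["203.0.113.7"]
```

The transformer-based approaches the review covers replace these brittle patterns with learned extractors, but the input/output shape, unstructured text in, structured indicators out, is the same.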

37 pages, 1157 KB  
Review
Deploying LLM Transformer on Edge Computing Devices: A Survey of Strategies, Challenges, and Future Directions
by Endah Kristiani, Vinod Kumar Verma and Chao-Tung Yang
AI 2026, 7(1), 15; https://doi.org/10.3390/ai7010015 - 7 Jan 2026
Abstract
The intersection of edge computing, Large Language Models (LLMs), and the Transformer architecture is a very active area of research. The core tension is that LLMs, which are built on the Transformer architecture, are massive and computationally intensive, while edge devices are resource-constrained in terms of power, memory, and processing capabilities. Therefore, LLMs based on the Transformer architecture are inherently unsuitable for edge computing in their original, full-sized form: they were designed for powerful, resource-rich cloud data centers. However, there is a massive and growing effort to make them suitable for edge devices. Implementing Transformer-based LLMs on edge computing devices is a complex but crucial task that requires a multi-faceted strategy. This paper reviews LLM deployment strategies for Transformer models on edge computing devices, examines the challenges, and outlines future directions. To address these challenges, researchers are exploring methods to compress LLMs and optimize their inference capabilities, making them more efficient for edge environments. Recent advancements in compact LLMs have shown promise in enhancing their deployment on edge devices, enabling improved performance while addressing the limitations of traditional models. This approach not only reduces computational costs but also enhances user privacy and security. Full article
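Weight quantization is one of the compression strategies such surveys cover. As a generic textbook sketch (not any specific framework's method, and simplified to a single per-tensor scale), symmetric int8 post-training quantization maps float weights to one-byte integers:

```python
# Toy sketch of symmetric int8 post-training quantization, simplified
# to one scale factor per weight tensor. Illustrative only; production
# schemes use per-channel scales, calibration, and outlier handling.

def quantize_int8(weights):
    """Map float weights to integers in [-127, 127] plus one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Approximately recover the original floats."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 1.27]
q, s = quantize_int8(w)
# q = [50, -127, 0, 127]; each weight now needs 1 byte instead of 4,
# and dequantize(q, s) closely approximates w.
```

The 4x storage reduction is what makes schemes like this attractive on memory-constrained edge devices, at the cost of a small, bounded rounding error per weight.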