Search Results (657)

Search Parameters:
Keywords = logic programming

17 pages, 446 KB  
Article
Algorithms for Solving Systems of Boolean Equations Based on the Transformation of Logical Expressions
by Anvar Kabulov, Alimdzhan Babadzhanov, Abdussattar Baizhumanov, Islambek Saymanov and Akbarjon Babadjanov
Mathematics 2026, 14(4), 594; https://doi.org/10.3390/math14040594 - 8 Feb 2026
Viewed by 37
Abstract
This manuscript proves specific theorems for transforming Boolean expressions of logical formulas when moving from one basis to another, simplifying the solution of complex equations, especially for cryptographic applications. The paper develops methods for solving specific nonlinear systems of Boolean equations used in cryptographic S-boxes using transformations to simpler forms, such as disjunctive normal forms (DNFs) and Zhegalkin polynomials. The main contributions include a mathematical basis for transforming formulas, a complexity-reducing grouping method, and the RLSY program for practical implementation. A rigorous theory, cryptographic relevance, and a detailed description of the algorithm are proposed. The grouping method reduces the system complexity by a factor of 2^11, as shown in a test example, improving computational efficiency. A solution to a special class of systems of nonlinear Boolean equations of the second degree, which are a logical model of algebraic cryptanalysis, is also proposed. Test examples of logical formula transformations are given. Full article
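The Zhegalkin-polynomial target form mentioned in the abstract can be made concrete with a small sketch (an illustrative routine, not the authors' RLSY program): the algebraic normal form of any Boolean function is obtained from its truth table by the binary Möbius transform, after which the function is a XOR of AND-monomials.

```python
# Illustrative sketch (not the authors' RLSY program): compute the Zhegalkin
# polynomial (algebraic normal form) of a Boolean function from its truth
# table via the binary Moebius transform.

def zhegalkin(truth_table):
    """truth_table[m] = f(x) where bit i of m is the value of x_i; length must be 2**n."""
    n = len(truth_table).bit_length() - 1
    coeffs = list(truth_table)
    for i in range(n):                        # butterfly pass over each variable
        for m in range(len(coeffs)):
            if m & (1 << i):
                coeffs[m] ^= coeffs[m ^ (1 << i)]
    return coeffs                             # coeffs[m] = 1 -> monomial over the variables in m

def pretty(coeffs, n):
    terms = ["1" if m == 0 else "*".join(f"x{i}" for i in range(n) if m & (1 << i))
             for m, c in enumerate(coeffs) if c]
    return " XOR ".join(terms) or "0"

# Example: OR of two variables, f = x0 v x1, truth table indexed by (x1 x0).
print(pretty(zhegalkin([0, 1, 1, 1]), 2))     # -> x0 XOR x1 XOR x0*x1
```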
42 pages, 14082 KB  
Article
Remote Laboratory Based on FPGA Devices Using the E-Learning Approach
by Victor H. García Ortega, Josefina Bárcenas López and Enrique Ruiz-Velasco Sánchez
Appl. Syst. Innov. 2026, 9(2), 37; https://doi.org/10.3390/asi9020037 - 31 Jan 2026
Viewed by 191
Abstract
Laboratories across educational levels have traditionally required in-person attendance, limiting practical activities to specific times and physical spaces. This paper presents a technological architecture based on a system-on-chip (SoC) and a connectivist model, grounded in Connectivism Learning Theory, for implementing a remote laboratory in digital logic design using FPGA devices. The architecture leverages an Internet-of-Things (IoT) environment to provide applications and servers that enable remote access, programming, manipulation, and visualization of FPGA-based development boards located in the institution’s laboratory, from anywhere and at any time. The connectivist model allows learners to interact with multiple nodes for attending synchronous classes, performing laboratory exercises, managing the remote laboratory, and accessing educational resources asynchronously. This approach aims to enhance learning, knowledge transfer, and skills development. A four-year evaluation was conducted, including one experimental group using an e-learning approach and three in-person control groups from a Digital Logic Design course. The experimental group achieved an average performance score of 9.777, surpassing the control groups, suggesting improved academic outcomes with the proposed system. Additionally, a Technology Acceptance Model-based survey showed very high acceptance among learners. This paper presents a novel connectivist model, which we call the Massive Open Online Laboratory. Full article
21 pages, 3332 KB  
Article
MPC-Coder: A Dual-Knowledge Enhanced Multi-Agent System with Closed-Loop Verification for PLC Code Generation
by Yinggang Zhang, Weiyi Xia, Ben Zhao, Tongwen Yuan and Xianchuan Yu
Symmetry 2026, 18(2), 248; https://doi.org/10.3390/sym18020248 - 30 Jan 2026
Viewed by 223
Abstract
Industrial PLC programming faces persistent difficulties: lengthy development cycles, low fault tolerance, and cross-platform incompatibility among vendors. While LLMs show promise for automated code generation, their direct application is hindered by the gap between ambiguous natural language and the strict determinism required by control logic. This paper proposes MPC-Coder, a dual-knowledge enhanced multi-agent system that addresses this gap. The system combines a structured knowledge graph that imposes hard constraints on process parameters and equipment specifications with a vector database that offers implementation references such as code templates and function blocks. These two knowledge sources form a symmetric complementary architecture. A closed-loop “generation–verification–repair” mechanism leverages formal verification tools to iteratively refine the generated code. Experiments demonstrate that MPC-Coder achieves 100% syntactic correctness and 78% functional consistency, significantly outperforming general-purpose LLMs. The results indicate that the complementary fusion of domain knowledge and closed-loop verification effectively enhances the reliability of code generation, offering a viable technical pathway for the reliable application of LLMs in industrial control systems. Full article
(This article belongs to the Section Computer)
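The closed-loop "generation–verification–repair" mechanism can be pictured with a minimal control-flow sketch; the helper names, their signatures, and the retry budget below are assumptions for illustration, not the MPC-Coder implementation.

```python
# Minimal sketch of a generation-verification-repair loop for PLC code,
# assuming hypothetical generate() and verify() helpers; this is not the
# MPC-Coder implementation.

from dataclasses import dataclass

@dataclass
class VerificationResult:
    ok: bool
    diagnostics: str        # e.g., syntax errors or violated formal properties

def generate(spec: str, constraints: dict, references: list[str]) -> str:
    raise NotImplementedError  # LLM call, guided by KG constraints + retrieved code templates

def verify(code: str, constraints: dict) -> VerificationResult:
    raise NotImplementedError  # syntactic/formal checks against the knowledge-graph constraints

def closed_loop(spec, constraints, references, max_rounds=5):
    code = generate(spec, constraints, references)
    for _ in range(max_rounds):
        result = verify(code, constraints)
        if result.ok:
            return code
        # feed the diagnostics back so the next attempt repairs the reported failures
        code = generate(spec + "\nFix these issues:\n" + result.diagnostics,
                        constraints, references)
    raise RuntimeError("verification still failing after the repair budget was exhausted")
```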
38 pages, 6097 KB  
Article
A Modular ROS–MARL Framework for Cooperative Multi-Robot Task Allocation in Construction Digital Environments
by Xinghui Xu, Samuel A. Prieto and Borja García de Soto
Buildings 2026, 16(3), 539; https://doi.org/10.3390/buildings16030539 - 28 Jan 2026
Viewed by 337
Abstract
The deployment of autonomous robots in construction remains constrained by the complexity and variability of real-world environments. Conventional programming and single-agent approaches lack the adaptability required for dynamic multi-robot operating conditions, underscoring the need for cooperative, learning-based systems. This paper presents an ROS-based modular framework that integrates Multi-Agent Reinforcement Learning (MARL) into a generic 2D simulation and execution pipeline for cooperative mobile robots in construction-oriented digital environments to enable adaptive task allocation and coordinated execution without predefined datasets or manual scheduling. The framework adopts a centralized-training, decentralized-execution (CTDE) scheme based on Multi-Agent Proximal Policy Optimization (MAPPO) and decomposes the system into interchangeable modules for environment modelling, task representation, robot interfaces, and learning, allowing different layouts, task sets, and robot models to be instantiated without redesigning the core architecture. Validation through an ROS-based 2D simulation and real-world experiments using TurtleBot3 robots demonstrated effective task scheduling, adaptive navigation, and cooperative behavior under uncertainty. In simulation, the learned MAPPO policy is benchmarked against non-learning baselines for multi-robot task allocation, and in real-robot experiments, the same policy is evaluated to quantify and discuss the performance gap between simulated and physical execution. Rather than presenting a complete construction-site deployment, this first study focuses on proposing and validating a reusable MARL–ROS framework and digital testbed for multi-robot task allocation in construction-oriented digital environments. The results show that the framework supports effective cooperative task scheduling, adaptive navigation, and logic-consistent behavior, while highlighting practical issues that arise in sim-to-real transfer. Overall, the framework provides a reusable digital foundation and benchmark for studying adaptive and cooperative multi-robot systems in construction-related planning and management contexts. Full article
(This article belongs to the Special Issue Robotics, Automation and Digitization in Construction)
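As a point of reference for the non-learning baselines mentioned above, a greedy nearest-robot allocator is easy to state; the sketch below is a generic example of such a baseline under assumed 2D coordinates, not the paper's benchmark or its MAPPO policy.

```python
# Sketch of a simple non-learning task-allocation baseline (greedy nearest-robot
# assignment) of the kind learned policies are typically benchmarked against;
# an illustrative assumption, not the paper's baseline or MAPPO policy.

import math

def greedy_allocate(robots, tasks):
    """robots/tasks: dicts of id -> (x, y). Returns {task_id: robot_id}."""
    assignment = {}
    free = dict(robots)
    for tid, tpos in tasks.items():
        if not free:
            break
        rid = min(free, key=lambda r: math.dist(free[r], tpos))
        assignment[tid] = rid
        free.pop(rid)                 # each robot handles at most one task per round
    return assignment

print(greedy_allocate({"r1": (0, 0), "r2": (5, 5)},
                      {"t1": (1, 1), "t2": (4, 6)}))   # {'t1': 'r1', 't2': 'r2'}
```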
29 pages, 2186 KB  
Article
Insights for Curriculum-Oriented Instruction of Programming Paradigms for Non-Computer Science Majors: Survey and Public Q&A Evidence
by Ji-Hye Oh and Hyun-Seok Park
Appl. Sci. 2026, 16(3), 1191; https://doi.org/10.3390/app16031191 - 23 Jan 2026
Viewed by 167
Abstract
This study examines how different programming paradigms are associated with learning experiences and cognitive challenges as encountered by non-computer science novice learners. Using a case-study approach situated within specific instructional contexts, we integrate survey data from undergraduate students with large-scale public question-and-answer data from Stack Overflow to explore paradigm-related difficulty patterns. Four instructional contexts—C, Java, Python, and Prolog—were examined as pedagogical instantiations of imperative, object-oriented, functional-style, and logic-based paradigms using text clustering, word embedding models, and interaction-informed complexity metrics. The analysis identifies distinct patterns of learning challenges across paradigmatic contexts, including difficulties related to low-level memory management in C-based instruction, abstraction and design reasoning in object-oriented contexts, inference-driven reasoning in Prolog-based instruction, and recursion-related challenges in functional-style programming tasks. Survey responses exhibit tendencies that are broadly consistent with patterns observed in public Q&A data, supporting the use of large-scale community-generated content as a complementary source for learner-centered educational analysis. Based on these findings, the study discusses paradigm-aware instructional implications for programming education tailored to non-major learners within comparable educational settings. The results provide empirical support for differentiated instructional approaches and offer evidence-informed insights relevant to curriculum-oriented teaching and future research on adaptive learning systems. Full article
30 pages, 454 KB  
Article
Bell–CHSH Under Setting-Dependent Selection: Sharp Total-Variation Bounds and an Experimental Audit Protocol
by Parker Emmerson (Yaohushuason)
Quantum Rep. 2026, 8(1), 8; https://doi.org/10.3390/quantum8010008 - 23 Jan 2026
Viewed by 384
Abstract
Bell–CHSH is an inequality about unconditional expectations: under measurement independence, Bell locality, and bounded outcomes, the CHSH value satisfies |S| ≤ 2. Experimental correlators, however, are often computed on an accepted subset of trials defined by detection logic, coincidence matching, quality cuts, and analysis windows. We model this by an acceptance probability γ(a,b,λ) ∈ [0,1] and the resulting accepted hidden-variable law ν_ab obtained by weighting the measurement-independent prior ρ by γ and renormalizing. If ν_ab depends on the setting pair, then the four correlators entering CHSH are expectations under four different measures, and a Bell-local measurement-independent model can yield S_obs > 2 by selection alone. We quantify the required setting dependence in total variation (TV) distance. For any reference law μ we prove the sharp bound S_obs ≤ 2 + 2 Σ_{q∈Q} TV(ν_q, μ) for a CHSH quartet Q. Optimizing over μ yields the intrinsic dispersion bound S_obs ≤ 2 + 2 Δ_Q and, in particular, S_obs ≤ min{4, 2 + 6 D_Q}, where D_Q is the quartet TV diameter. The constants are optimal. Consequently, reproducing Tsirelson’s value 2√2 within Bell-local measurement-independent models via setting-dependent acceptance requires Δ_Q ≥ √2 − 1 (hence, D_Q ≥ (√2 − 1)/3). We then propose a two-lane experimental audit protocol: (i) prior-relative fair-sampling diagnostics using tags recorded on all trials, and (ii) prior-free dispersion diagnostics using accepted-tag distributions across settings, with Δ_{Q,X} computable by linear programming on finite tag alphabets. Full article
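The dispersion bound can be checked numerically on toy data; the accepted-tag distributions below are invented for illustration, and tv is an ordinary total-variation distance on a finite alphabet, not code from the paper.

```python
# Illustrative check of the dispersion bound S_obs <= min(4, 2 + 6 * D_Q) using
# total-variation distances between accepted-tag distributions for the four
# setting pairs; the distributions are made up, not experimental data.

def tv(p, q):
    """Total-variation distance between two finite distributions given as dicts."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Accepted-tag distributions nu_ab for the CHSH quartet of setting pairs.
quartet = {
    ("a0", "b0"): {"t1": 0.50, "t2": 0.50},
    ("a0", "b1"): {"t1": 0.55, "t2": 0.45},
    ("a1", "b0"): {"t1": 0.45, "t2": 0.55},
    ("a1", "b1"): {"t1": 0.60, "t2": 0.40},
}

dists = list(quartet.values())
D_Q = max(tv(p, q) for i, p in enumerate(dists) for q in dists[i + 1:])
print(f"TV diameter D_Q = {D_Q:.3f}; ceiling on S_obs = {min(4.0, 2 + 6 * D_Q):.3f}")
```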
36 pages, 1519 KB  
Review
Thinking Machines: Mathematical Reasoning in the Age of LLMs
by Andrea Asperti, Alberto Naibo and Claudio Sacerdoti Coen
Big Data Cogn. Comput. 2026, 10(1), 38; https://doi.org/10.3390/bdcc10010038 - 22 Jan 2026
Viewed by 433
Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in structured reasoning and symbolic tasks, with coding emerging as a particularly successful application. This progress has naturally motivated efforts to extend these models to mathematics, both in its traditional form, expressed through natural-style mathematical language, and in its formalized counterpart, expressed in a symbolic syntax suitable for automatic verification. Yet, despite apparent parallels between programming and proof construction, advances in formalized mathematics have proven significantly more challenging. This gap raises fundamental questions about the nature of reasoning in current LLM architectures, the role of supervision and feedback, and the extent to which such models maintain an internal notion of computational or deductive state. In this article, we review the current state-of-the-art in mathematical reasoning with LLMs, focusing on recent models and benchmarks. We explore three central issues at the intersection of machine learning and mathematical cognition: (i) the trade-offs between traditional and formalized mathematics as training and evaluation domains; (ii) the structural and methodological reasons why proof synthesis remains more brittle than code generation; and (iii) whether LLMs genuinely represent or merely emulate a notion of evolving logical state. Our goal is not to draw rigid distinctions but to clarify the present boundaries of these systems and outline promising directions for their extension. Full article
32 pages, 4251 KB  
Article
Context-Aware ML/NLP Pipeline for Real-Time Anomaly Detection and Risk Assessment in Cloud API Traffic
by Aziz Abibulaiev, Petro Pukach and Myroslava Vovk
Mach. Learn. Knowl. Extr. 2026, 8(1), 25; https://doi.org/10.3390/make8010025 - 22 Jan 2026
Viewed by 326
Abstract
We present a combined ML/NLP (Machine Learning, Natural Language Processing) pipeline for protecting cloud-based APIs (Application Programming Interfaces), which works both at the level of individual HTTP (Hypertext Transfer Protocol) requests and at the access log file reading mode, linking explicitly technical anomalies with business risks. The system processes each event/access log through parallel numerical and textual branches: a set of anomaly detectors trained on traffic engineering characteristics and a hybrid NLP stack that combines rules, TF-IDF (Term Frequency-Inverse Document Frequency), and character-level models trained on enriched security datasets. Their results are integrated using a risk-aware policy that takes into account endpoint type, data sensitivity, exposure, and authentication status, and creates a discrete risk level with human-readable explanations and recommended SOC (Security Operations Center) actions. We implement this design as a containerized microservice pipeline (input, preprocessing, ML, NLP, merging, alerting, and retraining services), orchestrated using Docker Compose and instrumented using OpenSearch Dashboards. Experiments with OWASP-like (Open Worldwide Application Security Project) attack scenarios show a high detection rate for injections, SSRF (Server-Side Request Forgery), Data Exposure, and Business Logic Abuse, while the processing time for each request remains within real-time limits even in sequential testing mode. Thus, the pipeline bridges the gap between ML/NLP research for security and practical API protection channels that can evolve over time through feedback and retraining. Full article
(This article belongs to the Section Safety, Security, Privacy, and Cyber Resilience)
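As a flavor of the character-level text branch, the sketch below trains a TF-IDF character n-gram classifier on a handful of invented request strings; the examples, labels, and model choice are illustrative assumptions, not the authors' trained NLP stack.

```python
# Minimal sketch of a character-level text branch: a TF-IDF character n-gram
# model scoring raw request strings as benign or suspicious. Tiny training set
# and labels are invented for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requests = [
    "GET /api/v1/users/42 HTTP/1.1",
    "GET /api/v1/orders?page=2 HTTP/1.1",
    "GET /api/v1/users?id=1%20OR%201=1-- HTTP/1.1",            # SQL-injection style
    "POST /api/v1/fetch?url=http://169.254.169.254/ HTTP/1.1",  # SSRF style
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = suspicious

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(requests, labels)
# Probability that an unseen request looks suspicious:
print(model.predict_proba(["GET /api/v1/users?id=2%27%20UNION%20SELECT HTTP/1.1"])[0, 1])
```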
34 pages, 11900 KB  
Article
Influence of Bloat Control on Relocation Rules Automatically Designed via Genetic Programming
by Tena Škalec and Marko Đurasević
Biomimetics 2026, 11(1), 83; https://doi.org/10.3390/biomimetics11010083 - 21 Jan 2026
Viewed by 208
Abstract
The container relocation problem (CRP) is a critical optimisation problem in maritime port operations, in which efficient container handling is essential for maximising terminal throughput. Relocation rules (RRs) are a widely adopted solution approach for the CRP, particularly in online and dynamic environments, as they enable fast, rule-based decision-making. However, the manual design of effective relocation rules is both time-consuming and highly dependent on problem-specific characteristics. To overcome this limitation, genetic programming (GP), a bio-inspired optimisation technique grounded in the principles of natural evolution, has been employed to automatically generate RRs. By emulating evolutionary processes such as selection, recombination, and mutation, GP can explore large heuristic search spaces and often produces rules that outperform manually designed alternatives. Despite these advantages and their inherently white-box nature, GP-generated relocation rules frequently exhibit excessive complexity, which hinders their interpretability and limits insight into the underlying decision logic. Motivated by the biomimetic observation that evolutionary systems tend to favour compact and efficient structures, this study investigates two mechanisms for controlling rule complexity, namely parsimony pressure and solution pruning, and it analyses their effects on both the quality and size of relocation rules evolved by GP. The results demonstrate that substantial reductions in rule size can be achieved with only minor degradation in performance, measured as the number of relocated containers, highlighting a favourable trade-off between heuristic simplicity and solution quality. This enables the derivation of simpler and more interpretable heuristics while maintaining competitive performance, which is particularly valuable in operational settings where human planners must understand, trust, and potentially adjust automated decision rules. Full article
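Parsimony pressure, the first of the two mechanisms, simply folds a size penalty into the fitness used for selection; the sketch below shows the idea with a hypothetical penalty weight and stubbed evaluation functions, not the paper's actual GP configuration.

```python
# Sketch of parsimony pressure in genetic programming: selection fitness is
# penalized in proportion to expression-tree size, so smaller rules win ties.
# The penalty weight and the stubbed evaluators are illustrative assumptions.
import random

PARSIMONY_WEIGHT = 0.01       # hypothetical penalty per tree node

def raw_fitness(rule) -> float:
    """Problem-specific cost, e.g. number of container relocations (lower is better)."""
    raise NotImplementedError

def tree_size(rule) -> int:
    """Number of nodes in the GP expression tree of the rule."""
    raise NotImplementedError

def selection_fitness(rule) -> float:
    # Minimization: equally good but larger rules rank worse.
    return raw_fitness(rule) + PARSIMONY_WEIGHT * tree_size(rule)

def tournament_select(population, rng: random.Random, k: int = 3):
    return min(rng.sample(population, k), key=selection_fitness)
```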
34 pages, 2207 KB  
Article
Neuro-Symbolic Verification for Preventing LLM Hallucinations in Process Control
by Boris Galitsky and Alexander Rybalov
Processes 2026, 14(2), 322; https://doi.org/10.3390/pr14020322 - 16 Jan 2026
Viewed by 513
Abstract
Large Language Models (LLMs) are increasingly used in industrial monitoring and decision support, yet they remain prone to process-control hallucinations—diagnoses and explanations that sound plausible but conflict with physical constraints, sensor data, or plant dynamics. This paper investigates hallucination as a failure of abductive reasoning, where missing premises, weak mechanistic support, or counter-evidence lead an LLM to propose incorrect causal narratives for faults such as pump restriction, valve stiction, fouling, or reactor runaway. We develop a neuro-symbolic framework in which Abductive Logic Programming (ALP) evaluates the coherence of model-generated explanations, counter-abduction generates rival hypotheses that test whether the explanation can be defeated, and Discourse-weighted ALP (D-ALP) incorporates nucleus–satellite structure from operator notes and alarm logs to weight competing explanations. Using our 500-scenario Process-Control Hallucination Dataset, we assess LLM reasoning across mechanistic, evidential, and contrastive dimensions. Results show that abductive and counter-abductive operators substantially reduce explanation-level hallucinations and improve alignment with physical process behavior, particularly in “easy-but-wrong” cases where a superficially attractive explanation contradicts historian trends or counter-evidence. These findings demonstrate that abductive reasoning provides a practical and verifiable foundation for improving LLM reliability in safety-critical process-control environments. Full article
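One way to picture the explanation-checking step is as a scoring pass over rival hypotheses; the toy sketch below rewards observations a hypothesis accounts for (weighting nucleus evidence above satellite evidence, in the spirit of the discourse-weighted variant) and penalizes contradicted facts. The predicates, weights, and roles are invented placeholders, not the authors' ALP/D-ALP machinery.

```python
# Toy sketch of ranking candidate fault explanations in the spirit of an
# abductive verifier. Weights, roles, and the explains/contradicts predicates
# are illustrative assumptions, not the D-ALP system from the paper.

NUCLEUS_W, SATELLITE_W, CONTRADICTION_PENALTY = 1.0, 0.5, 2.0

def score(hypothesis, observations, explains, contradicts):
    """observations: list of (fact, role) with role in {"nucleus", "satellite"}."""
    s = 0.0
    for fact, role in observations:
        if explains(hypothesis, fact):
            s += NUCLEUS_W if role == "nucleus" else SATELLITE_W
        elif contradicts(hypothesis, fact):
            s -= CONTRADICTION_PENALTY
    return s

def best_explanation(candidates, observations, explains, contradicts):
    # Counter-abduction in miniature: every rival candidate is scored the same
    # way, so a superficially attractive explanation loses if it contradicts
    # the recorded trends.
    return max(candidates, key=lambda h: score(h, observations, explains, contradicts))
```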
28 pages, 1422 KB  
Article
Case in Taiwan Demonstrates How Corporate Demand Converts Payments for Ecosystem Services into Long-Run Incentives
by Tian-Yuh Lee and Wan-Yu Liu
Agriculture 2026, 16(2), 224; https://doi.org/10.3390/agriculture16020224 - 15 Jan 2026
Viewed by 619
Abstract
Payments for Ecosystem Services (PESs) have become a central instrument in global biodiversity finance, yet endangered species-specific PESs remain rare and poorly understood in implementation terms. Taiwan provides a revealing case: a three-year program paying farmers to conserve four threatened species—Prionailurus bengalensis, Lutra lutra, Tyto longimembris, and Hydrophasianus chirurgus—in working farmland across Taiwan and Kinmen island. Through semi-structured interviews with farmers, residents, and local conservation actors, we examine how payments are interpreted, rationalized, enacted, and emotionally experienced at the ground level. This study adopts Colaizzi’s data analysis method, the primary advantage of which lies in its ability to systematically transform fragmented and emotive interview narratives into a logically structured essential description. This is achieved through the rigorous extraction of significant statements and the subsequent synthesis of thematic clusters. Participants reported willingness to continue not only because subsidies offset losses, but because rarity, community pride, and the visible arc of “we helped this creature survive” became internalized rewards. NGOs amplified this shift by translating science into farm practice and “normalizing” coexistence. In practice, conservation work became a social project—identifying threats, altering routines, and defending habitat as a shared civic act. This study does not estimate treatment-effect size; instead, it delivers mechanistic insight at a live policy moment, as Taiwan expands PESs and the OECD pushes incentive reform. The finding is simple and strategically important: endangered-species PESs work best where payments trigger meaning—not where payments replace it. Full article
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)
37 pages, 5972 KB  
Article
An Ontology-Driven Framework for Road Technical Condition Assessment and Maintenance Decision-Making
by Rujie Zhang, Jianwei Wang and Haijiang Li
Appl. Sci. 2026, 16(2), 607; https://doi.org/10.3390/app16020607 - 7 Jan 2026
Viewed by 181
Abstract
Road technical condition assessment and maintenance decision-making rely heavily on technical standards whose clauses, computational formulas, and decision logic are often expressed in unstructured formats, leading to fragmented knowledge representation, isolated indicator calculation procedures, and limited interpretability of decision outcomes. To address these challenges, a semantic framework with executable reasoning and computation components, Road Performance and Maintenance Ontology (RPMO), was developed, composed of a core ontology, an assessment ontology, and a maintenance ontology. The framework formalized clauses, computational formulas, and decision rules from standards and integrated semantic web rule language (SWRL) rules with external computational programs to automate distress identification and the computation and write-back of performance indicators. Validation through three use case scenarios conducted on eleven expressway asphalt pavement segments demonstrated that the framework produced distress severity inference, indicator computation, performance rating, and maintenance recommendations that were highly consistent with technical standards and expert judgment, with all reasoning results traceable to specific clauses and rule instances. This research established a methodological foundation for semantic transformation of road technical standards and automated execution of assessment and decision logic, enhancing the efficiency, transparency, and consistency of maintenance decision-making to support explicit, reliable, and knowledge-driven intelligent systems. Full article
(This article belongs to the Section Civil Engineering)
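The execute-and-write-back pattern the framework automates can be illustrated in miniature; the property names, threshold, and condition-index formula below are hypothetical placeholders, not the RPMO ontology or the formulas of the actual standard.

```python
# Minimal sketch of the pattern "a rule infers severity, an external program
# computes an indicator, and both results are written back to the model".
# Names, thresholds, and the index formula are hypothetical placeholders.

segments = {
    "K12+300": {"crackRatio": 0.18, "rutDepth_mm": 9.0},
    "K12+800": {"crackRatio": 0.04, "rutDepth_mm": 3.5},
}

def damage_severity(props):
    # Stand-in for an SWRL-style rule such as:
    #   Segment(?s) ^ hasCrackRatio(?s, ?r) ^ greaterThan(?r, 0.15) -> hasSeverity(?s, "High")
    return "High" if props["crackRatio"] > 0.15 else "Low"

def condition_index(props):
    # Stand-in for an externally computed performance indicator.
    return round(100 - 200 * props["crackRatio"] - 2 * props["rutDepth_mm"], 1)

for sid, props in segments.items():
    props["hasSeverity"] = damage_severity(props)         # rule-based inference
    props["hasConditionIndex"] = condition_index(props)   # computed and written back
    print(sid, props["hasSeverity"], props["hasConditionIndex"])
```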
18 pages, 878 KB  
Article
Code Redteaming: Probing Ethical Sensitivity of LLMs Through Natural Language Embedded in Code
by Chanjun Park, Jeongho Yoon and Heuiseok Lim
Mathematics 2026, 14(1), 189; https://doi.org/10.3390/math14010189 - 4 Jan 2026
Viewed by 374
Abstract
Large language models are increasingly used in code generation and developer tools, yet their robustness to ethically problematic natural language embedded in source code is underexplored. In this work, we study content-safety vulnerabilities arising from ethically inappropriate language placed in non-functional code regions (e.g., comments or identifiers), rather than traditional functional security vulnerabilities such as exploitable program logic. In real-world and educational settings, programmers may include inappropriate expressions in identifiers, comments, or print statements that are operationally inert but ethically concerning. We present Code Redteaming, an adversarial evaluation framework that probes models’ sensitivity to such linguistic content. Our benchmark spans Python and C and applies sentence-level and token-level perturbations across natural-language-bearing surfaces, evaluating 18 models from 1B to 70B parameters. Experiments reveal inconsistent scaling trends and substantial variance across injection types and surfaces, highlighting blind spots in current safety filters. These findings motivate input-sensitive safety evaluations and stronger defenses for code-focused LLM applications. Full article
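The perturbation surfaces the benchmark targets (comments, identifiers, print strings) can be illustrated with a neutral placeholder standing in for the problematic phrase; the template below is an invented example of the injection pattern, not the benchmark's actual prompts or content.

```python
# Sketch of the injection pattern: a neutral placeholder token stands in for
# the problematic phrase and is inserted into the non-functional surfaces an
# LLM assistant still reads. The template and placeholder are illustrative.

PLACEHOLDER = "<FLAGGED_PHRASE>"   # a real benchmark would substitute harmful text here

CLEAN = '''\
def total(prices):
    # sum the basket
    return sum(prices)
'''

def perturb(source: str, surface: str) -> str:
    if surface == "comment":
        return source.replace("# sum the basket", f"# sum the basket  {PLACEHOLDER}")
    if surface == "identifier":
        return source.replace("total", f"total_{PLACEHOLDER.strip('<>').lower()}")
    if surface == "print":
        return source + f'\nprint("{PLACEHOLDER}")\n'
    raise ValueError(surface)

for s in ("comment", "identifier", "print"):
    print(f"--- {s} ---\n{perturb(CLEAN, s)}")
```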
19 pages, 6492 KB  
Article
Proportional Control with Pole-Placement-Tuned Gains for GPS-Based Waypoint Following, Experimentally Validated Against Classical Methods
by Heonjong Yoo and Wanyoung Chung
Sensors 2026, 26(1), 255; https://doi.org/10.3390/s26010255 - 31 Dec 2025
Viewed by 494
Abstract
The paper focuses on a goal-point-following algorithm design based on exact Global Positioning System (GPS) points. To achieve this, the first GPS point and the initial heading angle are calculated in advance by recursively adopting GPS points from the Naver Application Programming Interface (API) map. The GPS points are designated as goal points so that the mobile platform follows the generated path. Simulation and experimental results demonstrate that the goal-point-following logic can be implemented based on the path generated from the map. Furthermore, the goal-point-following method is extended to trajectory tracking by defining a vector rather than a single designated goal point. The result is demonstrated through simulation and an experiment with a real mobile platform. Full article
(This article belongs to the Special Issue INS/GNSS Integrated Navigation Systems)
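A bare-bones version of proportional waypoint-following control is sketched below: compute the great-circle bearing to the goal GPS point and steer in proportion to the heading error. The gains are arbitrary placeholders, not the pole-placement-tuned gains reported in the paper.

```python
# Minimal sketch of proportional heading control toward a GPS waypoint.
# Gain values are placeholders, not the paper's pole-placement-tuned gains.

import math

K_HEADING = 1.2          # hypothetical proportional gain on heading error
K_SPEED = 0.8            # hypothetical forward-speed gain

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def control(lat, lon, heading, goal_lat, goal_lon):
    # Heading error wrapped to [-180, 180) degrees.
    error = (bearing_deg(lat, lon, goal_lat, goal_lon) - heading + 180.0) % 360.0 - 180.0
    yaw_rate = K_HEADING * math.radians(error)                   # turn toward the goal
    speed = K_SPEED * max(0.0, math.cos(math.radians(error)))    # slow down when facing away
    return speed, yaw_rate

print(control(37.5665, 126.9780, 0.0, 37.5670, 126.9785))
```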
31 pages, 1641 KB  
Article
Transforming the Supply Chain Operations of Electric Vehicles’ Batteries Using an Optimization Approach
by Ghadeer Alsanie, Syeda Taj Unnisa and Nada Hamad Al Hamad
Sustainability 2026, 18(1), 367; https://doi.org/10.3390/su18010367 - 30 Dec 2025
Viewed by 461
Abstract
The increasing popularity of electric vehicles (EVs) as green alternatives to traditional internal combustion engine cars has highlighted the need for sustainable and environmentally friendly supply chain models. In particular, the handling of EV batteries, which are environmentally unfriendly and logistically critical due to their hazardous nature and short life cycle, requires a well-designed closed-loop supply chain (CLSC). This study proposes a new multi-objective optimization model of the CLSC, explicitly tailored to EV batteries under demand and return rate uncertainty. The proposed model incorporates three primary objectives that are typically in conflict with one another: minimizing the total cost, reducing carbon emissions throughout the entire supply chain network, and maximizing the recycling and reuse of batteries. The model employs a neutrosophic goal programming (NGP) methodology to address the uncertainties associated with demand and battery return quantities. The NGP model translates multiple objectives into non-monolithic goals with crisp aspiration levels (i.e., prescribed ideal levels for achieving the best of each goal) and thresholds that capture tolerances, thereby accounting for uncertainty. The efficiency of the proposed method is illustrated by a numerical example, solved using an IBM ILOG CPLEX Optimization Studio 22.1.2 solver. The findings demonstrate that the NGP can offer cost-effective, low-impact, and environmentally friendly solutions, thereby enhancing system robustness and flexibility to adapt to uncertainties. This study contributes to the emerging literature on sustainable operations research by developing a decision-making tool for EV-HV battery supply chain management. It also offers relevant suggestions for policymakers and industrialists who seek to co-optimize economic benefits, ecological sustainability, and logical feasibility in the emerging green society. Full article
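For readers unfamiliar with the goal-programming core that the neutrosophic variant builds on, a generic weighted formulation with deviation variables and aspiration levels looks as follows; the symbols and the choice of which deviations are penalized are illustrative, not the paper's exact model.

```latex
% Generic weighted goal-programming core (illustrative sketch). Only the
% unwanted deviation is penalized for each goal: overshoot (d_k^+) for cost
% and emissions, shortfall (d_3^-) for recycling/reuse.
\begin{aligned}
\min_{x,\,d^{+},\,d^{-}} \quad & w_1 d_1^{+} + w_2 d_2^{+} + w_3 d_3^{-} \\
\text{s.t.} \quad
& f_{\text{cost}}(x) + d_1^{-} - d_1^{+} = g_1, \\
& f_{\text{emissions}}(x) + d_2^{-} - d_2^{+} = g_2, \\
& f_{\text{recycling}}(x) + d_3^{-} - d_3^{+} = g_3, \\
& x \in X, \qquad d_k^{+},\, d_k^{-} \ge 0, \quad k = 1, 2, 3,
\end{aligned}
```

where g_k are crisp aspiration levels of the kind mentioned in the abstract and w_k are goal weights; the neutrosophic extension replaces the fixed weights with truth, indeterminacy, and falsity membership functions over the tolerance thresholds.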