Search Results (733)

Search Parameters:
Keywords = logical computational model

15 pages, 736 KB  
Article
Reducing Energy Footprint of LLM Inference Through FPGA-Based Heterogeneous Computing Platforms
by Thiago Cormie Monteiro and Andrea Guerrieri
Electronics 2026, 15(5), 1052; https://doi.org/10.3390/electronics15051052 - 3 Mar 2026
Abstract
Artificial Intelligence (AI) has emerged as a transformative force, increasingly integrated into diverse aspects of modern society, from healthcare and education to business and entertainment. Among the most influential AI technologies are large language models (LLMs), such as generative pretrained transformers (GPTs). These models are designed to process vast amounts of data and perform complex computations, enabling advanced capabilities in natural language understanding and generation. However, the deployment and operation of such systems require significant computational resources, leading to substantial energy consumption. While general-purpose hardware such as GPUs is limited by fixed-precision architectures, field-programmable gate arrays (FPGAs) offer the bit-level reconfigurability needed to exploit ultra-low-bitwidth representations. This allows power-intensive multiplications to be replaced by streamlined logic-based accumulations, maximizing the energy benefits of model quantization. This paper addresses the energy impact of LLMs by leveraging innovative FPGA-based heterogeneous computing platforms. Results demonstrate that ternary matrix multiplication (MatMul) achieves a 23% speedup and a remarkable 96% reduction in digital signal processor (DSP) utilization. Furthermore, the final optimized design shows a 52% reduction in total energy consumption compared to the baseline, making heterogeneous computing a compelling solution for power- and resource-constrained embedded applications. Full article
(This article belongs to the Special Issue New Trends for Power Optimizations in FPGA-Based Embedded Systems)
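The multiply-free accumulation idea described in the abstract above can be sketched in a few lines of Python; the `ternary_quantize` threshold rule is an illustrative assumption, not the paper's quantization scheme:

```python
import numpy as np

def ternary_quantize(w, threshold=0.05):
    """Map weights to {-1, 0, +1} via a simple absolute-threshold rule (illustrative)."""
    q = np.zeros_like(w, dtype=np.int8)
    q[w > threshold] = 1
    q[w < -threshold] = -1
    return q

def ternary_matmul(x, wq):
    """Multiply-free MatMul: add inputs where wq == +1, subtract where wq == -1."""
    out = np.zeros((x.shape[0], wq.shape[1]))
    for j in range(wq.shape[1]):
        plus = x[:, wq[:, j] == 1].sum(axis=1)
        minus = x[:, wq[:, j] == -1].sum(axis=1)
        out[:, j] = plus - minus
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
wq = ternary_quantize(rng.normal(scale=0.2, size=(8, 3)))
# Pure add/subtract accumulation reproduces an ordinary matmul with the quantized weights.
assert np.allclose(ternary_matmul(x, wq), x @ wq)
```

On an FPGA the per-column add/subtract loop becomes fixed wiring and accumulators, which is why the DSP multipliers can largely be removed.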

18 pages, 366 KB  
Article
Modeling the Nutrition–Academic Intention Gap: A Data-Driven Adaptive Gamified Architecture
by Nadia Pesantez-Jara, Nicolás Márquez and Cristian Vidal-Silva
Computers 2026, 15(3), 152; https://doi.org/10.3390/computers15030152 - 1 Mar 2026
Viewed by 131
Abstract
The integration of Internet of Things (IoT) and mobile computing in education offers new avenues to address complex health behaviors that affect cognitive performance. While traditional health education relies on passive information delivery, emerging research suggests that interactive systems can bridge the gap between intent and action. This study addresses the “double burden of malnutrition” in Ecuadorian schoolchildren (N = 120) as a Human-Computer Interaction (HCI) challenge. By utilizing a quantitative profiling approach rooted in the Social Dimensions of Health framework, we modeled the user requirements for a proposed intervention system. The findings identified a critical “Action Gap”: while 78.3% of users possess the motivation to improve habits for academic gain, 53.3% remain entrenched in high-sugar consumption patterns due to environmental latency. Statistical profiling reveals a significant dissonance (p<0.05) between cognitive intent and behavioral execution. Consequently, this paper presents the “Digital Bridge Architecture,” a computational framework that leverages these motivation metrics to design an Alternate Reality Game (ARG) logic. We conclude that conventional static applications may be limited in their capacity to support sustained behavioral change in this context. The proposed framework suggests that context-aware, gamified feedback mechanisms can offer a promising direction for aligning academic motivation with healthier behavioral outcomes. Full article

27 pages, 5910 KB  
Article
Hierarchical Fuzzy System Integrated with Deep Learning for Robust and Interpretable Classification of Breast Malignancies Using Radiomics Features from Ultrasound Imaging
by Mohamed Loey and Heba M. Khalil
Computers 2026, 15(3), 147; https://doi.org/10.3390/computers15030147 - 1 Mar 2026
Viewed by 114
Abstract
Breast cancer poses a global health risk and requires precision and accessibility in diagnostic measures. Ultrasound imaging is vital for breast lesion identification due to its safety, cost-effectiveness, and real-time capabilities. This paper presents a new fuzzy system architecture that utilizes ultrasound-based radiomics features to classify breast cancers. In order to ensure uniformity and consistency in shape-based characteristics limited to tumors, we calculate parameters such as elongation, compactness, spherical disproportion, and volumetrics following IBSI recommendations. We employ a hierarchical fuzzy system tree to handle high-dimensional data space and to identify the most discriminative characteristics. The selected features are incorporated into a modular fuzzy logic design that promotes transparency and maintains an auditable decision history according to clinical interpretability. Our framework enables the more accurate classification of breast cancer while addressing the beliefs and values prevalent in clinical applications. Tested on an independent set of data, the model achieved high accuracy of 99.60%, with low overfitting and strong generalization. To enhance its generalizability, we validated it on an internal dataset, attaining a sensitivity of 93.65%, a specificity of 99.24%, an AUC of 0.996, and an 18% reduction in unnecessary biopsies, as demonstrated through decision curve analysis, demonstrating substantial clinical utility across various settings. The findings confirm the system’s ability to identify intricate radiomic patterns linked to cancer. Due to its computing efficiency, it may be executed in real time during routine screening. The proposed radiomics-based fuzzy classification framework may offer a clinically beneficial approach for differentiating benign from malignant breast lesions. 
Explainability is enhanced with user-friendly artifacts for clinicians, including ranking IF-THEN rules and counterfactuals, all of which were validated in usability trials that demonstrated increased trust among radiologists compared to other technologies. Enhanced differentiation in the classification of various lesion types will decrease unnecessary biopsies. This approach integrates radiomics features with transparent and interpretable fuzzy logic to deliver enhanced predictors and a comprehensible framework for users, including physicians, to facilitate decision-making. This approach advances precision medicine standards through the early detection of lesions using more specific and systematic diagnostic instruments. Full article
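Two of the shape descriptors named in the abstract above (compactness, elongation) have simple closed forms. The 2D versions below are textbook illustrations only, not the IBSI 3D definitions the paper follows:

```python
import math

def compactness_2d(area, perimeter):
    """A common 2D compactness ratio, 4*pi*A / P^2 (equals 1.0 for a circle).
    Illustrative; IBSI specifies the 3D analogues precisely."""
    return 4 * math.pi * area / perimeter ** 2

def elongation(major_axis, minor_axis):
    """Minor/major principal-axis ratio: 1.0 = round, values near 0 = elongated."""
    return minor_axis / major_axis

# A circle of radius 2 is maximally compact; a 10x1 rectangle is not.
circle = compactness_2d(math.pi * 4, 2 * math.pi * 2)
box = compactness_2d(10.0, 22.0)
assert abs(circle - 1.0) < 1e-9
assert box < 0.3
```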

20 pages, 1305 KB  
Article
The Stock Allocation Problem in a Production System with FIFO Picking Operations
by Luca Bertazzi and Felice Pedersoli
Logistics 2026, 10(3), 53; https://doi.org/10.3390/logistics10030053 - 1 Mar 2026
Viewed by 86
Abstract
Background: We study one of the most important problems in production and warehouse management: the problem of determining how to allocate the initial stock and the quantity produced to bins, and then how to manage picking operations from these bins. The objective is to minimize the total cost of the bins used. Methods: We formulate an integer linear programming model able to manage the two time periods related to assignment and picking together, and to handle the FIFO picking logic. We prove that it is NP-hard, and solve it to optimality. Then, we design a tailored heuristic algorithm, inspired by the current rule of thumb used by one of the main Italian mineral water bottling companies. Results: An extensive computational experiment shows that this problem can be solved to optimality in reasonable computational time on real-world instances, and that the heuristic provides near-optimal solutions. Conclusions: Our approach contributes to modeling and solving this problem when FIFO picking operations are taken into account, and builds a bridge between theoretical understanding and practical application. Full article
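The FIFO picking logic that the model above must respect can be illustrated with a toy simulation; the bin IDs and quantities are invented for the example:

```python
from collections import deque

def fifo_pick(bins, demand):
    """Consume stock strictly from the oldest bin first (FIFO picking logic).
    `bins` is a list of (bin_id, qty) ordered oldest-first; returns the picks
    made and the remaining bins."""
    queue = deque(bins)
    picks = []
    while demand > 0:
        if not queue:
            raise ValueError("insufficient stock")
        bin_id, qty = queue.popleft()
        take = min(qty, demand)
        picks.append((bin_id, take))
        demand -= take
        if qty > take:  # a partially emptied bin stays at the front of the queue
            queue.appendleft((bin_id, qty - take))
    return picks, list(queue)

picks, left = fifo_pick([("B1", 30), ("B2", 50), ("B3", 40)], 60)
assert picks == [("B1", 30), ("B2", 30)]
assert left == [("B2", 20), ("B3", 40)]
```

The optimization problem in the paper decides the allocation to bins so that, under exactly this picking discipline, the total cost of the bins used is minimized.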

35 pages, 1627 KB  
Review
Shedding Light on Explainable AI: Insights, Challenges, and the Future of Infrastructure Management
by Youwen Hu, Zunaira Atta, Tariq Ur Rahman, Shi Qiu, Jin Wang, Wei Wei, Zhiyu Liang and Qasim Zaheer
ISPRS Int. J. Geo-Inf. 2026, 15(3), 100; https://doi.org/10.3390/ijgi15030100 - 28 Feb 2026
Viewed by 184
Abstract
This study presents a systematic review of Explainable Artificial Intelligence (XAI) applications in Transportation Infrastructure Management (TIM), focusing on predictive maintenance of safety-critical assets such as railways and bridges. A predefined review protocol was implemented, and peer-reviewed literature was systematically retrieved from Web of Science and Scopus covering the period 2015 to March 2025. Using structured Boolean search logic and clearly defined inclusion and exclusion criteria—requiring explicit integration of explainability within AI-driven infrastructure maintenance—450 records were initially identified, screened in multiple stages, and refined to 163 eligible studies for detailed analysis. Through structured data extraction and thematic synthesis, the review develops a taxonomy of model-specific, model-agnostic, hybrid, and human-centered XAI approaches while identifying recurring challenges including heterogeneous multi-modal data environments, lack of standardized interpretability metrics, computational constraints in real-time deployment, limited robustness validation under field conditions, and unresolved performance–interpretability trade-offs. The findings demonstrate systematic growth in XAI-driven predictive maintenance research and highlight the need for domain-specific benchmarks, hybrid interpretable architectures, digital twin-assisted validation, and edge-enabled explainable systems to enable scalable, transparent, and regulation-ready infrastructure management aligned with Industry 5.0. Full article

20 pages, 1894 KB  
Article
A Whale Optimization-Based Dynamic Compression ATPG Algorithm for Computer Interlocking Equipment Testing
by Zhiyang Yu, Lanxuan Jiang, Tianze Wu and Xiaoming Chen
Appl. Sci. 2026, 16(5), 2361; https://doi.org/10.3390/app16052361 - 28 Feb 2026
Viewed by 114
Abstract
High-speed railway signaling equipment constitutes safety-critical infrastructure, wherein hardware failures may directly compromise operational safety. During the hardware prototyping and verification stage, structural testing is essential to detect latent faults in digital logic circuits and to ensure compliance with stringent safety integrity requirements. However, conventional test generation methods often suffer from long generation times and excessive test vector volume. To address these challenges, this study proposes a whale optimization-based dynamic compression Automatic Test-Pattern Generation (ATPG) algorithm. The proposed method integrates a discrete whale optimization algorithm (WOA) with a deterministic PODEM framework to dynamically compress generated test vectors. Additionally, a multi-path-sensitized PODEM enhanced with desensitization techniques is introduced to reduce backtracking and improve search efficiency. The proposed algorithm has been applied to the computer interlocking golden-model netlist for testing purposes, achieving 100% fault coverage. Test results on the ISCAS-85 benchmark circuits indicate that our approach significantly reduces both the size of the vector set and the test generation time when compared to a traditional PODEM without vector compression and to a combined pseudo-random/PODEM vector generation method. This advancement effectively enhances overall vector generation efficiency while maintaining comprehensive fault coverage. Full article
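Test-vector compression is at heart a covering problem: keep as few vectors as possible while every fault stays detected. A plain greedy compaction over a toy fault table, standing in here for the paper's WOA-driven search, shows the objective:

```python
def greedy_compact(vectors):
    """Greedy static compaction: repeatedly keep the vector that detects the
    most still-uncovered faults. Illustrative stand-in for the paper's
    WOA-based dynamic compression, shown only to make the objective concrete."""
    remaining = set().union(*vectors.values())
    kept = []
    while remaining:
        best = max(vectors, key=lambda v: len(vectors[v] & remaining))
        kept.append(best)
        remaining -= vectors[best]
    return kept

# vector -> set of stuck-at faults it detects (toy data, not ISCAS-85)
cover = {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {4, 5, 6}, "v4": {1, 6}}
kept = greedy_compact(cover)
assert set().union(*(cover[v] for v in kept)) == {1, 2, 3, 4, 5, 6}
assert len(kept) == 2  # v1 and v3 already cover every fault
```

The metaheuristic search replaces this myopic greedy choice with a global one, which is where the reported reductions in vector-set length come from.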

31 pages, 1339 KB  
Article
Quantum Secure Authentication and Key Exchange Protocol for UAV-Assisted VANETs
by Hyewon Park and Yohan Park
Mathematics 2026, 14(5), 820; https://doi.org/10.3390/math14050820 - 28 Feb 2026
Viewed by 62
Abstract
The integration of unmanned aerial vehicles (UAVs) into vehicular ad hoc networks (VANETs) has emerged as a promising solution to overcome the limited coverage of conventional roadside unit (RSU)-based infrastructures. However, UAVs operate in open environments and cannot be fully trusted, while the rapid advancement of quantum computing threatens the long-term security of classical public-key cryptographic systems. As a result, many existing UAV-based VANET authentication schemes face fundamental limitations in future deployments. Most existing schemes either lack post-quantum security or incur excessive computational and communication overhead, making them unsuitable for real-time and high-mobility vehicular environments. In addition, the common assumptions of trusted UAVs do not align with realistic threat models. To address these issues, this paper proposes a lightweight post-quantum authentication and key exchange protocol based on the module learning with errors (MLWE) problem and physically unclonable functions (PUFs). The proposed scheme treats UAVs as untrusted relay nodes and excludes them from session key generation. Its security is evaluated using informal analysis, the real-or-random (RoR) model, BAN logic, and AVISPA, while performance evaluation indicates improved efficiency compared to existing schemes. Full article

24 pages, 2244 KB  
Article
Machine Learning-Based Real-Time Detection and Mitigation of DoS Attacks in SDN-Based 5G Network
by Adila Chusnul Fatiyah, Adhyatma Abbas, Paul Elijah Setiasabda, Wen-Bin Hsieh, Jenq-Shiou Leu and Shiang-Jiun Chen
Electronics 2026, 15(5), 1005; https://doi.org/10.3390/electronics15051005 - 28 Feb 2026
Viewed by 67
Abstract
Multi-Access Edge Computing (MEC) is a fundamental component for 5G networks to overcome the latency limitations of traditional cloud computing. However, bringing resources closer to users exposes edge nodes to significant security threats, particularly volumetric Denial of Service (DoS) attacks. Current defenses often depend on static thresholds or computationally expensive deep learning, which can exhaust the limited resources of MEC nodes. To address these limitations, this paper proposes a resource-optimized edge-centric security management logic that integrates Software Defined Network (SDN) with lightweight supervised learning (C5.0, Bagging-CART, and Random Forest). Unlike standard system integrations, we introduce a dynamic non-permanent blocking algorithm designed to balance detection accuracy with control plane stability. Experimental results demonstrate that the proposed C5.0 model, operating at a specific 0.20% sFlow sampling point, achieves 100% detection accuracy with under 100 ms mitigation latency. The system successfully reduces volumetric attack loads from 445 Mbps to 95 Mbps (a 78% reduction) at the node level. These findings confirm that the proposed framework achieves higher computational efficiency than complex alternatives, making it a highly stable solution for constrained 5G MEC environments. Full article
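The dynamic non-permanent blocking idea from the abstract above can be sketched as a timeout table: a flagged source is blocked for a fixed window and then automatically released, so stale rules do not accumulate in the flow table. The class name and the 30 s timeout are hypothetical, not values from the paper:

```python
class NonPermanentBlocker:
    """Minimal sketch of dynamic non-permanent blocking (assumed design)."""

    def __init__(self, timeout_s=30):
        self.timeout = timeout_s
        self.blocked = {}  # source address -> time at which the block expires

    def block(self, src, now):
        self.blocked[src] = now + self.timeout

    def is_blocked(self, src, now):
        expiry = self.blocked.get(src)
        if expiry is None:
            return False
        if now >= expiry:      # timeout elapsed: release the rule automatically
            del self.blocked[src]
            return False
        return True

b = NonPermanentBlocker(timeout_s=30)
b.block("10.0.0.5", now=100)
assert b.is_blocked("10.0.0.5", now=110)       # still inside the window
assert not b.is_blocked("10.0.0.5", now=131)   # released after the timeout
assert "10.0.0.5" not in b.blocked
```

In the SDN setting the `block`/release actions would translate into installing and removing drop rules via the controller, balancing detection strictness against control-plane stability.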

32 pages, 13390 KB  
Article
Robotic Arm Control Using a Q-Learning Reinforcement Algorithm
by Afonso M. Timóteo, Ramiro S. Barbosa and Isabel S. Jesus
Robotics 2026, 15(3), 50; https://doi.org/10.3390/robotics15030050 - 27 Feb 2026
Viewed by 245
Abstract
This paper presents the design and implementation of an integrated robotic system capable of detecting objects through computer vision and making decisions based on logic strategies to perform physical tasks. For that, the system uses a robotic arm to play the Tic-Tac-Toe game utilizing a Q-learning algorithm to determine optimal moves. The system can be controlled using a graphical interface that enables real-time monitoring, facilitating seamless interaction between the user and the robotic arm. Three algorithms with different decision strategies were developed: a random decision algorithm, the MiniMax algorithm, and Q-learning, a reinforcement-learning algorithm. The results obtained highlight the control of the robotic arm using kinematic equations, the training of a robust YOLOv5 model, and the effective learning capability of a Q-learning algorithm. The proposed system presents practical implementation of the robotic system which can be used as a basis for further projects and for teaching robotics. Full article
(This article belongs to the Section Sensors and Control in Robotics)
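The tabular Q-learning update behind the move selection above is compact enough to state directly. The two-state chain is a toy example to show convergence, not the Tic-Tac-Toe state space:

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in actions), default=0.0)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]

# Toy two-state chain: s1 -> s2 (reward 0), s2 -> terminal (reward 1).
Q = {}
for _ in range(200):
    q_update(Q, "s1", "right", 0.0, "s2", ["right"])
    q_update(Q, "s2", "right", 1.0, "end", [])
assert abs(Q[("s2", "right")] - 1.0) < 1e-6   # converges to the reward
assert abs(Q[("s1", "right")] - 0.9) < 1e-6   # discounted one step back
```

In the game setting, `s` is a board position, `a` a move, and the reward arrives only at a win/draw/loss, with the update propagating value back through earlier positions.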

18 pages, 2081 KB  
Article
Lyapunov-Based Hybrid Model Predictive Control for Asymmetric Damping-Driven Vehicle Height and Posture Adjustment
by Ao Chen and Jialing Yao
Electronics 2026, 15(5), 986; https://doi.org/10.3390/electronics15050986 - 27 Feb 2026
Viewed by 103
Abstract
A Lyapunov-based hybrid model predictive control (LHMPC) method is proposed for the control of a vehicle mixed logical dynamical (MLD) system that regulates vehicle height through asymmetric damping forces. This method addresses the limitations of traditional hybrid model predictive control (HMPC), including its inability to guarantee closed-loop stability, long prediction horizons, and excessive computational burden. The method incorporates the decreasing condition of the Lyapunov function as a contraction constraint mechanism, ensuring asymptotic stability throughout the control process. Additionally, by following the terminal constraint principle, the Lyapunov function is introduced as an inequality constraint set, replacing the terminal equality constraints typically used in traditional stability frameworks. This further guarantees the recursive feasibility and closed-loop stability of the MLD system optimization. Simulation results based on a seven-degree-of-freedom vehicle model demonstrate that the proposed LHMPC significantly outperforms conventional HMPC in terms of height tracking accuracy, convergence rate, vibration suppression, and real-time controller performance. Furthermore, the method can effectively harness the vehicle body’s vibrational energy while achieving coordinated control of vehicle height and posture, thereby reducing energy consumption during the height adjustment process. Full article
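The contraction-constraint idea described above has a generic form worth stating: the Lyapunov decrease condition enters the MPC optimization as an inequality constraint, replacing a terminal equality constraint while still certifying asymptotic stability. The notation below is illustrative, not taken from the paper:

```latex
% Generic MPC with a Lyapunov contraction constraint (symbols assumed):
\begin{aligned}
&\min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \ell(x_k, u_k) \\
&\text{s.t.} \quad x_{k+1} = f(x_k, u_k), \\
&\phantom{\text{s.t.} \quad} V(x_{k+1}) \le \rho\, V(x_k), \qquad 0 < \rho < 1 .
\end{aligned}
```

Because $V$ is forced to shrink by a factor $\rho$ at every step, any feasible solution inherits closed-loop stability, which is why shorter prediction horizons become viable.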

18 pages, 1079 KB  
Article
Feasibility of Using Large Language Models for Structured Medication Extraction from Clinical Text: A Comparative Analysis of Zero-Shot and Few-Shot Paradigms
by Evan Schulte, Mohamed Abusharkh, Kushal Dahal, Michael Klepser and Minji Sohn
Appl. Sci. 2026, 16(5), 2300; https://doi.org/10.3390/app16052300 - 27 Feb 2026
Viewed by 132
Abstract
The digitization of healthcare has been accompanied by a rapid expansion of electronic health records (EHRs); however, a significant proportion of critical patient data, specifically medication regimens, remains entrapped within unstructured clinical narratives. The inability to seamlessly compute this data hinders advancements in pharmacovigilance, clinical decision support, and population health management. This study presents a comprehensive, rigorous evaluation of the feasibility of deploying Large Language Models (LLMs) to automate the extraction of structured dosage information (Dose, Daily Frequency, Duration) from outpatient antimicrobial clinical notes sourced from the Collaboration to Harmonize Antimicrobial Registry Measures (CHARM) registry. We scrutinized the performance of five distinct open-weight architectures, namely GPT-OSS:20B, Gemma 2:9B, Mistral 7B, Qwen3:14B and Llama 3.2, across both Zero-Shot and Retrieval Augmented Generation (RAG)-based Few-Shot prompting paradigms. Our analysis reveals a fundamental architectural trade-off: the reasoning-optimized GPT-OSS:20B dominates the zero-shot landscape (F1 > 0.90) by leveraging abstract schema understanding, whereas the instruction-tuned Gemma 2:9B excels in the few-shot setting (F1 ~ 0.99), effectively utilizing examples as guardrails to surpass larger models. Conversely, smaller models (Mistral, Llama) exhibit a prohibitive “hallucination barrier,” rendering them unsafe for unsupervised clinical application. Furthermore, we identify “Inconsistent Unit Handling” and “Complex Temporal Logic” as persistent failure modes that resist simple scaling laws. This report provides a definitive framework for selecting model architectures based on the availability of few-shot examples and highlights the necessity of dynamic RAG strategies to achieve production-grade reliability in medical informatics. Full article
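The zero-shot versus few-shot distinction in the study above comes down to whether retrieved examples are spliced into the prompt before the query note. A minimal sketch follows; the instruction wording and the example notes are invented, only the three schema fields come from the abstract:

```python
def build_prompt(note, examples=None):
    """Assemble a zero-shot (no examples) or few-shot (RAG-retrieved examples)
    extraction prompt for the Dose / Daily Frequency / Duration schema."""
    header = ("Extract Dose, Daily Frequency, and Duration from the clinical "
              "record below. Answer as JSON with exactly those three keys.\n")
    shots = ""
    for ex_note, ex_json in (examples or []):
        shots += f"Note: {ex_note}\nAnswer: {ex_json}\n"
    return header + shots + f"Note: {note}\nAnswer:"

zero = build_prompt("amoxicillin 500 mg three times daily for 7 days")
few = build_prompt(
    "cephalexin 250 mg QID x10d",
    examples=[("doxycycline 100 mg twice daily for 14 days",
               '{"Dose": "100 mg", "Daily Frequency": "2", "Duration": "14 days"}')],
)
assert "doxycycline" not in zero       # zero-shot: schema description only
assert few.count("Note:") == 2         # few-shot: retrieved example + query
```

In the RAG setting, `examples` would be selected per query by similarity search, which is the "guardrail" effect the abstract credits for Gemma 2:9B's few-shot gains.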

17 pages, 2253 KB  
Article
A New Hydrogen Filling Method Based on the Analytical Solutions of Final Filling Time and Hydrogen Temperature
by Shanshan Deng, Hao Luo, Chenglong Li, Xianhuan Wu, Xu Wang, Tianqi Yang and Jinsheng Xiao
Energies 2026, 19(5), 1177; https://doi.org/10.3390/en19051177 - 26 Feb 2026
Viewed by 161
Abstract
To fill hydrogen fuel cell vehicles quickly and safely, the SAE J2601 protocol has published the MC method, which includes control of the filling speed and pressure target. The filling speed depends on the final filling time, the formula for which is obtained by fitting simulated data. The pressure target depends on the final hydrogen temperature, whose analytical solution is derived from a thermodynamic tank model. This article derives new analytical solutions of the final filling time and hydrogen temperature based on an established lumped-parameter model of the storage tank. Based on the original MC method’s control logic, a new filling method that directly uses the analytical solutions of the final filling time and hydrogen temperature is proposed. The simulation results of the new filling method and the validated model (zero-dimensional gas and a one-dimensional tank wall, 0D1D) are compared. Under the ambient temperature conditions of 0–20 °C and precooling temperature conditions of −20–0 °C set in this article, the results show that the new filling method achieves maximum errors of 4.3 °C in final hydrogen temperature and 0.9% in state of charge (SOC) compared to the 0D1D model. Parameter sensitivity analysis reveals that initial pressure has the most significant impact on computational accuracy, followed by ambient and precooling temperatures. Future work may further improve prediction accuracy by incorporating correction factors for initial pressure and ambient temperature. Moreover, since the analytical solution of the final hydrogen temperature inherently includes the precooling temperature parameter, the new filling method can automatically adapt to precooling temperature variations. Full article
(This article belongs to the Special Issue Advances in New Mobility for Electric Vehicles)
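A much simpler cousin of the analytical solution above is the textbook adiabatic energy balance for filling a rigid tank with an ideal gas: the enthalpy of the inflowing gas raises the internal energy of the tank contents. This is a toy model under stated assumptions (ideal gas, no heat loss, constant gamma), not the paper's lumped-parameter formulation:

```python
def final_fill_temperature(m0, T0, dm, T_in, gamma=1.4):
    """Adiabatic filling of a rigid tank: energy balance m*cv*T = m0*cv*T0 + dm*cp*T_in
    gives T = (m0*T0 + gamma*dm*T_in) / (m0 + dm).  Toy model, illustrative only."""
    return (m0 * T0 + gamma * dm * T_in) / (m0 + dm)

# Initially warm tank (300 K) filled with precooled gas (253 K):
T = final_fill_temperature(m0=1.0, T0=300.0, dm=4.0, T_in=253.0)
assert abs(T - 343.36) < 0.01
assert T > 300.0   # flow work heats the contents above even the initial temperature
```

The same mechanism is why fast hydrogen filling needs precooling and a temperature-aware pressure target: the gas ends up hotter than both the tank and the supply line.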

29 pages, 10558 KB  
Article
AI-Powered Interpretation of Traditional Village Landscape Language: An Analysis of Xinye Village in Zhejiang, China
by Yanying Liang, Tao Chen and Zizhen Hong
Sustainability 2026, 18(5), 2183; https://doi.org/10.3390/su18052183 - 24 Feb 2026
Viewed by 134
Abstract
Amidst rapid urbanization and modernization, numerous traditional villages in China face severe challenges, including landscape homogenization and the erosion of their distinctive characteristics. Addressing this issue requires a method capable of systematically identifying, analyzing, and reconstructing both the landscape and its underlying cultural features. This study proposes a digital analytical approach that integrates multimodal artificial intelligence with landscape language theory to address the homogenization of cultural landscapes in traditional Chinese villages. Taking Xinye Village in Zhejiang Province as a case study, the research systematically decodes its landscape spatial narratives and underlying cultural genes. This framework systematically deconstructs village landscapes across four levels: “vocabulary, context, grammar, and semantics”. The village image database is first automatically recognized and statistically analyzed by computer vision technology, which extracts 31 core landscape vocabulary items from three main categories and nine subcategories. Second, Retrieval-augmented Generation technology is employed to synthesize from the constructed domain-specific corpus, a natural context structured around Yuhua Mountain and Daofeng Mountain, as well as a cultural context based on ancestral hall order, connected through folk activities, and idealized by farming and reading passed down through generations. Building on this framework, a multimodal model was used to examine the spatial composition and combinatorial laws of landscape features. Six essential dimensions—spatial layout, visual order, element combination, functional relationships, circulation layout, and scale correlations—revealed the spatial grammar of shuikou landscape. Lastly, the semantic values conveyed by the landscape vocabulary were thoroughly analyzed across three dimensions—form, function, and culture—by integrating a knowledge base. 
This work creates a landscape language atlas of Xinye Village by combining these studies and using a linguistic model of “character-word-sentence-paragraph”. By methodically deciphering the clan’s cultural code of “farming and reading passed down through generations”, the framework clearly reconstructs the spatial narrative logic from micro-elements to macro-patterns. This research not only advances the study of landscape language in traditional villages from qualitative description toward a systematic, digital, and interpretable paradigm but also provides an operational theoretical and methodological foundation for the in-depth interpretation, conservation, and transmission of traditional village cultural landscapes. Full article

48 pages, 3619 KB  
Article
Comparative Assessment of the Reliability of Non-Recoverable Subsystems of Mining Electronic Equipment Using Various Computational Methods
by Nikita V. Martyushev, Boris V. Malozyomov, Anton Y. Demin, Alexander V. Pogrebnoy, Georgy E. Kurdyumov, Viktor V. Kondratiev and Antonina I. Karlina
Mathematics 2026, 14(4), 723; https://doi.org/10.3390/math14040723 - 19 Feb 2026
Viewed by 218
Abstract
The assessment of reliability in non-repairable subsystems of mining electronic equipment represents a computationally challenging problem, particularly for complex and highly connected structures. This study presents a systematic comparative analysis of several deterministic approaches for reliability estimation, focusing on their computational efficiency, accuracy, and applicability. The investigated methods include classical boundary techniques (minimal paths and cuts), analytical decomposition based on the Bayes theorem, the logic–probabilistic method (LPM) employing triangle–star transformations, and the algorithmic Structure Convolution Method (SCM), which is based on matrix reduction of the system’s connectivity graph. The reliability problem is formally represented using graph theory, where each element is modeled as a binary variable with independent failures, which is a standard and practically justified assumption for power electronic subsystems operating without common-cause coupling. Numerical experiments were carried out on canonical benchmark topologies—bridge, tree, grid, and random connected graphs—representing different levels of structural complexity. The results demonstrate that the SCM achieves exact reliability values with up to six orders of magnitude acceleration compared to the LPM for systems containing more than 20 elements, while maintaining polynomial computational complexity. Qualitatively, the compared approaches differ in the nature of the output and practical applicability: boundary methods provide fast interval estimates suitable for preliminary screening, whereas decomposition may exhibit a systematic bias for highly connected (non-series–parallel) topologies. In contrast, the SCM consistently preserves exactness while remaining computationally tractable for medium and large sparse-to-moderately dense graphs, making it preferable for repeated recalculations in design and optimization workflows. 
The methods were implemented in Python 3.7 using NumPy and NetworkX, ensuring transparency and reproducibility. The findings confirm that the SCM is an efficient, scalable, and mathematically rigorous tool for reliability assessment and structural optimization of large-scale non-repairable systems. The presented methodology provides practical guidelines for selecting appropriate reliability evaluation techniques based on system complexity and computational resource constraints. Full article
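The exactness baseline against which methods like the SCM are judged can be sketched by brute force: enumerate all 2^n element states of the connectivity graph and sum the probabilities of the states in which the two terminals remain connected. This is a generic illustration of two-terminal reliability on the canonical bridge topology, not the paper's SCM algorithm; the node labels and probability are hypothetical, and the exponential enumeration is exactly the cost that the SCM's polynomial matrix reduction avoids.

```python
import itertools


def terminal_pair_reliability(edges, s, t, p):
    """Exact s-t reliability by enumerating all 2**n element states.

    Each element works independently with probability p. Feasible only for
    small n; the exponential enumeration is what polynomial-time methods
    such as the SCM are designed to avoid.
    """
    n = len(edges)
    total = 0.0
    for states in itertools.product((0, 1), repeat=n):
        adj = {}  # adjacency over surviving elements only
        for (u, v), alive in zip(edges, states):
            if alive:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        seen, stack = {s}, [s]  # depth-first search from terminal s
        while stack:
            for nb in adj.get(stack.pop(), []):
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        if t in seen:  # terminals connected in this state
            k = sum(states)
            total += p**k * (1 - p) ** (n - k)
    return total


# Canonical bridge: two parallel s-t branches plus a cross-link element a-b
bridge = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]
r_bridge = terminal_pair_reliability(bridge, "s", "t", 0.9)
```

For equal element reliability p, the result agrees with the Bayes-decomposition closed form obtained by conditioning on the cross-link element, p[1-(1-p)^2]^2 + (1-p)[1-(1-p^2)^2], which gives 0.97848 at p = 0.9.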

22 pages, 1159 KB  
Review
Investigation of the Control Strategies for Enhancing the Efficiency of Natural Gas Separation and Purification Processes
by Alexander Vitalevich Martirosyan and Daniil Vasilievich Romashin
Processes 2026, 14(4), 700; https://doi.org/10.3390/pr14040700 - 19 Feb 2026
Viewed by 361
Abstract
Natural gas separation and purification are critical stages for ensuring product quality, operational safety, and economic efficiency in the energy sector. However, a significant research gap exists: conventional control systems, predominantly based on proportional-integral-derivative (PID) controllers, are often static and lack the adaptability required to handle fluctuations in raw gas composition and operating conditions. This review aims to systematically analyze modern control strategies to identify the most influential parameters and effective methodologies for enhancing process efficiency. The methodology involves a comparative assessment of classical PID control against advanced intelligent approaches, including adaptive control, fuzzy logic, and machine learning (ML) models, based on a synthesis of the recent literature and industrial case studies. The key finding is that data-driven and intelligent methods (e.g., neural networks, adaptive fuzzy controllers) demonstrate superior performance in achieving precise parameter adjustment, improving responsiveness, and optimizing energy consumption compared to traditional static systems. Such an integrated strategy transforms decision-making into a multivariable optimization framework with objectives encompassing minimizing pollutants, lowering energy usage, and enhancing end-product specifications. The present work argues for employing methodologies like systemic analyses and advanced computational techniques—particularly artificial neural networks—to forecast gas stream attributes. Full article
(This article belongs to the Section Process Control and Monitoring)
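The static baseline that the review contrasts with adaptive and learning-based controllers can be sketched as a discrete-time PID loop. The gains, time step, and first-order plant below are hypothetical toy values chosen only to demonstrate the control law, not parameters from any separation unit discussed in the review.

```python
class PID:
    """Minimal discrete-time PID controller (illustrative sketch)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # accumulate I term
        derivative = (error - self.prev_error) / self.dt    # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy first-order plant dy/dt = -y + u, Euler-integrated with step dt = 0.05;
# the integral action drives the steady-state error to zero.
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.05)
y = 0.0
for _ in range(2000):
    u = pid.update(1.0, y)   # regulate the output toward setpoint 1.0
    y += 0.05 * (-y + u)
```

Because the gains are fixed at design time, such a controller cannot retune itself when the plant drifts, which is precisely the limitation that the adaptive, fuzzy, and ML-based schemes surveyed in the review aim to overcome.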
