Appl. Syst. Innov., Volume 9, Issue 2 (February 2026) – 21 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 4014 KB  
Article
Multi-Class Leak Detection in Water Pipelines Using a Wavelet-Guided Frequency-Informed Transformer
by Mohammed Essouabni, Jamal El Mhamdi and Abdelilah Jilbab
Appl. Syst. Innov. 2026, 9(2), 47; https://doi.org/10.3390/asi9020047 - 23 Feb 2026
Viewed by 685
Abstract
Water utilities lose substantial volumes of Non-Revenue Water (NRW) to leaks that go undetected, making accurate yet easy-to-deploy monitoring solutions necessary. This paper presents FiT-WST+, a wavelet-guided Frequency-Informed Transformer (FiT) designed to classify five distinct leak types from accelerometer measurements. The proposed architecture combines the spectral modelling ability of the FiT with the stable, translation-invariant representation of the Wavelet Scattering Transform (WST). A guided attention mechanism fuses complementary spectral and scattering cues, sharpening class separation, especially between similar fault types. On the held-out test set, FiT-WST+ achieves 99.6% accuracy, 99.6% balanced accuracy, and a 99.6% macro-averaged F1-score. Comparative benchmarking against recent methods on the same dataset shows that the approach operates at a low sampling rate (1 kHz), greatly lowering bandwidth requirements and enabling scalable deployment on resource-constrained edge devices for real-time monitoring of critical water infrastructure. Full article
(This article belongs to the Section Artificial Intelligence)
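The abstract does not spell out the guided attention mechanism; the toy sketch below only illustrates the general idea of one feature stream gating another. The function names, shapes, and gating rule are our assumptions, not the authors' design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def guided_fusion(spectral, scattering, w_q, w_k):
    """Toy attention gate: the scattering stream 'guides' which
    spectral features to emphasize. spectral/scattering: (d,) vectors."""
    q = w_q @ scattering          # query derived from the guiding stream
    k = w_k @ spectral            # keys from the spectral stream
    gate = softmax(q * k)         # element-wise relevance scores
    return gate * spectral + scattering  # gated residual combination

rng = np.random.default_rng(0)
d = 8
fused = guided_fusion(rng.normal(size=d), rng.normal(size=d),
                      rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(fused.shape)
```

In the actual FiT-WST+, the two streams would be WST coefficients and transformer spectral features rather than random vectors.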
10 pages, 831 KB  
Article
Smart Farming Innovation: Automated Biomechanical Monitoring of Broilers Using a Hybrid YOLO-SAM Pipeline
by Victória Fernanda Dionizio, Marcelo Tsuguio Okano and Irenilza de Alencar Nääs
Appl. Syst. Innov. 2026, 9(2), 46; https://doi.org/10.3390/asi9020046 - 20 Feb 2026
Viewed by 665
Abstract
Precision Livestock Farming (PLF) relies on accurate, high-frequency data to optimize production efficiency. Traditional assessments of feeding behavior remain manual and invasive, lacking the kinematic resolution required for automated control systems. This study developed and validated a novel computer vision framework integrating YOLOv8 and the Segment Anything Model (SAM) to address this gap. The objective was to engineer a non-invasive, automated pipeline to quantify high-speed broiler biomechanics in real time. The system was validated using video data from broilers across three growth stages and varying feed granulometries (fine mash, coarse mash, and pellets) to test its robustness in detecting subtle kinematic variations. The hybrid YOLO-SAM pipeline achieved high performance, with a precision of 0.95 and a recall of 0.91, confirming its reliability as a scalable sensor for smart farming platforms. Biomechanical analysis demonstrated the system’s sensitivity, showing that larger feed particles induce greater beak gape and displacement while significantly improving ingestion efficiency (0.6 effort ratio for pellets vs. 3.0 for mash). This research provides a validated technical foundation for digital phenotyping in poultry, offering a hands-free, quantitative tool that supports data-driven decision-making in feed formulation and production management. Full article
13 pages, 1208 KB  
Article
TRM-ViT: A Tiny Recursive Vision Transformer for Efficient Melanoma Detection
by My Abdelouahed Sabri, Ali Belkhiri, Abla Rahmouni and Abdellah Aarab
Appl. Syst. Innov. 2026, 9(2), 45; https://doi.org/10.3390/asi9020045 - 19 Feb 2026
Viewed by 704
Abstract
Melanoma remains one of the most aggressive forms of skin cancer, and its early detection is critical for improving patient survival. Vision Transformers (ViTs) have recently shown strong performance in dermoscopic image analysis; however, their effectiveness often relies on stacking multiple transformer encoder blocks, resulting in large numbers of trainable parameters and increased model complexity. In this study, we propose TRM-ViT, a parameter-efficient recursive Vision Transformer designed for binary melanoma classification. Instead of using multiple independent encoder blocks, TRM-ViT applies a single transformer encoder block recursively with shared weights, enabling effective depth while substantially reducing the number of trainable parameters. Experiments conducted on the HAM10000 dataset demonstrate that TRM-ViT achieves a ROC–AUC of 0.7952, comparable to a standard Vision Transformer (0.7951), while using approximately seven times fewer parameters (2.15 M vs. 14.57 M). Notably, the proposed model maintains high melanoma sensitivity, making it particularly suitable for screening-oriented applications. These results indicate that recursive weight sharing can provide an effective trade-off between diagnostic performance and model compactness, supporting the development of efficient decision-support tools for melanoma screening in resource-constrained environments. Full article
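The core idea of TRM-ViT, reusing one encoder block with shared weights, can be illustrated independently of any deep learning framework. The block below is a deliberately tiny stand-in (a linear map with a residual connection), not the authors' architecture:

```python
import numpy as np

def encoder_block(x, w):
    """Toy 'encoder block': a linear map with a residual connection and
    tanh nonlinearity, standing in for attention + MLP."""
    return x + np.tanh(w @ x)

def recursive_vit(x, w, depth):
    """Apply the SAME block (shared weights w) `depth` times, so the
    effective depth grows while the parameter count stays constant."""
    for _ in range(depth):
        x = encoder_block(x, w)
    return x

d = 16
w = np.random.default_rng(1).normal(scale=0.1, size=(d, d))
out = recursive_vit(np.ones(d), w, depth=6)
# Six "layers" of depth but only d*d = 256 trainable parameters,
# versus 6 * 256 = 1536 for six independent blocks.
print(out.shape)
```

This is the same trade-off the paper reports at scale: roughly seven times fewer parameters for comparable ROC–AUC.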
22 pages, 4357 KB  
Article
Pipeline Curvature Detection Using a Pipeline Inspection Gauge Equipped with Multiple Odometry
by Eloina Lugo-del-Real, Jorge A. Soto-Cajiga, Antonio Ramirez-Martinez, Edmundo Guerra Paradas and Antoni Grau
Appl. Syst. Innov. 2026, 9(2), 44; https://doi.org/10.3390/asi9020044 - 19 Feb 2026
Viewed by 1708
Abstract
Pipeline integrity is crucial for ensuring the safe and efficient transportation of hydrocarbons. One of the essential methods for maintaining pipeline integrity is periodic inspection using Pipeline Inspection Gauges (PIGs). These PIGs traverse extensive pipeline networks, collecting critical data from inertial navigation and inspection technologies such as geometric, ultrasonic, or magnetic flux inspection. Following an inspection, the data are downloaded for post-processing to identify and accurately locate pipeline anomalies. Accurate positioning of indications is crucial for effective repair or maintenance of the identified pipeline section, so ongoing efforts aim to improve the precision of indication positioning. This study introduces an innovative method and model for deriving pipeline trajectory characteristics to enhance positioning accuracy. The method is based on distance sampling of odometers and improves the PIG displacement measurement by implementing multiple odometries; compensating for odometer slip in this way reduced the distance measurement error from 15.67% to 1.38%. The model simulates trajectories with three and four odometers through curves and calculates the curvature along the pipeline from odometer data. Evaluated with real data from a test circuit, the proposed method and model yield trajectory characteristics such as curvature, allowing linear sections to be distinguished from bend sections. However, the curvature measurement error remains considerable due to odometer slippage; future work therefore proposes using additional odometers to improve measurement accuracy. Full article
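The paper's exact fusion scheme is not given in the abstract. One common way to exploit redundant odometers is a robust per-sample statistic, since a slipping wheel under-reports distance; the fusion rule below is our assumption, not necessarily the authors' method:

```python
import numpy as np

def fuse_odometers(readings, rule="median"):
    """readings: shape (n_samples, n_odometers), the incremental distance
    each wheel reports per sample. A slipping wheel under-reports, so a
    robust statistic across wheels discounts it."""
    readings = np.asarray(readings, dtype=float)
    if rule == "median":
        per_sample = np.median(readings, axis=1)
    elif rule == "max":   # assumes slip only ever loses distance
        per_sample = readings.max(axis=1)
    else:
        raise ValueError(rule)
    return per_sample.sum()

# Three odometers over 4 samples; odometer 1 slips on samples 2-3.
readings = [[1.0, 1.0, 1.0],
            [1.0, 0.2, 1.0],
            [1.0, 0.1, 1.0],
            [1.0, 1.0, 1.0]]
print(fuse_odometers(readings))   # slip rejected
print(np.mean(readings) * 4)      # naive averaging is biased low
```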
17 pages, 1014 KB  
Article
A Multi-Domain Collaborative Framework for Practical Application of Causal Knowledge Discovery from Public Data in Elite Sports
by Dandan Cui, Zili Jiang, Xiangning Zhang, Wenchao Yang and Zihong He
Appl. Syst. Innov. 2026, 9(2), 43; https://doi.org/10.3390/asi9020043 - 14 Feb 2026
Viewed by 802
Abstract
In elite sports, discovering interdisciplinary causal relationships from public data is critical for gaining a competitive edge. However, the causal knowledge required for these practices is difficult to obtain through either existing intervention-based sports science methods or computational techniques focused on statistical association. This paper formalizes a multi-domain collaborative framework involving three roles: (1) the elite sports team; (2) the sport science expert; and (3) the causal inference expert. A nine-step workflow, which processes three core elements (problem, data, and computing), guides these experts through a cycle that systematically transforms practical problems into computational models and, crucially, translates complex analytical outputs back into actionable strategies. The framework also introduces a dual-dimensional "field evaluation" method, encompassing both process and outcome, to quantify the trustworthiness of knowledge in practical settings where a "gold standard" is absent. The framework was applied in an illustrative case study prior to the Paris 2024 Olympics, providing one additional evidence-informed input for the national team; the observed success is interpreted as contextual consistency rather than causal validation. The framework thus supports the practical application of causal discovery in elite sports, offering a repeatable and explainable pathway for generating credible, evidence-based insights from public data. Full article
(This article belongs to the Special Issue Recent Developments in Data Science and Knowledge Discovery)
28 pages, 14898 KB  
Article
Deep Learning for Classification of Internal Defects in Fused Filament Fabrication Using Optical Coherence Tomography
by Valentin Lang, Qichen Zhu, Malgorzata Kopycinska-Müller and Steffen Ihlenfeldt
Appl. Syst. Innov. 2026, 9(2), 42; https://doi.org/10.3390/asi9020042 - 14 Feb 2026
Viewed by 746
Abstract
Additive manufacturing is increasingly adopted for the industrial production of small series of functional components, particularly in thermoplastic strand extrusion processes such as Fused Filament Fabrication. This transition relies on technological advances addressing key process limitations, including dimensional instability, weak interlayer bonding, extrusion defects, moisture sensitivity, and insufficient melting. Process monitoring therefore focuses on early defect detection to minimize failed builds and costs, while ultimately enabling process optimization and adaptive control to mitigate defects during fabrication. For this purpose, a data processing pipeline for monitoring Optical Coherence Tomography images acquired in Fused Filament Fabrication is introduced. Convolutional neural networks are used for the automatic classification of tomographic cross-sections. A dataset of tomographic images undergoes semi-automatic labeling, preprocessing, model training, and evaluation. A sliding window detects outlier regions in the tomographic cross-sections, while masks suppress peripheral noise, enabling label generation based on outlier ratios. Data are split into training, validation, and test sets using block-based partitioning to limit leakage. The classification model employs a ResNet-V2 architecture with BottleneckV2 modules. Hyperparameters are optimized, with N = 2, K = 2, dropout 0.5, and learning rate 0.001 yielding the best performance. The model achieves 0.9446 accuracy and outperforms EfficientNet-B0 and VGG16 in accuracy and efficiency. Full article
(This article belongs to the Special Issue AI-Driven Decision Support for Systemic Innovation)
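The outlier-ratio labeling step can be illustrated on a 1-D intensity profile. The window size, z-score rule, and thresholds below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def label_cross_section(profile, win=8, z_thresh=2.0, ratio_thresh=0.1):
    """Toy outlier-ratio labeling: slide a window over a 1-D intensity
    profile, flag windows whose mean deviates strongly from the global
    statistics, and label the sample 'defect' when the fraction of
    flagged windows exceeds a threshold."""
    profile = np.asarray(profile, dtype=float)
    mu, sigma = profile.mean(), profile.std() + 1e-9
    flags = []
    for start in range(0, len(profile) - win + 1, win):
        window = profile[start:start + win]
        flags.append(abs(window.mean() - mu) / sigma > z_thresh)
    outlier_ratio = np.mean(flags)
    return ("defect" if outlier_ratio > ratio_thresh else "ok", outlier_ratio)

clean = np.zeros(64)
defective = np.zeros(64); defective[24:32] = 50.0   # a bright void
print(label_cross_section(clean))
print(label_cross_section(defective))
```

The paper applies the same idea to 2-D tomographic cross-sections, with masks suppressing peripheral noise before the ratio is computed.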
17 pages, 27367 KB  
Article
3D Finite Element Models of Zigzag Grounding Transformer for Zero-Sequence Impedance Calculation
by Juan C. Olivares-Galvan, Manuel A. Corona-Sánchez, Rodrigo Ocon-Valdez, Jose L. Hernandez-Avila, Rafael Escarela-Perez and David A. Aragon-Verduzco
Appl. Syst. Innov. 2026, 9(2), 41; https://doi.org/10.3390/asi9020041 - 13 Feb 2026
Viewed by 1257
Abstract
Accurate prediction of the zero-sequence impedance (Z0) of three-legged zigzag grounding transformers is essential for ground-fault protection and power-quality performance, yet manufacturer analytical estimations often have limited accuracy. This paper investigates how accurately Z0 can be predicted using 3D finite element method (FEM) models based on the stored magnetic energy approach and how modeling the metallic tank and nonlinear core B–H behavior affects Z0 relative to analytical calculations and laboratory measurements. Two 3D FEM models are developed for a three-legged zigzag grounding transformer, incorporating the nonlinear core characteristic; impedance boundary conditions are used to efficiently account for tank-induced currents while reducing computational cost. The FEM results are compared with laboratory tests and with the analytical method used by manufacturers. The proposed models achieve errors below 4% with respect to the nominal Z0 and outperform the analytical approach. The contributions are a validated 3D FEM methodology that resolves zero-sequence flux paths under fault conditions and a practical modeling tool that improves grounding transformer design and ground-fault protection settings in modern power systems. Full article
(This article belongs to the Section Industrial and Manufacturing Engineering)
39 pages, 7239 KB  
Article
A Novel Hybrid Neural Network with Optimized Feature Selection for Spindle Thermal Error Prediction
by Lifeng Yin, Chenglong Li, Yaohan Peng, Hao Tang, Ningruo Wang and Huayue Chen
Appl. Syst. Innov. 2026, 9(2), 40; https://doi.org/10.3390/asi9020040 - 5 Feb 2026
Viewed by 783
Abstract
In modern intelligent manufacturing, spindle thermal errors are critical to machining accuracy. To address this, we propose a two-stage prediction framework. First, for feature selection, an enhanced Red-Billed Magpie Optimization algorithm (RBMO-X) optimizes the parameters of a hybrid convolutional neural network (DLTK). Concurrently, PSO-optimized HDBSCAN clustering combined with Pearson correlation selects optimal temperature-sensitive points. The DLTK network integrates LSTM, deformable convolution, Transformer, and Fourier KAN modules for robust spatiotemporal feature extraction. The experimental results demonstrate significant improvements. The proposed feature selection method improves the Silhouette index by 32.39% and increases BWP by 49.16%. Using the selected points reduces prediction RMSE by 31.89% compared to random selection. The final RBMO-X-DLTK model achieves an RMSE of 0.181 μm, an MAE of 0.128 μm, and an R2 score of 0.9978, outperforming seven benchmark models (e.g., BP, LSTM, CNN-LSTM). In practical validation, the model enabled an average thermal error reduction of 89%. This integrated approach provides a robust and accurate solution for spindle thermal error prediction, demonstrating strong generalization capability. Full article
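The Pearson-correlation part of the temperature-sensitive point selection is straightforward to sketch; the PSO-optimized clustering stage is omitted, and all data here are synthetic:

```python
import numpy as np

def select_sensitive_points(temps, error, k=2):
    """Rank temperature channels by |Pearson correlation| with the
    measured thermal error and keep the top k. temps: (n, channels)."""
    temps = np.asarray(temps, dtype=float)
    error = np.asarray(error, dtype=float)
    corrs = [abs(np.corrcoef(temps[:, j], error)[0, 1])
             for j in range(temps.shape[1])]
    order = np.argsort(corrs)[::-1]       # most correlated first
    return list(order[:k]), corrs

rng = np.random.default_rng(2)
n = 200
error = rng.normal(size=n)
temps = np.column_stack([
    error + 0.05 * rng.normal(size=n),    # channel 0: strongly coupled
    rng.normal(size=n),                   # channel 1: pure noise
    -error + 0.5 * rng.normal(size=n),    # channel 2: inversely coupled
])
chosen, corrs = select_sensitive_points(temps, error, k=2)
print(sorted(chosen))
```

Taking the absolute correlation matters: channel 2 is inversely coupled yet still informative, which a signed ranking would miss.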
21 pages, 797 KB  
Article
Dynamic Logarithmic Quantized Stabilization of Switched Systems Subject to Denial-of-Service Attacks
by Yunhui Gu, Jingjing Yan and Yunliang Ma
Appl. Syst. Innov. 2026, 9(2), 39; https://doi.org/10.3390/asi9020039 - 3 Feb 2026
Viewed by 459
Abstract
The problem of dynamic quantized stabilization of networked switched systems subject to denial-of-service (DoS) attacks is investigated. Firstly, a quasi-periodic logarithmic quantization strategy is proposed, which preserves quantization accuracy under a limited number of quantization levels. Secondly, the adjustment time and the update period of the quantizer are designed to avoid quantizer saturation under DoS attacks. Subsequently, a quantized feedback controller is designed for the switched system under DoS attacks, and sufficient conditions are obtained to ensure the global asymptotic stability of the closed-loop system. Finally, the effectiveness of the theoretical analysis is verified on a dual-tank system. Full article
(This article belongs to the Section Control and Systems Engineering)
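A standard logarithmic quantizer of the kind the abstract refers to can be written in a few lines. The parameter values are illustrative, and the paper's quasi-periodic adjustment logic is not modeled:

```python
import math

def log_quantize(v, u0=1.0, rho=0.5):
    """Standard logarithmic quantizer: levels are u0 * rho**i (i integer),
    and q(v) = u_i whenever u_i/(1+delta) < |v| <= u_i/(1-delta), with
    delta = (1-rho)/(1+rho). This bounds the RELATIVE error
    |q(v)-v| <= delta*|v|, which quantized control designs with a
    limited number of levels exploit."""
    if v == 0:
        return 0.0
    delta = (1 - rho) / (1 + rho)
    s = math.copysign(1.0, v)
    i = math.floor(math.log(abs(v) * (1 - delta) / u0, rho))
    return s * u0 * rho ** i

rho = 0.5
delta = (1 - rho) / (1 + rho)        # relative error bound = 1/3
for v in [0.9, 0.3, 0.07, -2.0, 0.5]:
    q = log_quantize(v)
    assert abs(q - v) <= delta * abs(v) + 1e-12
print("relative error bound", delta, "holds")
```

Unlike a uniform quantizer, whose absolute error is fixed, coarseness here scales with signal magnitude, which is why finitely many levels can still cover a wide dynamic range.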
19 pages, 4373 KB  
Article
Exploring Problem-Solving Strategies in Gifted and Regular Students: Education Insights from Eye-Tracking Analysis
by Po-Lei Lee, Shih-Ting Hung, Pao-Hsin Chang, Chun-Yen Chang, Lei Bao, Ting-Kuang Yeh and Li-Ching Lee
Appl. Syst. Innov. 2026, 9(2), 38; https://doi.org/10.3390/asi9020038 - 1 Feb 2026
Cited by 1 | Viewed by 840
Abstract
This study investigated how gifted and regular high school students employ different cognitive strategies and integrate information during scientific problem solving, using eye-tracking techniques. Eighteen multiple-choice items were selected from the Investigating Scientific Thinking and Reasoning (iSTAR) assessment developed at The Ohio State University, including nine text-only questions (tMCQs) and nine picture-embedded questions (pMCQs). The items were chosen to ensure clear spatial separation among text, image, and answer areas, allowing reliable region-based eye-movement analysis. Eye-tracking data were analyzed using two indices: fixation time ratio (FTR), reflecting relative attention allocation, and saccade count ratio (SCR), capturing cross-region information integration. The results revealed clear group differences. Gifted students devoted a larger proportion of attention to pictorial information (0.38 vs. 0.32) and showed more frequent transitions between picture and answer regions (0.15 vs. 0.12), indicating more integrative processing and mental model construction. In contrast, regular students spent more time focusing on textual regions and exhibited higher within-text saccade activity, consistent with a direct translation strategy. Furthermore, SCR-based machine learning classification using a Random Forest model demonstrated meaningful discriminative capability between the two groups, particularly for picture-embedded questions, achieving an accuracy of 77.5%. Overall, the findings provide empirical evidence that question format influences students’ cognitive strategies during scientific reasoning. Methodologically, this study combines a validated reasoning assessment, a carefully defined ROI-based eye-tracking design, and interpretable behavioral indicators, offering practical implications for differentiated science instruction. Full article
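The two indices can be computed directly from region-labeled gaze data. The definitions below are our reading of the abstract (fixation time ratio as a share of total fixation time, saccade count ratio as a share of region transitions) and may differ in detail from the study's:

```python
def fixation_time_ratio(fixations, region):
    """FTR: share of total fixation time spent in `region`.
    fixations: list of (region, duration_ms) tuples."""
    total = sum(d for _, d in fixations)
    in_region = sum(d for r, d in fixations if r == region)
    return in_region / total

def saccade_count_ratio(sequence, a, b):
    """SCR: fraction of saccades (consecutive region transitions) that
    move between regions a and b, in either direction."""
    saccades = list(zip(sequence, sequence[1:]))
    cross = sum(1 for s, t in saccades if {s, t} == {a, b})
    return cross / len(saccades)

fixations = [("text", 300), ("picture", 400), ("answer", 300)]
scanpath = ["text", "picture", "answer", "picture", "text"]
print(fixation_time_ratio(fixations, "picture"))           # 0.4
print(saccade_count_ratio(scanpath, "picture", "answer"))  # 0.5
```

Per the abstract, gifted students showed higher values of both indices for the picture and answer regions, consistent with more integrative processing.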
40 pages, 14070 KB  
Article
Remote Laboratory Based on FPGA Devices Using the E-Learning Approach
by Victor H. García Ortega, Josefina Bárcenas López and Enrique Ruiz-Velasco Sánchez
Appl. Syst. Innov. 2026, 9(2), 37; https://doi.org/10.3390/asi9020037 - 31 Jan 2026
Viewed by 1236
Abstract
Laboratories across educational levels have traditionally required in-person attendance, limiting practical activities to specific times and physical spaces. This paper presents a technological architecture based on a system-on-chip (SoC) and a connectivist model, grounded in Connectivism Learning Theory, for implementing a remote laboratory in digital logic design using FPGA devices. The architecture leverages an Internet-of-Things (IoT) environment to provide applications and servers that enable remote access, programming, manipulation, and visualization of FPGA-based development boards located in the institution’s laboratory, from anywhere and at any time. The connectivist model allows learners to interact with multiple nodes for attending synchronous classes, performing laboratory exercises, managing the remote laboratory, and accessing educational resources asynchronously. This approach aims to enhance learning, knowledge transfer, and skills development. A four-year evaluation was conducted, including one experimental group using an e-learning approach and three in-person control groups from a Digital Logic Design course. The experimental group achieved an average performance score of 9.777, surpassing the control groups, suggesting improved academic outcomes with the proposed system. Additionally, a Technology Acceptance Model-based survey showed very high acceptance among learners. This paper presents a novel connectivist model, which we call the Massive Open Online Laboratory. Full article
30 pages, 2823 KB  
Article
ADAEN: Adaptive Diffusion Adversarial Evolutionary Network for Unsupervised Anomaly Detection in Tabular Data
by Yong Lu, Sen Wang, Lingjun Kong and Wenju Wang
Appl. Syst. Innov. 2026, 9(2), 36; https://doi.org/10.3390/asi9020036 - 30 Jan 2026
Viewed by 645
Abstract
Existing unsupervised anomaly detection methods suffer from insufficient parameter precision, poor robustness to noise, and limited generalization capability. To address these issues, this paper proposes an Adaptive Diffusion Adversarial Evolutionary Network (ADAEN) for unsupervised anomaly detection in tabular data. The proposed network employs an adaptive hierarchical feature evolution generator that captures multi-scale feature representations at different abstraction levels through learnable attribute encoding and a three-layer Transformer encoder, effectively mitigating the gradient vanishing problem and the difficulty of modeling complex feature relationships that are commonly observed in conventional generators. ADAEN incorporates a multi-scale adaptive diffusion-augmented discriminator, which preserves scale-specific features across different diffusion stages via cosine-scheduled adaptive noise injection, thereby endowing the discriminator with diffusion-stage awareness. Furthermore, ADAEN introduces a multi-scale robust adversarial gradient loss function that ensures training stability through a diffusion-step-conditional Wasserstein loss combined with gradient penalty. The method has been evaluated on 14 UCI benchmark datasets and achieves state-of-the-art performance in anomaly detection compared to existing advanced algorithms, with an average improvement of 8.3% in AUC, an 11.2% increase in F1-Score, and a 15.7% reduction in false positive rate. Full article
(This article belongs to the Section Artificial Intelligence)
34 pages, 5402 KB  
Review
The Rise of Foundation Models: Opportunities, Technology, Applications, Challenges, Recent Trends, and Future Directions
by Ali Hussain, Umm E. Farwa, Sikandar Ali and Hee-Cheol Kim
Appl. Syst. Innov. 2026, 9(2), 35; https://doi.org/10.3390/asi9020035 - 30 Jan 2026
Viewed by 3368
Abstract
Foundation models (FMs) represent a paradigm shift in artificial intelligence, allowing one large-scale pretrained model to be customized for a broad set of downstream tasks using very little task-specific data. These models, which include GPT, CLIP, BERT, and vision transformers, are built on enormous datasets and self-supervised learning and have reshaped transfer learning and multimodal understanding. The paper provides a broad view of the modern state of foundation models, with an emphasis on their technological foundations, training, and cross-domain use in fields such as natural language processing, computer vision, healthcare, robotics, and scientific discovery. We also explore the main opportunities that FMs offer, together with state-of-the-art methods and techniques for their development, and we examine their limitations and challenges. Lastly, future prospects are discussed so that professionals and scientists obtain a better understanding of how foundation models can serve their research goals. Full article
32 pages, 2264 KB  
Article
Hybrid Fuzzy–Rough MCDM Framework and Decision Support Application for Sustainable Evaluation of Virtualization Technologies
by Seren Başaran
Appl. Syst. Innov. 2026, 9(2), 34; https://doi.org/10.3390/asi9020034 - 30 Jan 2026
Viewed by 680
Abstract
Sustainable virtualization is essential for enterprises seeking to reduce energy use, increase resource efficiency, and align IT operations with global sustainability goals. This study describes a hybrid decision-support framework that uses the ISO/IEC 25010 quality characteristics and sustainability factors to evaluate virtualization technologies with FAHP, RST, and TOPSIS. To obtain robust FAHP weights under uncertainty, expert linguistic assessments are converted into fuzzy pairwise comparisons. RST is then used to identify the most important sustainability criteria, improving interpretability while minimizing model complexity, and TOPSIS ranks the virtualization platforms by their closeness to an ideal sustainability solution. Empirical validation involved five domain experts, eight criteria, and four virtualization platforms. Performance efficiency, reliability, and security emerged as the main criteria, with lightweight, resource-efficient hypervisors scoring highest on sustainability. To implement the framework, a lightweight web-based decision-support dashboard was developed, allowing real-time FAHP computation, RST reduct extraction, TOPSIS ranking visualization, and automatic sustainability reporting. The proposed technique provides a clear, replicable, and functional tool for sustainability-focused virtualization decisions, helping IT administrators link digital infrastructure planning with SDG-driven green IT objectives. Full article
(This article belongs to the Topic Collection Series on Applied System Innovation)
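The TOPSIS stage is a standard algorithm; a compact reference implementation is sketched below with hypothetical platform scores. The FAHP weighting and RST reduction that precede it in the paper's pipeline are not modeled:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Classic TOPSIS: rank alternatives by relative closeness to the
    ideal solution. matrix: (alternatives x criteria); benefit[j] is
    True when larger is better for criterion j."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalization
    v = norm * np.asarray(weights, dtype=float)   # weighted matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                # closeness in [0, 1]

# Hypothetical platforms x (performance, reliability, energy use).
scores = topsis([[0.9, 0.8, 30.0],
                 [0.7, 0.9, 20.0],
                 [0.5, 0.6, 45.0]],
                weights=[0.4, 0.3, 0.3],
                benefit=[True, True, False])   # energy: lower is better
print(scores.argmax())
```

Marking energy use as a cost criterion (benefit=False) flips which extreme counts as ideal, which is the step most often gotten wrong in ad hoc implementations.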
28 pages, 6469 KB  
Article
Planning Product Upgrades: A Method for Defining Release Types and Their Strategies for Software-Intensive Products
by Armin Stein, Umut Volkan Kizgin, Mohammad Albittar and Thomas Vietor
Appl. Syst. Innov. 2026, 9(2), 33; https://doi.org/10.3390/asi9020033 - 28 Jan 2026
Viewed by 715
Abstract
The environment of today’s companies is marked by increasing dynamism. Rapid technological developments, strong innovation impulses, and continual market entry of new competitors create volatile conditions that make the delivery of valuable products challenging. Long-term corporate success therefore depends on offering a product portfolio consistently aligned with evolving market needs. Customers expect products that show continuous improvements in performance and functionality over time, making systematic product upgrading a key success factor. Release planning addresses this need by enabling continuous product evolution through planned product upgrades. It focuses on selecting and combining functional units for structured publication within releases. This proactive management of product value offers substantial potential but also demands comprehensive know-how, particularly given rising product complexity and the interplay of multiple technologies. The objective of this work is to develop a methodology that supports effective planning of product upgrades. The method assists in the product-specific selection of release types and the derivation of suitable release strategies. It yields release units defined by product structure and provides recommendations for appropriate release strategies. The methodology is demonstrated through its application to an electric vehicle, illustrating its practical relevance for software-intensive products. Full article
(This article belongs to the Section Industrial and Manufacturing Engineering)
18 pages, 3833 KB  
Article
A Data-Driven Two-Phase Energy Consumption Prediction Method for Injection Compressor Systems in Underground Gas Storage
by Ying Yang, De Tang, Guicheng Yu, Junchi Zhou, Jinsong Yang, Tingting Jiang, Zixu Huang and Jianguo Miao
Appl. Syst. Innov. 2026, 9(2), 32; https://doi.org/10.3390/asi9020032 - 28 Jan 2026
Viewed by 507
Abstract
Since the compressor system in underground gas storage (UGS) facilities operates under highly dynamic and complex injection conditions, traditional rule-based operation and mechanism-based modeling approaches prove inadequate for the stringent requirements of high-accuracy prediction under such variable conditions. To address this, a data-driven two-phase prediction framework for compressor energy consumption is proposed. In the first phase, a convolutional neural network with efficient channel attention (CNN-ECA) is developed to accurately forecast key operating-condition parameters. Based on these outputs, the second phase employs a compressor performance prediction model to estimate unit energy consumption with improved precision. In addition, a hybrid prediction strategy integrating a Transformer architecture is introduced to capture long-range temporal dependencies, thereby enhancing both single-step and multi-step forecasting performance. The proposed method is evaluated using operational data from eight compressors at the Xiangguosi underground gas storage facility. Experimental results show that the framework achieves high prediction accuracy, with a MAPE of 4.0779% (single-step) and 4.2449% (multi-step), outperforming advanced benchmark models. Full article
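The efficient channel attention (ECA) gating at the heart of the first-phase CNN can be illustrated in a few lines. The sketch below is not the authors' implementation: the 1-D convolution kernel is a fixed averaging filter standing in for learned weights, and the feature-map shapes are invented for illustration only.

```python
import numpy as np

def eca_attention(x, k=3):
    """ECA-style channel gating (illustrative sketch, not the paper's model).
    x: feature map of shape (channels, time)."""
    # Squeeze: global average pooling over the temporal axis -> (channels,)
    desc = x.mean(axis=1)
    # Local cross-channel interaction: 1-D convolution of size k (circular pad);
    # a fixed averaging kernel stands in for the learned conv weights
    pad = k // 2
    padded = np.concatenate([desc[-pad:], desc, desc[:pad]])
    scores = np.convolve(padded, np.ones(k) / k, mode="valid")
    # Excite: a sigmoid gate rescales each channel of the input
    gate = 1.0 / (1.0 + np.exp(-scores))
    return x * gate[:, None]

feat = np.random.default_rng(0).normal(size=(8, 100))  # invented shape
out = eca_attention(feat)
print(out.shape)  # (8, 100)
```

The point of ECA (versus full squeeze-and-excitation) is that channel interaction stays local — a size-k 1-D convolution over the pooled channel descriptor — which keeps the attention branch cheap enough for the per-parameter forecasting role described above.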
25 pages, 9359 KB  
Article
A Multi-Stage Algorithm of Fringe Map Reconstruction for Fiber-End Surface Analysis and Non-Phase-Shifting Interferometry
by Ilya Galaktionov and Vladimir Toporovsky
Appl. Syst. Innov. 2026, 9(2), 31; https://doi.org/10.3390/asi9020031 - 27 Jan 2026
Cited by 1 | Viewed by 893
Abstract
Interferometers are essential tools for quality control of optical surfaces. While interferometric techniques like phase-shifting interferometry offer high accuracy, they involve complex setups, require stringent calibration, and are sensitive to phase shift errors, noise, and surface inhomogeneities. In this research, we introduce an alternative algorithm that integrates Moving Average and Fast Fourier Transform (MAFFT) techniques with Polynomial Fitting. The proposed method achieves results comparable to a Zygo interferometer under standard conditions, with an error margin under 2%. It also maintains measurement stability in noisy environments and in the presence of significant local inhomogeneities, operating in real time to enable wavefront measurements at 30 Hz. We have validated the algorithm through simulations assessing noise-induced errors and through experimental comparisons with a Zygo interferometer. Full article
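The core moving-average-plus-FFT idea — smooth the interferogram to suppress noise, then read the dominant fringe frequency off the spectrum — can be sketched in one dimension. This is an illustrative reduction, not the authors' multi-stage algorithm; the synthetic signal and its parameters are invented.

```python
import numpy as np

def dominant_fringe_frequency(intensity, window=5):
    """Estimate the dominant fringe frequency of a 1-D interferogram slice
    via moving-average smoothing followed by an FFT peak search (sketch)."""
    # Moving average suppresses high-frequency noise before the transform
    kernel = np.ones(window) / window
    smoothed = np.convolve(intensity, kernel, mode="same")
    # Remove the DC component so the fringe peak dominates the spectrum
    spectrum = np.abs(np.fft.rfft(smoothed - smoothed.mean()))
    return int(np.argmax(spectrum))  # bin index = fringe periods per record

# Synthetic fringes: 12 periods across 600 samples plus additive noise
x = np.linspace(0, 2 * np.pi * 12, 600)
signal = 1 + np.cos(x) + 0.1 * np.random.default_rng(1).normal(size=600)
print(dominant_fringe_frequency(signal))  # expected near 12
```

In a full reconstruction pipeline this frequency estimate would seed the polynomial fitting stage; here it only demonstrates why the MA + FFT combination is robust to the pixel-level noise mentioned in the abstract.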
(This article belongs to the Section Information Systems)
22 pages, 1893 KB  
Article
A System-Level Decision-Support Framework for Integrated Operating Room and Bed Capacity Planning Under Emergency Uncertainty
by Beshoy Botros, Mohamed Gheith and Amr Eltawil
Appl. Syst. Innov. 2026, 9(2), 30; https://doi.org/10.3390/asi9020030 - 27 Jan 2026
Viewed by 917
Abstract
Coordinating operating room schedules with downstream inpatient bed availability remains a critical challenge for hospitals, particularly under emergency-driven uncertainty. Emergency arrivals introduce variability that propagates congestion across surgical and inpatient systems, reducing elective surgery throughput and resource utilization. Existing approaches often treat operating rooms and inpatient beds as isolated planning problems, limiting the ability to anticipate system-wide congestion effects. This study proposes a system-level decision-support framework that integrates elective operating room scheduling, emergency arrivals, and inpatient bed capacity within a unified stochastic optimization model. Uncertainty in surgical duration and patient length of stay is represented through scenario-based stochastic modeling. Computational experiments examine system performance under varying levels of emergency demand and bed availability. The results identify critical congestion thresholds beyond which elective throughput deteriorates rapidly, highlighting the role of downstream bed constraints in governing system capacity under uncertainty. The proposed framework provides hospital managers with practical insights for coordinated surgical and inpatient capacity planning, bridging operations research optimization with operations management principles at the system level. Full article
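The coupling the abstract describes — emergency arrivals consuming downstream beds and throttling elective throughput — can be caricatured in a few lines of scenario sampling. This is a deliberately minimal toy, not the paper's stochastic optimization model; the uniform emergency distribution and all numbers are invented.

```python
import random

def expected_elective_throughput(n_scenarios, beds, or_slots, seed=0):
    """Toy scenario-based estimate (not the paper's model): emergencies
    pre-empt inpatient beds, and an elective case runs only while both an
    OR slot and a downstream bed remain available."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_scenarios):
        emergencies = rng.randint(0, 6)          # uncertain emergency arrivals
        free_beds = max(0, beds - emergencies)   # emergencies take beds first
        total += min(or_slots, free_beds)        # electives need OR + bed
    return total / n_scenarios

print(expected_elective_throughput(10_000, beds=10, or_slots=8))
```

Even this caricature reproduces the qualitative finding above: shrinking the bed pool (try `beds=6`) collapses elective throughput well before the operating rooms themselves are saturated, which is the congestion-threshold effect the framework is built to anticipate.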
21 pages, 3516 KB  
Article
Visual Navigation Using Depth Estimation Based on Hybrid Deep Learning in Sparsely Connected Path Networks for Robustness and Low Complexity
by Huda Al-Saedi, Pedram Salehpour and Seyyed Hadi Aghdasi
Appl. Syst. Innov. 2026, 9(2), 29; https://doi.org/10.3390/asi9020029 - 27 Jan 2026
Viewed by 790
Abstract
Robot navigation refers to a robot’s ability to determine its position within a reference frame and plan a path to a target location. Visual navigation, which relies on visual sensors such as cameras, is one approach to this problem. Among visual navigation methods, Visual Teach and Repeat (VT&R) techniques are commonly used. To develop an effective robot navigation framework based on the VT&R method, accurate and fast depth estimation of the scene is essential. In recent years, event cameras have attracted significant interest from machine vision researchers due to their numerous advantages and applicability in various environments, including robotics and drones. However, a key gap remains in how these cameras can be integrated into a navigation system. This research uses an attention-based UNET neural network to estimate scene depth from an event camera; the attention-based UNET structure enables accurate depth estimation. This depth information is then used, together with a hybrid deep neural network consisting of a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM), for robot navigation. Simulation results on the DENSE dataset yield an RMSE of 8.15, which is acceptable compared to other similar methods. The method not only provides good accuracy but also operates at high speed, making it suitable for real-time applications and VT&R-based visual navigation. Full article
(This article belongs to the Special Issue AI-Driven Decision Support for Systemic Innovation)
23 pages, 1177 KB  
Article
Scenario-Based Analysis of the Future Technological Trends in the Automotive Sector in Southeast Lower-Saxony
by Armin Stein, Lars Everding, Henrik Münchhausen, Björn Krüger, Bassem Hichri, Maximilian Flormann, Axel Wolfgang Sturm and Thomas Vietor
Appl. Syst. Innov. 2026, 9(2), 28; https://doi.org/10.3390/asi9020028 - 26 Jan 2026
Cited by 1 | Viewed by 1056
Abstract
The automotive industry faces radical technological change, driven by the adoption of electrification, automation, and digitalization. As a leading industrial hub with key OEMs and suppliers, such as Volkswagen, Southeast Lower Saxony is disproportionately impacted by this structural transformation. As a consequence of these trends, the region’s automotive base faces economic uncertainties, local regulatory lag, and technological disruptions. In this study, a scenario-planning methodology is applied to identify three potential mobility futures for 2035: a Best-Case scenario, in which innovation and favorable policies enable a stable growth environment for the local automotive industry; a Trend scenario, marked by incremental yet uneven progress while maintaining the current status quo; and a Worst-Case scenario, defined by economic stagnation and regulatory impediments, leading to a slow degradation of the regional automotive industry. The scenarios are then evaluated based on their impact and probability of occurrence, and individual impact factors are categorized to support future decision-making on a topical basis. This study offers an overview of potential scenarios for the Southeast Lower Saxon automotive industry, supporting strategic decision-making. Full article
(This article belongs to the Section Industrial and Manufacturing Engineering)
18 pages, 626 KB  
Article
Modeling a Reliable Intermodal Routing Problem for Emergency Materials in the Early Stage of Post-Disaster Recovery Under Uncertainty of Demand and Capacity
by Yu Huang, Haochu Cui, Yue Lu and Yan Sun
Appl. Syst. Innov. 2026, 9(2), 27; https://doi.org/10.3390/asi9020027 - 23 Jan 2026
Cited by 1 | Viewed by 754
Abstract
This study investigates an intermodal routing problem for emergency materials in the early stage of post-disaster recovery, in which the rapid transportation of emergency materials is formulated as the objective. To achieve reliable transportation that avoids interruption, this study models the uncertainty of both the demand for emergency materials and the network capacity with LR triangular fuzzy numbers, and formulates the resulting reliable routing problem as a fuzzy linear programming model. Considering decision makers’ cautious attitude toward transporting emergency materials without interruption, this study adopts chance-constrained programming based on the necessity measure to build a solvable reformulation of the proposed model. A numerical case study reveals the conflicting relationship between improving the reliability and reducing the time of transporting emergency materials: the decision-makers of the emergency materials transportation organization should select a reasonable confidence level, based on the actual decision-making scenario, to plan the reliable intermodal route. By comparison with deterministic modeling, this study verifies the feasibility of modeling the uncertainty of both demand and capacity for avoiding unreliable transportation and enhancing the flexibility of intermodal routing for emergency materials. By comparison with chance-constrained programming using the possibility measure, it demonstrates the feasibility of the necessity measure in planning the reliable intermodal route. The study further analyzes how the capacity level of the intermodal network, the demand level of the emergency materials, and the stability of the LR triangular fuzzy parameters influence the optimization results. Accordingly, this study emphasizes the importance of objectively evaluating the uncertain demand for emergency materials, and shows that enhancing the capacity level of the intermodal network and the stability of the LR triangular fuzzy parameters can reduce the transportation time of emergency materials while maintaining high reliability. Full article
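For triangular fuzzy parameters, necessity-based chance constraints admit standard crisp equivalents, which the sketch below illustrates. This is the textbook result for a plain triangular fuzzy number, not necessarily the paper's exact LR formulation, and the numeric values are invented.

```python
def necessity_capacity_bound(a, b, c, alpha):
    """Crisp equivalent of Nec{load <= C} >= alpha for a triangular fuzzy
    capacity C = (a, b, c): the load may use at most a + (1 - alpha)(b - a).
    The right spread (b, c] plays no role on this side of the constraint."""
    return a + (1.0 - alpha) * (b - a)

def necessity_demand_bound(a, b, c, alpha):
    """Crisp equivalent of Nec{D <= supply} >= alpha for a triangular fuzzy
    demand D = (a, b, c): supply must cover at least c - (1 - alpha)(c - b)."""
    return c - (1.0 - alpha) * (c - b)

# Raising the confidence level alpha shrinks usable capacity and inflates the
# required supply, reproducing the reliability-vs-time trade-off noted above.
print(necessity_capacity_bound(80, 100, 120, 0.9))  # about 82
print(necessity_demand_bound(40, 50, 60, 0.9))      # about 59
```

The necessity measure is the pessimistic counterpart of the possibility measure: at the same confidence level it yields tighter crisp bounds, which is why the abstract's necessity-based reformulation produces more cautious (more reliable, slower) routes than a possibility-based one.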