Search Results (87)

Search Parameters:
Keywords = software debugging

21 pages, 8625 KB  
Article
Study on Simulation and Debugging of Electric Vehicle Control System
by Shaobo Wen, Jiacheng Xie, Yipeng Gong, Zhendong Zhao and Sufang Zhao
World Electr. Veh. J. 2026, 17(2), 57; https://doi.org/10.3390/wevj17020057 - 23 Jan 2026
Viewed by 110
Abstract
With the rapid advancement of intelligent technologies in electric vehicles, various control technologies and algorithms are emerging. Most existing research, however, is limited to simulations of single modules such as suspension, braking, and battery management, lacking comprehensive modeling and simulation for the entire vehicle system, which impedes the integrated development and verification of advanced intelligent technologies. Therefore, this article focuses on the vehicle control system of electric vehicles. It first analyzes the overall scheme and clarifies the core functions of system operation control, fault detection, and storage. Subsequently, a data acquisition simulation platform for the vehicle control system is established based on MATLAB/Simulink, creating simulation modules for accelerator pedal, braking pedal, key position, and gear signal, forming a complete vehicle simulation platform. For the established simulation platform, specific electric vehicle model parameters are set, and under the QC/T759 urban driving conditions, simulations of the electric vehicle’s operation are conducted to obtain relevant signals such as vehicle speed, accelerator pedal, and braking pedal, verifying the feasibility of the vehicle control system. Finally, a hardware platform for the entire vehicle power system is built, and based on the PCAN-Explorer5 software, the connection and debugging of the vehicle controller, battery management system, and motor control unit are achieved to obtain the status parameters of each system and debug the vehicle control system, laying the foundation for the actual operation of the pure electric SUV. Through the simulation of the electric vehicle’s control system, the R&D cycle is greatly shortened, development costs are reduced, and a foundation is established for the actual vehicle debugging of electric vehicles. Full article

28 pages, 1655 KB  
Article
Enhancing P Systems for Complex Biological Simulations
by Aya Allah Elsayed, Raquel Ceprián, Ahmed Ibrahem Hafez, Carlos Llorens and José M. Sempere
Appl. Sci. 2026, 16(2), 705; https://doi.org/10.3390/app16020705 - 9 Jan 2026
Viewed by 141
Abstract
Membrane computing, and more specifically P systems, has been a useful tool for simulating biological systems at the biomolecular and cellular levels, as well as in microbial and ecological communities. The need for greater realism in simulations of these systems has grown in recent years, and it has become clear that the standard rules, objects, and structures of P systems are not always sufficient to model certain aspects of biological systems. In particular, some aspects of population dynamics were not faithfully reflected in the P systems underlying these models. In this work, we propose new types of rules that model such aspects of biological systems more realistically. Fundamentally, our proposal focuses on probabilistic parameters that help create probabilistic and stochastic models of biological systems. In addition, given the high complexity of some of these systems, we describe two software tools we have developed that aid in their validation and debugging. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

32 pages, 7978 KB  
Article
A Digital Twin Approach for Spacecraft On-Board Software Development and Testing
by Andrea Colagrossi, Stefano Silvestrini, Andrea Brandonisio and Michèle Lavagna
Aerospace 2026, 13(1), 55; https://doi.org/10.3390/aerospace13010055 - 6 Jan 2026
Viewed by 301
Abstract
The increasing complexity of spacecraft On-Board Software (OBSW) necessitates advanced development and testing methodologies to ensure reliability and robustness. This paper presents a digital twin approach for the development and testing of embedded spacecraft software. The proposed electronic digital twin enables high-fidelity hardware and software simulations of spacecraft subsystems, facilitating a comprehensive validation framework. Through real-time execution, the digital twin supports dynamical simulations with possibility of failure injections, enabling the observation of software behavior under various nominal or fault conditions. This capability allows for thorough debugging and verification of critical software components, including Finite State Machines (FSM), Guidance, Navigation, and Control (GNC) algorithms, and platform and mode management logic. By providing an interactive and iterative environment for software validation in nominal and contingency scenarios, the digital twin reduces the need for extensive Hardware-in-the-Loop (HIL) testing, accelerating the software development life-cycle while improving reliability. The paper discusses the architecture and implementation of the digital twin, along with case studies based on a modular OBSW architecture, demonstrating its effectiveness in identifying and resolving software anomalies. This approach offers a cost-effective and scalable solution for spacecraft software development, enhancing mission safety and performance. Full article

27 pages, 16424 KB  
Article
A Software-Defined Gateway Architecture with Graphical Protocol Modeling for Industrial Control Systems
by Rong Zheng, Song Zheng, Chaoru Liu, Liang Yue and Hongyu Wu
Electronics 2025, 14(22), 4369; https://doi.org/10.3390/electronics14224369 - 8 Nov 2025
Cited by 1 | Viewed by 794
Abstract
Within the context of Industry 4.0, the integration of heterogeneous industrial devices into unified control and supervision systems remains a fundamental challenge due to diversified communication protocols and interfaces. Conventional industrial gateways relying on customized driver development encounter issues such as high protocol extension costs, lengthy development cycles, and limited compatibility, restricting the agility and scalability of modern industrial embedded control systems. This paper proposes a novel paradigm for industrial interoperability gateways based on a software-defined architecture and graphical modeling. Through layered decoupling of software functions, protocol parsing and data conversion functionalities are encapsulated into draggable graphical components, enabling flexible adaptation and efficient debugging of heterogeneous protocols. The gateway middleware GMGbox architecture was designed and validated through an experimental platform. Results demonstrate that the gateway can accurately and concurrently parse multiple heterogeneous protocols via graphical configuration, stably acquire over 3000 data points, support online visual debugging and flexible deployment of protocol logic, and seamlessly integrate final data with upper-layer systems through standard protocols. Case studies of a level control system and a wastewater treatment plant Supervisory Control and Data Acquisition (SCADA) further validate the effectiveness and practical utility of this paradigm in real-world industrial scenarios. The proposed solution provides a novel architectural paradigm for building reconfigurable and maintainable systems in the field of industrial automation. Full article
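The layered decoupling this abstract describes, with protocol parsing encapsulated in swappable components behind a stable gateway core, can be sketched as a minimal parser registry. The class, the registration API, and the toy frame format below are all hypothetical illustrations, not GMGbox's actual interface:

```python
# Minimal sketch of a plugin-style protocol-parser registry, illustrating the
# kind of layered decoupling the abstract describes. All names are hypothetical.
from typing import Callable, Dict

class ParserRegistry:
    """Maps protocol names to parser callables, so new protocols can be
    added or replaced without touching the gateway core."""
    def __init__(self) -> None:
        self._parsers: Dict[str, Callable[[bytes], dict]] = {}

    def register(self, protocol: str, parser: Callable[[bytes], dict]) -> None:
        self._parsers[protocol] = parser

    def parse(self, protocol: str, frame: bytes) -> dict:
        if protocol not in self._parsers:
            raise KeyError(f"no parser registered for {protocol!r}")
        return self._parsers[protocol](frame)

# Example "protocol": a 2-byte big-endian register address followed by a
# 2-byte big-endian value (loosely Modbus-like, purely illustrative).
def parse_toy_frame(frame: bytes) -> dict:
    addr = int.from_bytes(frame[0:2], "big")
    value = int.from_bytes(frame[2:4], "big")
    return {"address": addr, "value": value}

registry = ParserRegistry()
registry.register("toy", parse_toy_frame)
point = registry.parse("toy", bytes([0x00, 0x10, 0x01, 0x02]))
# point == {"address": 16, "value": 258}
```

In a graphical-configuration setting, each registered parser would correspond to one draggable component; the registry is what lets the data-conversion layer stay protocol-agnostic.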

19 pages, 912 KB  
Article
An Integrated Co-Simulation Framework for the Design, Analysis, and Performance Assessment of EIS-Based Measurement Systems for the Online Monitoring of Battery Cells
by Nicola Lowenthal, Roberta Ramilli, Marco Crescentini and Pier Andrea Traverso
Batteries 2025, 11(10), 351; https://doi.org/10.3390/batteries11100351 - 26 Sep 2025
Viewed by 887
Abstract
Electrochemical impedance spectroscopy (EIS) is widely used at the laboratory level for monitoring/diagnostics of battery cells, but the design and validation of in situ, online measurement systems based on EIS face challenges due to complex hardware–software interactions and non-idealities. This study aims to develop an integrated co-simulation framework to support the design, debugging, and validation of EIS measurement systems devoted to the online monitoring of battery cells, helping to predict experimental results and identify/correct the non-ideality effects and sources of uncertainty. The proposed framework models both the hardware and software components of an EIS-based system to simulate and analyze the impedance measurement process as a whole. It takes into consideration the effects of physical non-idealities on the hardware–software interactions and how those affect the final impedance estimate, offering a tool to refine designs and interpret test results. For validation purposes, the proposed general framework is applied to a specific EIS-based laboratory prototype, previously designed by the research group. The framework is first used to debug the prototype by uncovering hidden non-idealities, thus refining the measurement system, and then employed as a digital model of the latter for fast development of software algorithms. Finally, the results of the co-simulation framework are compared against a theoretical model, the real prototype, and a benchtop instrument to assess the global accuracy of the framework. Full article
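The core operation such a co-simulation must reproduce, estimating impedance at one excitation frequency from sampled voltage and current records, can be illustrated with a one-bin DFT. The ideal-resistor "cell" and all names below are illustrative assumptions, not the authors' framework:

```python
# Illustrative sketch (not the authors' framework): estimate the impedance of
# a simulated cell at a single excitation frequency as the ratio of the
# voltage and current phasors, each extracted with a one-bin DFT.
import cmath
import math

def single_bin_dft(samples, k):
    """Phasor of bin k; assumes the record spans an integer number of periods."""
    n = len(samples)
    return sum(s * cmath.exp(-2j * math.pi * k * i / n)
               for i, s in enumerate(samples)) / n

# Simulated measurement: sinusoidal current through an ideal 50 mOhm resistor,
# sampled over exactly 4 periods (bin k = 4).
n, k, r_true = 1024, 4, 0.050
current = [math.sin(2 * math.pi * k * i / n) for i in range(n)]
voltage = [r_true * s for s in current]

z_est = single_bin_dft(voltage, k) / single_bin_dft(current, k)
# |z_est| ~= 0.050 with ~zero phase: a purely resistive impedance is recovered
```

A co-simulation framework would replace the ideal resistor with a hardware model including non-idealities (front-end gain error, sampling jitter, noise) and check how far `z_est` drifts from the known ground truth.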

13 pages, 20004 KB  
Article
Availability Optimization of IoT-Based Online Laboratories: A Microprocessors Laboratory Implementation
by Luis Felipe Zapata-Rivera
Laboratories 2025, 2(3), 18; https://doi.org/10.3390/laboratories2030018 - 28 Aug 2025
Viewed by 932
Abstract
Online laboratories have emerged as a viable alternative for providing hands-on experience to engineering students, especially in fields related to computer, software, and electrical engineering. In particular, remote laboratories enable users to interact in real time with physical hardware via the internet. However, current remote laboratory systems often restrict access to a single user per session, limiting broader participation. Embedded systems laboratory activities have traditionally relied on in-person instruction and direct interaction with hardware, requiring significant time for code development, compilation, and hardware testing. Students typically spend an important portion of each session coding and compiling programs, with the remaining time dedicated to hardware implementation, data collection, and report preparation. This paper proposes a remote laboratory implementation that optimizes remote laboratory stations’ availability, allowing users to lock the system only during the project debugging and testing phases while freeing the remote laboratory station for other users during the code development phase. The implementation presented here was developed for a microprocessor laboratory course. It enables users to code the solution in their preferred local or remote environments, then upload the resulting source code to the remote laboratory hardware for cross-compiling, execution, and testing. This approach enhances usability, scalability, and accessibility while preserving the core benefits of hands-on experimentation and collaboration in online embedded systems education. Full article

44 pages, 901 KB  
Article
MetaFFI-Multilingual Indirect Interoperability System
by Tsvi Cherny-Shahar and Amiram Yehudai
Software 2025, 4(3), 21; https://doi.org/10.3390/software4030021 - 26 Aug 2025
Viewed by 1341
Abstract
The development of software applications using multiple programming languages has increased in recent years, as it allows the selection of the most suitable language and runtime for each component of the system and the integration of third-party libraries. However, this practice involves complexity and error proneness, due to the absence of an adequate system for the interoperability of multiple programming languages. Developers are compelled to resort to workarounds, such as library reimplementation or language-specific wrappers, which are often dependent on C as the common denominator for interoperability. These challenges render the use of multiple programming languages a burdensome and demanding task that necessitates highly skilled developers for implementation, debugging, and maintenance, and raise doubts about the benefits of interoperability. To overcome these challenges, we propose MetaFFI, introducing a fully in-process, plugin-oriented, runtime-independent architecture based on a minimal C abstraction layer. It provides deep binding without relying on a shared object model, virtual machine bytecode, or manual glue code. This architecture is scalable (O(n) integration for n languages) and supports true polymorphic function and object invocation across languages. MetaFFI is based on leveraging FFI and embedding mechanisms, which minimize restrictions on language selection while still enabling full-duplex binding and deep integration. This is achieved by exploiting the less restrictive shallow binding mechanisms (e.g., Foreign Function Interface) to offer deep binding features (e.g., object creation, methods, fields). MetaFFI provides a runtime-independent framework to load and xcall (Cross-Call) foreign entities (e.g., getters, functions, objects). MetaFFI uses Common Data Types (CDTs) to pass parameters and return values, including objects and complex types, and even cross-language callbacks and dynamic calling conventions for optimization. 
The indirect interoperability approach of MetaFFI has the significant advantage of requiring only 2n mechanisms to support n languages, compared to direct interoperability approaches that need n² mechanisms. We developed and tested a proof-of-concept tool interoperating three languages (Go, Python, and Java) on Windows and Ubuntu. To evaluate the approach and the tool, we conducted a user study, with promising results. The MetaFFI framework is available as open source software, including its full source code and installers, to facilitate adoption and collaboration across academic and industrial communities. Full article
(This article belongs to the Topic Software Engineering and Applications)

19 pages, 1175 KB  
Article
Empirical Evaluation of Prompting Strategies for Python Syntax Error Detection with LLMs
by Norah Aloufi and Abdulmajeed Aljuhani
Appl. Sci. 2025, 15(16), 9223; https://doi.org/10.3390/app15169223 - 21 Aug 2025
Cited by 1 | Viewed by 2111
Abstract
As large language models (LLMs) are increasingly integrated into software development, there is a growing need to assess how effectively they address subtle programming errors in real-world environments. Accordingly, this study investigates the effectiveness of LLMs in identifying syntax errors within large Python code repositories. Building on the bug in the code stack (BICS) benchmark, this research expands the evaluation to include additional models, such as DeepSeek and Grok, while assessing their ability to detect errors across varying code lengths and depths. Two prompting strategies—two-shot and role-based prompting—were employed to compare the performance of models including DeepSeek-Chat, DeepSeek-Reasoner, DeepSeek-Coder, and Grok-2-Latest with GPT-4o serving as the baseline. The findings indicate that the DeepSeek models generally outperformed GPT-4o in terms of accuracy (Acc). Notably, DeepSeek-Reasoner exhibited the highest overall performance, achieving an Acc of 86.6% and surpassing all other models, particularly when integrated prompting strategies were used. Nevertheless, all models demonstrated decreased Acc with increasing input length and consistently struggled with certain types of errors, such as missing quotations (MQo). This work provides insight into the current strengths and weaknesses of LLMs within real-world debugging environments, thereby informing ongoing efforts to improve automated software tools. Full article
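The mechanics of a two-shot prompt, the first of the two strategies compared, amount to plain string assembly: two worked input/answer pairs followed by the query snippet. The instruction text and examples below are hypothetical stand-ins, not the BICS benchmark's actual prompts:

```python
# Hypothetical sketch of two-shot prompt assembly for syntax-error detection.
# The instruction wording and worked examples are invented for illustration.
EXAMPLES = [
    ("print('hello'", "Line 1: missing closing parenthesis"),
    ("if x == 1\n    pass", "Line 1: missing colon after the if condition"),
]

def build_two_shot_prompt(code: str) -> str:
    parts = ["You are a Python reviewer. Report the single syntax error."]
    for snippet, answer in EXAMPLES:          # the two "shots"
        parts.append(f"Code:\n{snippet}\nAnswer: {answer}")
    parts.append(f"Code:\n{code}\nAnswer:")   # the query, left unanswered
    return "\n\n".join(parts)

prompt = build_two_shot_prompt("x = [1, 2, 3")
# prompt contains both worked examples, then the query snippet, ending "Answer:"
```

A role-based variant would change only the leading instruction (e.g., assigning the model a persona), which is why the two strategies can be compared on identical code inputs.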

26 pages, 643 KB  
Article
MetaGAN: Metamorphic GAN-Based Augmentation for Improving Deep Learning-Based Multiple-Fault Localization Without Test Oracles
by Anlin Hu, Wenjiang Feng, Xudong Zhu, Junjie Wang, Yiping Ao and Hao Feng
Electronics 2025, 14(13), 2596; https://doi.org/10.3390/electronics14132596 - 27 Jun 2025
Viewed by 819
Abstract
Modern electronic information system software is becoming increasingly complex, making manual debugging prohibitively expensive and necessitating automated fault localization (FL) methods to prioritize suspicious code segments. While Single-Fault Localization (SFL) methods, such as spectrum-based fault localization (SBFL) and Deep Learning-Based Fault Localization (DLFL), have demonstrated promising results in localizing individual faults, extending these methods to multiple-fault scenarios remains challenging. Existing DLFL methods combine metamorphic testing and clustering to locate multiple faults without relying on test oracles. However, these approaches suffer from a severe class imbalance problem: the number of failed cases (the minority class) is far smaller than that of passed cases (the majority class). To address this issue, we propose MetaGAN: Metamorphic GAN-based Augmentation for Improving Deep Learning-based Multiple-Fault Localization Without Test Oracles. MetaGAN is a novel method that integrates Metamorphic Testing (MT), clustering-based fault isolation, and Generative Adversarial Networks (GANs). The method first utilizes MT to gather information from failed Metamorphic Test Groups (MTGs) and extracts metamorphic features that capture the underlying failure causes to represent each failed MTG; then, these features are used to cluster the failed MTGs into several groups, with each group forming an independent single-fault debugging session; finally, in each session, data augmentation is performed by combining MT with a GAN model to generate failed test cases (the minority class) until their number matches that of passed test cases (the majority class), thereby balancing the dataset for precise DLFL-based fault localization and enabling parallel debugging of multiple faults.
Extensive experimental validation on an expanded open-source benchmark shows that, compared with the baseline MetaMDFL, MetaGAN significantly improves fault localization accuracy, particularly in parallel multiple-fault scenarios. Specifically, MetaGAN achieves significant improvements in both the EXAM and the rank metrics, with EXAM showing the highest improvement of 7.81%, the rank showing the highest improvement of 12.71%, and the top-N% showing the highest improvement of 9.62%. This method, through coordinated dynamic feature extraction, adaptive data augmentation, and distributed collaborative debugging, provides a scalable solution for complex systems where test oracles are unavailable, thereby advancing state-of-the-art methods. Full article
(This article belongs to the Special Issue Software Analysis, Quality, and Security)
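For context on the SBFL baseline family named in the abstract above, the classic Ochiai suspiciousness score can be sketched in a few lines. This is the textbook formula, not MetaGAN itself, and the coverage matrix is a made-up example:

```python
# Sketch of spectrum-based fault localization (SBFL) with the Ochiai metric:
# a statement is suspicious when it is executed by many failing tests and
# few passing ones. Textbook formula; the data below is invented.
import math

def ochiai(coverage, outcomes):
    """coverage[t][s] = 1 if test t executed statement s;
    outcomes[t] = True if test t failed. Returns one score per statement."""
    total_failed = sum(outcomes)
    scores = []
    for s in range(len(coverage[0])):
        ef = sum(1 for t, failed in enumerate(outcomes) if failed and coverage[t][s])
        ep = sum(1 for t, failed in enumerate(outcomes) if not failed and coverage[t][s])
        denom = math.sqrt(total_failed * (ef + ep))
        scores.append(ef / denom if denom else 0.0)
    return scores

# 3 statements, 4 tests; statement 1 is covered by exactly the failing tests.
cov = [[1, 1, 0],
       [1, 1, 0],
       [1, 0, 1],
       [0, 0, 1]]
out = [True, True, False, False]
scores = ochiai(cov, out)
# statement 1 scores 1.0 and ranks most suspicious
```

The class-imbalance problem MetaGAN targets is visible even here: with only two failing tests, `ef` counts are noisy, which is what motivates generating additional failing cases before ranking.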

22 pages, 2535 KB  
Article
Research on a Secure and Reliable Runtime Patching Method for Cyber–Physical Systems and Internet of Things Devices
by Zesheng Xi, Bo Zhang, Aniruddha Bhattacharjya, Yunfan Wang and Chuan He
Symmetry 2025, 17(7), 983; https://doi.org/10.3390/sym17070983 - 21 Jun 2025
Viewed by 1520
Abstract
Recent advances in technologies such as blockchain, the Internet of Things (IoT), Cyber–Physical Systems (CPSs), and the Industrial Internet of Things (IIoT) have driven the digitalization and intelligent transformation of modern industries. However, embedded control devices within power system communication infrastructures have become increasingly susceptible to cyber threats due to escalating software complexity and extensive network exposure. Conventional patching techniques—both static and dynamic—often fail to satisfy the stringent requirements of real-time responsiveness and computational efficiency in the resource-constrained environments of power grids. To address this limitation, we propose a hardware-assisted runtime patching framework tailored for embedded systems in critical power system networks. Our method integrates binary-level vulnerability modeling, execution-trace-driven fault localization, and lightweight patch synthesis, enabling dynamic, in-place code redirection without disrupting ongoing operations. By constructing a system-level instruction flow model, the framework leverages on-chip debug registers to deploy patches at runtime, ensuring minimal operational impact. Experimental evaluations within a simulated substation communication architecture show that the proposed approach reduces patch latency by 92% over static techniques while incurring less than 3% CPU overhead. This work offers a scalable, real-time, model-driven defense strategy that enhances the cyber–physical resilience of embedded systems in modern power systems, contributing new insights into the intersection of runtime security and grid infrastructure reliability. Full article
(This article belongs to the Section Computer)

28 pages, 2380 KB  
Article
A Unified Framework for Automated Testing of Robotic Process Automation Workflows Using Symbolic and Concolic Analysis
by Ciprian Paduraru, Marina Cernat and Adelina-Nicoleta Staicu
Machines 2025, 13(6), 504; https://doi.org/10.3390/machines13060504 - 9 Jun 2025
Cited by 1 | Viewed by 2534
Abstract
Robotic Process Automation is a technology that replicates human interactions with user interfaces across various applications. However, testing Robotic Process Automation implementations remains challenging due to the dynamic nature of workflows. This paper presents a novel testing framework that first integrates symbolic execution and concolic testing strategies to enhance Robotic Process Automation workflow validation. Building on insights from these methods, we introduce a hybrid approach that optimizes test coverage and efficiency in specific cases. Our open-source implementation demonstrates that automated testing in the Robotic Process Automation domain significantly improves coverage, reduces manual effort, and enhances reliability. Furthermore, the proposed solution supports multiple Robotic Process Automation platforms and aligns with industry best practices for user interface automation testing. Experimental evaluation, conducted in collaboration with industry, validates the effectiveness of our approach. Full article
(This article belongs to the Special Issue Recent Developments in Machine Design, Automation and Robotics)

13 pages, 817 KB  
Article
Evaluating the Predictive Power of Software Metrics for Fault Localization
by Issar Arab, Kenneth Magel and Mohammed Akour
Computers 2025, 14(6), 222; https://doi.org/10.3390/computers14060222 - 6 Jun 2025
Viewed by 1847
Abstract
Fault localization remains a critical challenge in software engineering, directly impacting debugging efficiency and software quality. This study investigates the predictive power of various software metrics for fault localization by framing the task as a multi-class classification problem and evaluating it using the Defects4J dataset. We fitted thousands of models and benchmarked different algorithms—including deep learning, Random Forest, XGBoost, and LightGBM—to choose the best-performing model. To enhance model transparency, we applied explainable AI techniques to analyze feature importance. The results revealed that test suite metrics consistently outperform static and dynamic metrics, making them the most effective predictors for identifying faulty classes. These findings underscore the critical role of test quality and coverage in automated fault localization. By combining machine learning with transparent feature analysis, this work delivers practical insights to support more efficient debugging workflows. It lays the groundwork for an iterative process that integrates metric-based predictive models with large language models (LLMs), enabling future systems to automatically generate targeted test cases for the most fault-prone components, which further enhances the automation and precision of software testing. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

17 pages, 4831 KB  
Article
Achieving Low-Latency, High-Throughput Online Partial Particle Identification for the NA62 Experiment Using FPGAs and Machine Learning
by Pierpaolo Perticaroli, Roberto Ammendola, Andrea Biagioni, Carlotta Chiarini, Andrea Ciardiello, Paolo Cretaro, Ottorino Frezza, Francesca Lo Cicero, Michele Martinelli, Roberto Piandani, Luca Pontisso, Mauro Raggi, Cristian Rossi, Francesco Simula, Matteo Turisini, Piero Vicini and Alessandro Lonardo
Electronics 2025, 14(9), 1892; https://doi.org/10.3390/electronics14091892 - 7 May 2025
Cited by 2 | Viewed by 1046
Abstract
FPGA-RICH is an FPGA-based online partial particle identification system for the NA62 experiment employing AI techniques. Integrated between the readout of the Ring Imaging Cherenkov detector (RICH) and the low-level trigger processor (L0TP+), FPGA-RICH implements a fast pipeline to process in real-time the RICH raw hit data stream, producing trigger primitives containing elaborate physics information—e.g., the number of charged particles in a physics event—that L0TP+ can use to improve trigger decision efficiency. Deployed on a single FPGA, the system combines classical online processing with a compact Neural Network algorithm to achieve efficient event classification while managing the challenging ∼10 MHz throughput requirement of NA62. The streaming pipeline ensures ∼1 μs latency, comparable to that of the NA62 detectors, allowing its seamless integration in the existing TDAQ setup as an additional detector. Development leverages High-Level Synthesis (HLS) and the open-source hls4ml package software–hardware codesign workflow, enabling fast and flexible reprogramming, debugging, and performance optimization. We describe the implementation of the full processing pipeline, the Neural Network classifier, their functional validation, performance metrics and the system’s current status and outlook. Full article
(This article belongs to the Special Issue Emerging Applications of FPGAs and Reconfigurable Computing System)

19 pages, 1903 KB  
Article
A Centralized Approach to the Logging Mechanisms of Distributed Complex ERP Applications
by Cosmin Strilețchi, Petre G. Pop and Christian Gavrilă
Information 2025, 16(3), 216; https://doi.org/10.3390/info16030216 - 11 Mar 2025
Cited by 1 | Viewed by 1045
Abstract
Complex software applications traverse a multitude of running or idling states that depend on their implementation phase (development, testing, debugging, and exploitation). The applications can be monitored by having them produce descriptive messages, with the corresponding data logged and usually marked according to meaning or severity (info, debug, warning, error, and fatal). Our software platform (Crosweb) provides generic tools for implementing complex Enterprise Resource Planning (ERP) applications and has its component software modules divided into several levels, the main ones being responsible for the infrastructure, data management, business logic, and user interfaces. Any of these components can produce logging messages, and the reporting methods can vary according to the place they occupy in the software hierarchy. The physical location of each software component can differ, with their running environments often distributed across several computing systems connected via various communication protocols. All these factors add complexity to the inspection of logged data. This paper presents a solution that centralizes the logging information issued by Crosweb components, ensuring better exposure of the associated information and simplifying the resolution of the reported problems. Full article
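The centralization pattern the abstract describes can be approximated with Python's standard logging hierarchy: component loggers named under a common prefix propagate their severity-tagged records to one shared handler. This is a generic stdlib sketch with hypothetical component names, not Crosweb's actual mechanism:

```python
# Generic sketch of centralized, severity-tagged logging across components
# using the stdlib logging hierarchy (not Crosweb's actual mechanism).
import logging

class CentralLogStore(logging.Handler):
    """Collects records from every component logger in one place."""
    def __init__(self) -> None:
        super().__init__()
        self.records = []

    def emit(self, record: logging.LogRecord) -> None:
        self.records.append((record.name, record.levelname, record.getMessage()))

store = CentralLogStore()
central = logging.getLogger("erp")       # shared ancestor for all components
central.setLevel(logging.DEBUG)
central.addHandler(store)

# Component loggers propagate to the shared "erp" ancestor by name hierarchy,
# so no component needs to know where the central store lives.
logging.getLogger("erp.data").debug("cache warmed")
logging.getLogger("erp.business").warning("stock below threshold")
logging.getLogger("erp.ui").error("render failed")

# store.records now holds all three messages with component name and severity
```

In a distributed deployment the handler would forward records over the network instead of appending to a list, but the key idea is the same: components emit locally, and one sink sees everything.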

16 pages, 510 KB  
Article
Crashing Fault Residence Prediction Using a Hybrid Feature Selection Framework from Multi-Source Data
by Xiao Liu, Xianmei Fang, Song Sun, Yangchun Gao, Dan Yang and Meng Yan
Appl. Sci. 2025, 15(5), 2635; https://doi.org/10.3390/app15052635 - 28 Feb 2025
Viewed by 968
Abstract
The inherent complexity of modern software frequently leads to critical issues such as defects, performance degradation, and system failures. Among these, system crashes pose a severe threat to reliability, as they demand rapid fault localization to minimize downtime and restore functionality. A critical step of fault localization is predicting the residence of crashing faults, i.e., determining whether a fault is located within the stack trace or outside it. This task plays a crucial role in software quality assurance by enhancing debugging efficiency and reducing testing costs. This study introduces SCM, a two-stage composite feature selection framework designed to address this challenge. The SCM framework integrates spectral clustering for feature grouping, which organizes highly correlated features into clusters while reducing redundancy and capturing non-linear relationships. Maximal information coefficient analysis is then applied to rank features within each cluster and select the most relevant ones, forming an optimized feature subset. Finally, a decision tree classifier predicts the residence of crashing faults. Extensive experiments on seven open-source software projects show that the SCM framework outperforms seven baseline methods (four classifiers and three ranking approaches) across four evaluation metrics: F-measure, g-mean, MCC, and AUC. These results highlight its potential for improving fault localization. Full article
