
Appl. Syst. Innov., Volume 8, Issue 4 (August 2025) – 16 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
22 pages, 2652 KiB  
Article
Niching-Driven Divide-and-Conquer Hill Exploration
by Junchen Wang, Changhe Li and Yiya Diao
Appl. Syst. Innov. 2025, 8(4), 101; https://doi.org/10.3390/asi8040101 - 22 Jul 2025
Abstract
Optimization problems often feature local optima with significant differences in the basin of attraction (BoA), making evolutionary computation methods prone to discarding solutions located in less-attractive BoAs and thereby posing challenges to the search for optima in these BoAs. To enhance the ability to find these optima, various niching methods have been proposed to restrict the competition scope of individuals to their specific neighborhoods. However, these methods can only promote redundant searches in more-attractive BoAs and necessary searches in less-attractive BoAs simultaneously. To address this issue, we propose a general framework for niching methods named niching-driven divide-and-conquer hill exploration (NDDCHE). By gradually learning BoAs from the search results of a niching method and dividing the problem into subproblems with a much smaller number of optima, NDDCHE aims to bring a more balanced distribution of searches across the BoAs of the optima found so far, and thus enhance the niching method’s ability to find optima in less-attractive BoAs. Through experiments in which niching methods with different categories of niching techniques are integrated with NDDCHE and tested on problems with significant differences in BoA size, the effectiveness and generalization ability of NDDCHE are demonstrated. Full article
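The niche-restricted competition the abstract describes can be illustrated with the classical "clearing" procedure, one of the niching techniques such frameworks build on. This is a generic 1-D sketch, not NDDCHE itself; the population, fitness values, and radius below are invented:

```python
def clearing(population, fitness, radius=0.5, capacity=1):
    """One 'clearing' niching step on a 1-D population: the best individual of
    each niche (a ball of the given radius) keeps its fitness; surplus
    individuals in the same niche are cleared (fitness set to -inf), so
    competition is restricted to neighborhoods instead of the whole population."""
    order = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
    cleared = list(fitness)
    winners = []
    for i in order:
        if cleared[i] == float("-inf"):
            continue
        winners.append(i)  # i is the best remaining individual: a niche centre
        kept = 1
        for j in order:
            if (j != i and cleared[j] != float("-inf")
                    and abs(population[i] - population[j]) <= radius):
                kept += 1
                if kept > capacity:
                    cleared[j] = float("-inf")
    return winners, cleared

# Two niches around x ~ 0.1 and x ~ 2.0; one survivor remains per niche.
population = [0.10, 0.15, 2.00, 2.05]
fitness = [0.9, 0.8, 0.7, 0.6]
winners, cleared = clearing(population, fitness)
```

Note how the weaker solution near x = 2.0 survives even though it would lose a global competition: that is the behavior NDDCHE builds on.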

25 pages, 1344 KiB  
Article
Cloud-Based Data-Driven Framework for Optimizing Operational Efficiency and Sustainability in Tube Manufacturing
by Michael Maiko Matonya and István Budai
Appl. Syst. Innov. 2025, 8(4), 100; https://doi.org/10.3390/asi8040100 - 22 Jul 2025
Abstract
Modern manufacturing strives for peak efficiency while facing pressing demands for environmental sustainability. Balancing these often-conflicting objectives represents a fundamental trade-off in modern manufacturing, as traditional methods typically address them in isolation, leading to suboptimal outcomes. Process mining offers operational insights but often lacks dynamic environmental indicators, while standard Life Cycle Assessment (LCA) provides environmental evaluation but uses static data unsuitable for real-time optimization. Frameworks integrating real-time data for dynamic multi-objective optimization are scarce. This study proposes a comprehensive, data-driven, cloud-based framework that overcomes these limitations. It uniquely combines three key components: (1) real-time Process Mining for actual workflows and operational KPIs; (2) dynamic LCA using live sensor data for instance-level environmental impacts (energy, emissions, and waste); and (3) Multi-Objective Optimization (NSGA-II) to identify Pareto-optimal solutions balancing efficiency and sustainability. TOPSIS assists decision-making by ranking these solutions. Validated using extensive real-world data from a tube manufacturing facility processing over 390,000 events, the framework demonstrated significant, quantifiable improvements. The optimization yielded a Pareto front of solutions that surpassed baseline performance (87% efficiency; 2007.5 kg CO2/day). The optimal balanced solution identified by TOPSIS simultaneously increased operational efficiency by 5.1% and reduced carbon emissions by 12.4%. Further analysis quantified the efficiency-sustainability trade-offs and confirmed the framework’s adaptability to varying strategic priorities through sensitivity analysis.
This research offers a validated framework for industrial applications that enables manufacturers to improve both operational efficiency and environmental sustainability in a unified manner, moving beyond the limitations of disconnected tools. The validated integrated framework provides a powerful, data-driven tool, recommended as a valuable approach for industrial applications seeking continuous improvement in both economic and environmental performance dimensions. Full article
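The TOPSIS ranking step mentioned above can be sketched in a few lines: vector-normalize the decision matrix, apply criterion weights, and score each Pareto solution by relative closeness to the ideal point. The Pareto front and weights below are invented illustrative numbers, not the study's data:

```python
def topsis(rows, weights, benefit):
    """Rank alternatives by TOPSIS: vector-normalize each criterion, apply
    weights, then score every alternative by its relative closeness to the
    ideal point (farther from the anti-ideal is better)."""
    m, n = len(rows), len(rows[0])
    norms = [sum(r[j] ** 2 for r in rows) ** 0.5 for j in range(n)]
    v = [[weights[j] * r[j] / norms[j] for j in range(n)] for r in rows]
    col = lambda j: [v[i][j] for i in range(m)]
    ideal = [max(col(j)) if benefit[j] else min(col(j)) for j in range(n)]
    worst = [min(col(j)) if benefit[j] else max(col(j)) for j in range(n)]
    scores = []
    for row in v:
        d_pos = sum((row[j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
        d_neg = sum((row[j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical Pareto front as [efficiency %, kg CO2/day]: efficiency is a
# benefit criterion, emissions a cost criterion. Numbers are made up.
front = [[87.0, 1700.0], [91.4, 1850.0], [95.0, 2100.0]]
scores = topsis(front, [0.5, 0.5], [True, False])
```

Changing the weights shifts the preferred compromise along the front, which is how the study's sensitivity analysis to strategic priorities can be read.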

32 pages, 1156 KiB  
Article
A Study of the Response Surface Methodology Model with Regression Analysis in Three Fields of Engineering
by Hsuan-Yu Chen and Chiachung Chen
Appl. Syst. Innov. 2025, 8(4), 99; https://doi.org/10.3390/asi8040099 - 21 Jul 2025
Abstract
Researchers conduct experiments to discover factors influencing the experimental subjects, so the experimental design is essential. The response surface methodology (RSM) is a special experimental design used to evaluate factors significantly affecting a process and determine the optimal conditions for different factors. The relationship between response values and influencing factors is mainly established using regression analysis techniques. These equations are then used to generate contour and surface response plots to provide researchers with further insights. The impact of regression techniques on RSM model building has not been studied in detail. This study uses complete regression techniques to analyze sixteen datasets from the literature on semiconductor manufacturing, steel materials, and nanomaterials. Whether each variable significantly affected the response value was assessed using backward elimination and a t-test. The complete regression techniques used in this study included considering the significant influencing variables of the model, testing for normality and constant variance, using predictive performance criteria, and examining influential data points. The results of this study revealed some problems with model building in RSM studies in the literature from three engineering fields, including the direct use of complete equations without statistical testing, the deletion of variables with p-values above a preset value without further examination, the existence of non-normality and non-constant variance conditions in the datasets without testing, and the presence of some influential data points without examination. Researchers should strengthen training in regression techniques to enhance the RSM model-building process. Full article
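The backward-elimination-by-t-test procedure discussed above can be sketched as follows. This is a minimal illustration that uses a |t| >= 2 cutoff as a rough stand-in for a p < 0.05 threshold (avoiding a t-distribution table); the synthetic quadratic RSM design and all names are hypothetical:

```python
import random

def ols(X, y):
    """Least squares via the normal equations (Gauss-Jordan on an augmented
    matrix, which also yields (X'X)^-1 for standard errors); returns the
    coefficients and their t-statistics."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    c = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    M = [A[j][:] + [1.0 if k == j else 0.0 for k in range(p)] + [c[j]] for j in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [x / d for x in M[col]]
        for r in range(p):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [x - f * e for x, e in zip(M[r], M[col])]
    beta = [M[j][-1] for j in range(p)]
    resid = [y[i] - sum(X[i][j] * beta[j] for j in range(p)) for i in range(n)]
    s2 = sum(e * e for e in resid) / (n - p)          # residual variance
    t = [beta[j] / (s2 * M[j][p + j]) ** 0.5 for j in range(p)]
    return beta, t

def backward_eliminate(X, y, names, t_crit=2.0):
    """Drop the weakest term (smallest |t|) one at a time until every
    remaining term is significant; the intercept (column 0) is never dropped."""
    keep = list(range(len(names)))
    while True:
        beta, t = ols([[row[j] for j in keep] for row in X], y)
        cand = [(abs(t[k]), k) for k in range(len(keep)) if keep[k] != 0]
        worst = min(cand)
        if worst[0] >= t_crit:
            return [names[j] for j in keep], beta
        keep.pop(worst[1])

# Synthetic RSM design: full quadratic model in x1, x2 on a 5x5 grid, with
# the response depending only on x1 and x1^2 plus small noise.
random.seed(0)
grid = [(a, b) for a in (-2, -1, 0, 1, 2) for b in (-2, -1, 0, 1, 2)]
names = ["1", "x1", "x2", "x1^2", "x2^2", "x1*x2"]
X = [[1.0, a, b, a * a, b * b, a * b] for a, b in grid]
y = [2 + 3 * a + 1.5 * a * a + random.gauss(0, 0.05) for a, b in grid]
kept, beta = backward_eliminate(X, y, names)
```

The point the abstract makes is visible here: keeping the full equation, or deleting terms without re-fitting and re-testing, gives a different (and worse-behaved) model than stepwise elimination with diagnostics.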

25 pages, 4186 KiB  
Review
Total Productive Maintenance and Industry 4.0: A Literature-Based Path Toward a Proposed Standardized Framework
by Zineb Mouhib, Maryam Gallab, Safae Merzouk, Aziz Soulhi and Mario Dinardo
Appl. Syst. Innov. 2025, 8(4), 98; https://doi.org/10.3390/asi8040098 - 21 Jul 2025
Abstract
In the context of Industry 4.0, Total Productive Maintenance (TPM) is undergoing a major shift driven by digital technologies such as the IoT, AI, cloud computing, and Cyber–Physical systems. This study explores how these technologies reshape traditional TPM pillars and practices through a two-phase methodology: bibliometric analysis, which reveals global research trends, key contributors, and emerging themes, and a systematic review, which discusses how core TPM practices are being transformed by advanced technologies. It also identifies key challenges of this transition, including data aggregation, a lack of skills, and resistance. However, despite the growing body of research on digital TPM, a major gap persists: the lack of a standardized model applicable across industries. Existing approaches are often fragmented or too context-specific, limiting scalability. Addressing this gap requires a structured approach that aligns technological advancements with TPM’s foundational principles. Taking a cue from these findings, this article formulates a systematic and scalable framework for TPM 4.0 deployment. The framework is based on four pillars: modular technological architecture, phased deployment, workforce integration, and standardized performance indicators. The ultimate goal is to provide a basis for a universal digital TPM standard that enhances the efficiency, resilience, and efficacy of smart maintenance systems. Full article
(This article belongs to the Section Industrial and Manufacturing Engineering)

27 pages, 2527 KiB  
Review
A Systematic Review of Responsible Artificial Intelligence Principles and Practice
by Lakshitha Gunasekara, Nicole El-Haber, Swati Nagpal, Harsha Moraliyage, Zafar Issadeen, Milos Manic and Daswin De Silva
Appl. Syst. Innov. 2025, 8(4), 97; https://doi.org/10.3390/asi8040097 - 21 Jul 2025
Abstract
The accelerated development of Artificial Intelligence (AI) capabilities and systems is driving a paradigm shift in productivity, innovation and growth. Despite this generational opportunity, AI is fraught with significant challenges and risks. To address these challenges, responsible AI has emerged as a modus operandi that ensures protections while not stifling innovations. Responsible AI minimizes risks to people, society, and the environment. However, responsible AI principles and practice are impacted by ‘principle proliferation’ as they are diverse and distributed across the applications, stakeholders, risks, and downstream impact of AI systems. This article presents a systematic review of responsible AI principles and practice with the objectives of discovering the current state, the foundations and the need for responsible AI, followed by the principles of responsible AI, and translation of these principles into the responsible practice of AI. Starting with 22,711 relevant peer-reviewed articles from comprehensive bibliographic databases, the review filters through to 9700 at de-duplication, 5205 at abstract screening, 1230 at semantic screening and 553 at final full-text screening. The analysis of this final corpus is presented as six findings that contribute towards the increased understanding and informed implementation of responsible AI. Full article

26 pages, 3622 KiB  
Article
Shear Strength Prediction for RCDBs Utilizing Data-Driven Machine Learning Approach: Enhanced CatBoost with SHAP and PDPs Analyses
by Imad Shakir Abbood, Noorhazlinda Abd Rahman and Badorul Hisham Abu Bakar
Appl. Syst. Innov. 2025, 8(4), 96; https://doi.org/10.3390/asi8040096 - 10 Jul 2025
Abstract
Reinforced concrete deep beams (RCDBs) provide significant strength and serviceability for building structures. However, a simple, general, and universally accepted procedure for predicting their shear strength (SS) has yet to be established. This study proposes a novel data-driven approach to predicting the SS of RCDBs using an enhanced CatBoost (CB) model. For this purpose, a new, comprehensive database of RCDBs with shear failure, including 950 experimental specimens, was established and adopted. The model was developed through a customized procedure including feature selection, data preprocessing, hyperparameter tuning, and model evaluation. The CB model was further evaluated against three data-driven models (namely Random Forest, Extra Trees, and AdaBoost) as well as three prominent mechanics-driven models (namely ACI 318, CSA A23.3, and EU2). Finally, the SHAP algorithm was employed for interpretation to increase the model’s reliability. The results revealed that the CB model yielded superior accuracy and outperformed all other models. In addition, the interpretation results showed similar trends between the CB model and mechanics-driven models. The geometric dimensions and concrete properties are the most influential input features on the SS, followed by reinforcement properties. The SS can be significantly improved by increasing the beam width and concrete strength and by reducing the shear span-to-depth ratio. Thus, the proposed interpretable data-driven model has a high potential to be an alternative approach for design practice in structural engineering. Full article
(This article belongs to the Special Issue Recent Developments in Data Science and Knowledge Discovery)

20 pages, 4572 KiB  
Article
Nonlinear Output Feedback Control for Parrot Mambo UAV: Robust Complex Structure Design and Experimental Validation
by Asmaa Taame, Ibtissam Lachkar, Abdelmajid Abouloifa, Ismail Mouchrif and Abdelali El Aroudi
Appl. Syst. Innov. 2025, 8(4), 95; https://doi.org/10.3390/asi8040095 - 7 Jul 2025
Abstract
This paper addresses the problem of controlling quadcopters operating in an environment characterized by unpredictable disturbances such as wind gusts. From a control point of view, this is a nonstandard, highly challenging problem. Fundamentally, these quadcopters are high-order dynamical systems characterized by an under-actuated and highly nonlinear model with coupling between several state variables. The main objective of this work is to achieve a trajectory by tracking desired altitude and attitude. The problem was tackled using a robust control approach with a multi-loop nonlinear controller combined with extended Kalman filtering (EKF). Specifically, the flight control system consists of two regulation loops. The first one is an outer loop based on the backstepping approach and allows for control of the elevation as well as the yaw of the quadcopter, while the second one is the inner loop, which maintains the desired attitude by adjusting the roll and pitch, whose references are generated by the outer loop through a standard PID, to limit the 2D trajectory to a desired set path. The investigation integrates the EKF technique for sensor signal processing to increase measurement accuracy, hence improving the robustness of the flight. The proposed control system was formally developed and experimentally validated through indoor tests using the well-known Parrot Mambo unmanned aerial vehicle (UAV). The obtained results show that the proposed flight control system is efficient and robust, making it suitable for advanced UAV navigation in dynamic scenarios with disturbances. Full article
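The predict/update cycle underlying the EKF used for sensor processing can be illustrated with its scalar, linear special case (the full EKF replaces the constant models below with nonlinear functions and their Jacobians). The noise levels and the hover-altitude signal are invented for illustration:

```python
import random

def kalman_step(x, P, z, F=1.0, Q=1e-4, H=1.0, R=0.04):
    """One predict/update cycle of a scalar (linear) Kalman filter.
    x, P : state estimate and its variance
    z    : new sensor measurement
    F, Q : state-transition model and process-noise variance
    H, R : observation model and measurement-noise variance"""
    x_pred = F * x                            # predict the state...
    P_pred = F * P * F + Q                    # ...and its uncertainty
    K = P_pred * H / (H * P_pred * H + R)     # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)     # correct with the innovation
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Filter noisy altitude readings around a true hover altitude of 1.5 m.
random.seed(1)
x, P = 0.0, 1.0
for _ in range(200):
    z = 1.5 + random.gauss(0, 0.2)            # simulated noisy sensor
    x, P = kalman_step(x, P, z)
```

The estimate converges near the true altitude with far less variance than any single reading, which is the accuracy gain the controller relies on.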
(This article belongs to the Section Control and Systems Engineering)

20 pages, 2918 KiB  
Article
Randomized Feature and Bootstrapped Naive Bayes Classification
by Bharameeporn Phatcharathada and Patchanok Srisuradetchai
Appl. Syst. Innov. 2025, 8(4), 94; https://doi.org/10.3390/asi8040094 - 2 Jul 2025
Abstract
Naive Bayes (NB) classifiers are widely used for their simplicity, computational efficiency, and interpretability. However, their predictive performance can degrade significantly in real-world settings where the conditional independence assumption is often violated. More complex NB variants address this issue but typically introduce structural complexity or require explicit dependency modeling, limiting their scalability and transparency. This study proposes two lightweight ensemble-based extensions, randomized feature naive Bayes (RF-NB) and randomized feature bootstrapped naive Bayes (RFB-NB), designed to enhance robustness and predictive stability without altering the underlying NB model. By integrating randomized feature selection and bootstrap resampling, these methods implicitly reduce feature dependence and noise-induced variance. Evaluation across twenty real-world datasets spanning medical, financial, and industrial domains demonstrates that RFB-NB outperformed classical NB, RF-NB, and k-nearest neighbors in several cases. Although random forest achieved higher average accuracy overall, RFB-NB demonstrated comparable accuracy with notably lower variance and improved predictive stability, specifically in datasets characterized by high noise levels, large dimensionality, or significant class imbalance. These findings underscore the practical and complementary advantages of RFB-NB in challenging classification scenarios. Full article
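A minimal sketch of the bootstrap-plus-random-features idea (not the authors' exact RFB-NB implementation): each ensemble member is a Gaussian naive Bayes model fit on a bootstrap resample of the rows restricted to a random feature subset, and members vote. All parameters and the synthetic data are illustrative:

```python
import math, random

def fit_gnb(X, y, feats):
    """Gaussian naive Bayes over a feature subset: class priors plus
    per-class, per-feature mean and variance."""
    model = {}
    for c in set(y):
        rows = [X[i] for i in range(len(X)) if y[i] == c]
        stats = []
        for j in feats:
            v = [r[j] for r in rows]
            mu = sum(v) / len(v)
            stats.append((mu, sum((a - mu) ** 2 for a in v) / len(v) + 1e-9))
        model[c] = (len(rows) / len(X), stats)
    return model

def predict_gnb(model, x, feats):
    best, best_lp = None, float("-inf")
    for c, (prior, stats) in model.items():
        lp = math.log(prior)
        for (mu, var), j in zip(stats, feats):
            lp -= 0.5 * math.log(2 * math.pi * var) + (x[j] - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

def rfb_nb(X, y, n_models=15, feat_frac=0.6, seed=0):
    """Ensemble: each member sees a bootstrap resample of the rows and a
    random subset of the features; prediction is a majority vote."""
    rng = random.Random(seed)
    p = len(X[0])
    k = max(1, int(feat_frac * p))
    members = []
    for _ in range(n_models):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]   # bootstrap rows
        feats = rng.sample(range(p), k)                        # random features
        members.append((fit_gnb([X[i] for i in idx], [y[i] for i in idx], feats), feats))
    def predict(x):
        votes = [predict_gnb(m, x, f) for m, f in members]
        return max(set(votes), key=votes.count)
    return predict

# Two well-separated synthetic classes, three features each.
rng = random.Random(42)
X, y = [], []
for c, centre in [(0, 0.0), (1, 3.0)]:
    for _ in range(40):
        X.append([centre + rng.gauss(0, 0.5) for _ in range(3)])
        y.append(c)
predict = rfb_nb(X, y)
```

The base model is untouched; only the resampling and feature randomization around it provide the variance reduction the abstract describes.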
(This article belongs to the Special Issue Recent Developments in Data Science and Knowledge Discovery)

16 pages, 3059 KiB  
Article
OFF-The-Hook: A Tool to Detect Zero-Font and Traditional Phishing Attacks in Real Time
by Nazar Abbas Saqib, Zahrah Ali AlMuraihel, Reema Zaki AlMustafa, Farah Amer AlRuwaili, Jana Mohammed AlQahtani, Amal Aodah Alahmadi, Deemah Alqahtani, Saad Abdulrahman Alharthi, Sghaier Chabani and Duaa Ali AL Kubaisy
Appl. Syst. Innov. 2025, 8(4), 93; https://doi.org/10.3390/asi8040093 - 30 Jun 2025
Abstract
Phishing attacks continue to pose serious challenges to cybersecurity, with attackers constantly refining their methods to bypass detection systems. One particularly evasive technique is Zero-Font phishing, which involves the insertion of invisible or zero-sized characters into email content to deceive both users and traditional email filters. Because these characters are not visible to human readers but still processed by email systems, they can be used to evade detection by traditional email filters, obscuring malicious intent in ways that bypass basic content inspection. This study introduces a proactive phishing detection tool capable of identifying both traditional and Zero-Font phishing attempts. The proposed tool leverages a multi-layered security framework, combining structural inspection and machine learning-based classification to detect both traditional and Zero-Font phishing attempts. At its core, the system incorporates an advanced machine learning model trained on a well-established dataset comprising both phishing and legitimate emails. The model alone achieves an accuracy rate of up to 98.8%, contributing significantly to the overall effectiveness of the tool. This hybrid approach enhances the system’s robustness and detection accuracy across diverse phishing scenarios. The findings underscore the importance of multi-faceted detection mechanisms and contribute to the development of more resilient defenses in the ever-evolving landscape of cybersecurity threats. Full article
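The Zero-Font evasion described above is easy to demonstrate: text styled with a zero font size is invisible to readers yet present in the raw content a naive filter scans. A toy regex-based check (a real tool would parse the HTML properly; this sketch and its example strings are invented) might look like:

```python
import re

# Tags whose inline style hides their text from readers (zero font size or
# display:none) while leaving it visible to naive text extraction.
HIDDEN = re.compile(
    r'<[^>]*style\s*=\s*"[^"]*(?:font-size\s*:\s*0|display\s*:\s*none)[^"]*"[^>]*>'
    r'.*?</[^>]+>',
    re.IGNORECASE | re.DOTALL,
)

def visible_text(html):
    """Text as a human sees it: drop hidden spans first, then strip all tags."""
    return re.sub(r"<[^>]+>", "", HIDDEN.sub("", html))

def zero_font_suspect(html):
    """Flag a message when what the reader sees differs from the raw text."""
    raw = re.sub(r"<[^>]+>", "", html)
    return visible_text(html) != raw

benign = '<p>Your invoice is attached.</p>'
attack = '<p>Pay<span style="font-size:0px">X</span>Pal account verify</p>'
```

Here the filter-visible text reads "PayXPal" while the reader sees "PayPal"; comparing the two views exposes the trick, which is the structural-inspection layer the tool combines with its ML classifier.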
(This article belongs to the Special Issue The Intrusion Detection and Intrusion Prevention Systems)

30 pages, 3461 KiB  
Article
A Privacy-Preserving Record Linkage Method Based on Secret Sharing and Blockchain
by Shumin Han, Zikang Wang, Qiang Zhao, Derong Shen, Chuang Wang and Yangyang Xue
Appl. Syst. Innov. 2025, 8(4), 92; https://doi.org/10.3390/asi8040092 - 28 Jun 2025
Abstract
Privacy-preserving record linkage (PPRL) aims to link records from different data sources while ensuring sensitive information is not disclosed. Utilizing blockchain as a trusted third party is an effective strategy for enhancing transparency and auditability in PPRL. However, to ensure data privacy during computation, such approaches often require computationally intensive cryptographic techniques. This can introduce significant computational overhead, limiting the method’s efficiency and scalability. To address this performance bottleneck, we combine blockchain with the distributed computation of secret sharing to propose a PPRL method based on blockchain-coordinated distributed computation. At its core, the approach utilizes Bloom filters to encode data and employs Boolean and arithmetic secret sharing to decompose the data into secret shares, which are uploaded to the InterPlanetary File System (IPFS). Combined with masking and random permutation mechanisms, it enhances privacy protection. Computing nodes perform similarity calculations locally, interacting with IPFS only a limited number of times, effectively reducing communication overhead. Furthermore, blockchain manages the entire computation process through smart contracts, ensuring transparency and correctness of the computation, achieving efficient and secure record linkage. Experimental results demonstrate that this method effectively safeguards data privacy while exhibiting high linkage quality and scalability. Full article
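The encoding-and-sharing pipeline described above can be sketched as follows: records are mapped to Bloom filters over character bigrams, split into Boolean (XOR) secret shares, and compared with Dice similarity. In the actual protocol the similarity is computed on the shares by the computing nodes; here the shares are recombined only to check correctness, and all parameters are illustrative:

```python
import hashlib, random

def bloom(text, m=64, k=2):
    """Encode the character bigrams of `text` into an m-bit Bloom filter
    using k hash functions derived from SHA-256."""
    bits = [0] * m
    for g in (text[i:i + 2] for i in range(len(text) - 1)):
        for seed in range(k):
            h = int(hashlib.sha256(f"{seed}:{g}".encode()).hexdigest(), 16)
            bits[h % m] = 1
    return bits

def share(bits, rng):
    """Boolean (XOR) secret sharing: each share alone is uniformly random,
    but XOR-ing the two shares reconstructs the Bloom filter."""
    r = [rng.randrange(2) for _ in bits]
    return r, [b ^ x for b, x in zip(bits, r)]

def dice(a, b):
    """Dice similarity of two bit vectors (the usual PPRL match score)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

rng = random.Random(0)
rec = bloom("johnson")
share_a, share_b = share(rec, rng)
```

Because a single share reveals nothing, the two shares can be stored at different nodes (e.g. on IPFS, as in the paper) while similar names such as "johnson" and "jonson" still score much higher than dissimilar ones.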

26 pages, 1806 KiB  
Article
From Transactions to Transformations: A Bibliometric Study on Technology Convergence in E-Payments
by Priyanka C. Bhatt, Yu-Chun Hsu, Kuei-Kuei Lai and Vinayak A. Drave
Appl. Syst. Innov. 2025, 8(4), 91; https://doi.org/10.3390/asi8040091 - 28 Jun 2025
Abstract
This study investigates the convergence of blockchain, artificial intelligence (AI), near-field communication (NFC), and mobile technologies in electronic payment (e-payment) systems, proposing an innovative integrative framework to deconstruct the systemic innovations and transformative impacts driven by such technological synergy. Unlike prior research, which often focuses on single-technology adoption, this study uniquely adopts a cross-technology convergence perspective. To our knowledge, this is the first study to empirically map the multi-technology convergence landscape in e-payment using scientometric techniques. By employing bibliometric and thematic network analysis methods, the research maps the intellectual evolution and key research themes of technology convergence in e-payment systems. Findings reveal that while the integration of these technologies holds significant promise, improving transparency, scalability, and responsiveness, it also presents challenges, including interoperability barriers, privacy concerns, and regulatory complexity. Furthermore, this study highlights the potential for convergent technologies to unintentionally deepen the digital divide if not inclusively designed. The novelty of this study is threefold: (1) theoretical contribution—this study expands existing frameworks of technology adoption and digital governance by introducing an integrated perspective on cross-technology adoption and regulatory responsiveness; (2) practical relevance—it offers actionable, stakeholder-specific recommendations for policymakers, financial institutions, developers, and end-users; (3) methodological innovation—it leverages scientometric and topic modeling techniques to capture the macro-level trajectory of technology convergence, complementing traditional qualitative insights. 
In conclusion, this study advances the theoretical foundations of digital finance and provides forward-looking policy and managerial implications, paving the way for a more secure, inclusive, and innovation-driven digital payment ecosystem. Full article
(This article belongs to the Topic Social Sciences and Intelligence Management, 2nd Volume)

23 pages, 3736 KiB  
Article
Performance Analysis of a Hybrid Complex-Valued CNN-TCN Model for Automatic Modulation Recognition in Wireless Communication Systems
by Hamza Ouamna, Anass Kharbouche, Noureddine El-Haryqy, Zhour Madini and Younes Zouine
Appl. Syst. Innov. 2025, 8(4), 90; https://doi.org/10.3390/asi8040090 - 28 Jun 2025
Abstract
This paper presents a novel deep learning-based automatic modulation recognition (AMR) model, designed to classify ten modulation types from complex I/Q signal data. The proposed architecture, named CV-CNN-TCN, integrates Complex-Valued Convolutional Neural Networks (CV-CNNs) with Temporal Convolutional Networks (TCNs) to jointly extract spatial and temporal features while preserving the inherent phase information of the signal. An enhanced variant, CV-CNN-TCN-DCC, incorporates dilated causal convolutions to further strengthen temporal representation. The models are trained and evaluated on the benchmark RadioML2016.10b dataset. At SNR = −10 dB, the CV-CNN-TCN achieves a classification accuracy of 37%, while the CV-CNN-TCN-DCC improves to 40%. In comparison, ResNet reaches 33%, and other models such as CLDNN (convolutional LSTM dense neural network) and SCRNN (Sequential Convolutional Recurrent Neural Network) remain below 30%. At 0 dB SNR, the CV-CNN-TCN-DCC achieves a Jaccard index of 0.58 and an MCC of 0.67, outperforming ResNet (0.55, 0.64) and CNN (0.53, 0.61). Furthermore, the CV-CNN-TCN-DCC achieves 75% accuracy at SNR = 10 dB and maintains over 90% classification accuracy for SNRs above 2 dB. These results demonstrate that the proposed architectures, particularly with dilated causal convolutional enhancements, significantly improve robustness and generalization under low-SNR conditions, outperforming state-of-the-art models in both accuracy and reliability. Full article
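The dilated causal convolution that distinguishes the CV-CNN-TCN-DCC variant can be sketched in plain Python (real-valued and unbatched here for clarity; the paper's networks operate on complex I/Q tensors, and the input and kernel below are toy values):

```python
def dilated_causal_conv(x, w, dilation=1):
    """1-D dilated causal convolution: out[t] combines x[t], x[t-d],
    x[t-2d], ... and never looks at future samples, so stacking layers with
    dilations 1, 2, 4, ... grows the receptive field exponentially."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, wi in enumerate(w):
            idx = t - i * dilation
            if idx >= 0:            # zero-pad on the (causal) left edge
                acc += wi * x[idx]
        out.append(acc)
    return out

y = dilated_causal_conv([1, 2, 3, 4, 5], [1, 1], dilation=2)
```

Causality is what makes the temporal features well defined for streaming signals: changing future samples cannot change earlier outputs.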
(This article belongs to the Section Artificial Intelligence)

26 pages, 8949 KiB  
Article
Real-Time Detection of Hole-Type Defects on Industrial Components Using Raspberry Pi 5
by Mehmet Deniz, Ismail Bogrekci and Pinar Demircioglu
Appl. Syst. Innov. 2025, 8(4), 89; https://doi.org/10.3390/asi8040089 - 27 Jun 2025
Abstract
In modern manufacturing, ensuring quality control for geometric features is critical, yet detecting anomalies in circular components remains underexplored. This study proposes a real-time defect detection framework for metal parts with holes, optimized for deployment on a Raspberry Pi 5 edge device. We fine-tuned and evaluated three deep learning models, ResNet50, EfficientNet-B3, and MobileNetV3-Large, on a grayscale image dataset (43,482 samples) containing various hole defects and class imbalances. Through extensive data augmentation and class-weighting, the models achieved near-perfect binary classification of defective vs. non-defective parts. Notably, ResNet50 attained 99.98% accuracy (precision 0.9994, recall 1.0000), correctly identifying all defects with only one false alarm. MobileNetV3-Large and EfficientNet-B3 likewise exceeded 99.9% accuracy, with slightly more false positives, but offered advantages in model size or interpretability. Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations confirmed that each network focuses on meaningful geometric features (misaligned or irregular holes) when predicting defects, enhancing explainability. These results demonstrate that lightweight CNNs can reliably detect geometric deviations (e.g., mispositioned or missing holes) in real time. The proposed system significantly improves inline quality assurance by enabling timely, accurate, and interpretable defect detection on low-cost hardware, paving the way for smarter manufacturing inspection. Full article

20 pages, 1652 KiB  
Article
Analysis of Spatiotemporal Characteristics of Intercity Travelers Within Urban Agglomeration Based on Trip Chain and K-Prototypes Algorithm
by Shuai Yu, Yuqing Liu and Song Hu
Appl. Syst. Innov. 2025, 8(4), 88; https://doi.org/10.3390/asi8040088 - 26 Jun 2025
Abstract
In the rapid process of urbanization, urban agglomerations have become a key driving factor for regional development and spatial reorganization. The formation and development of urban agglomerations rely on communication between cities. However, the spatiotemporal characteristics of intercity travelers are not fully grasped throughout the entire trip chain. This study proposes a spatiotemporal analysis method for intercity travel in urban agglomerations by constructing origin-to-destination (OD) trip chains using smartphone data, with the Beijing–Tianjin–Hebei urban agglomeration as a case study. The study employed Cramer’s V and Spearman correlation coefficients for multivariate feature selection, identifying 12 key variables from an initial set of 20. Then, optimal cluster configuration was determined via silhouette analysis. Finally, the K-prototypes algorithm was applied to cluster 161,797 intercity trip chains across six transportation corridors in 2019 and 2021, facilitating a comparative spatiotemporal analysis of travel patterns. Results show the following: (1) Intercity travelers are predominantly males aged 19–35, with significantly higher weekday volumes; (2) Modal split exhibits significant spatial heterogeneity—the metro predominates in Beijing while road transport prevails elsewhere; (3) Departure hubs’ waiting times increased significantly in 2021 relative to 2019 baselines; (4) Increased metro mileage correlates positively with extended intra-city travel distances. The results substantially contribute to transportation planning, particularly in optimizing multimodal hub operations and infrastructure investment allocation. Full article
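The K-prototypes dissimilarity at the heart of the clustering step combines squared Euclidean distance on numeric features with a weighted mismatch count on categorical ones. A toy assignment step over invented trip-chain features (travel time in hours, access mode; not the study's 12 selected variables):

```python
def kproto_dist(x, proto, num_idx, cat_idx, gamma=1.0):
    """K-prototypes dissimilarity: squared Euclidean distance on numeric
    features plus a gamma-weighted mismatch count on categorical features."""
    d = sum((x[j] - proto[j]) ** 2 for j in num_idx)
    return d + gamma * sum(x[j] != proto[j] for j in cat_idx)

def assign(points, protos, num_idx, cat_idx, gamma=1.0):
    """Assign each point to its nearest prototype (one K-prototypes step)."""
    return [min(range(len(protos)),
                key=lambda c: kproto_dist(p, protos[c], num_idx, cat_idx, gamma))
            for p in points]

# Toy trip chains: [travel time in hours, access mode].
points = [[0.5, "metro"], [0.6, "metro"], [2.0, "road"], [2.2, "road"]]
protos = [[0.55, "metro"], [2.1, "road"]]
labels = assign(points, protos, num_idx=[0], cat_idx=[1])
```

The full algorithm alternates this assignment with prototype updates (means for numeric features, modes for categorical ones), with gamma balancing the two feature types.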
44 pages, 822 KiB  
Article
Intelligent Active and Reactive Power Management for Wind-Based Distributed Generation in Microgrids via Advanced Metaheuristic Optimization
by Rubén Iván Bolaños, Héctor Pinto Vega, Luis Fernando Grisales-Noreña, Oscar Danilo Montoya and Jesús C. Hernández
Appl. Syst. Innov. 2025, 8(4), 87; https://doi.org/10.3390/asi8040087 - 26 Jun 2025
Abstract
This research evaluates the performance of six metaheuristic algorithms for the active and reactive power management of wind turbines (WTs) integrated into an AC microgrid (MG). The population-based genetic algorithm (PGA) is proposed as the primary optimization strategy and is rigorously compared against five benchmark techniques: Monte Carlo (MC), particle swarm optimization (PSO), the JAYA algorithm, the generalized normal distribution optimizer (GNDO), and the multiverse optimizer (MVO). This study aims to minimize, through independent optimization scenarios, the operating costs, power losses, or CO2 emissions of the microgrid in both grid-connected and islanded modes. To achieve this, a coordinated control strategy for distributed generators is proposed, offering flexible adaptation to economic, technical, or environmental priorities while accounting for the variability of power generation and demand. The proposed optimization model includes active and reactive power constraints for both conventional generators and WTs, along with the technical and regulatory limits imposed on the MG, such as current thresholds and nodal voltage boundaries. To validate the proposed strategy, two test systems are considered: a 33-node network and a 69-node one. These configurations allow the aforementioned optimization strategies to be evaluated under different energy conditions while incorporating the power generation and demand variability of a specific region of Colombia. The analysis covers two time horizons (a representative day of operation and a full week) in order to capture both short-term and weekly fluctuations. The variability is modeled via an artificial neural network that forecasts renewable generation and demand.
Each optimization method undergoes a statistical evaluation based on multiple independent executions, allowing for a comprehensive assessment of its effectiveness in terms of solution quality, average performance, repeatability, and computation time. The proposed methodology exhibits the best performance for the three objectives, with excellent repeatability and computational efficiency across varying microgrid sizes and energy behavior scenarios. Full article
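A population-based genetic algorithm of the kind benchmarked above can be sketched compactly. The version below is a toy stand-in, not the paper's PGA: all names, operators, and parameters are illustrative, and the quadratic "dispatch cost" merely plays the role of an objective over box-constrained power set-points.

```python
import numpy as np

def ga_minimize(f, bounds, pop=40, gens=150, seed=1):
    """Minimal population-based GA sketch: binary tournament selection,
    blend crossover, Gaussian mutation, box constraints via clipping,
    and elitism (the best solution found so far survives each generation)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop, lo.size))
    best_x, best_f = None, np.inf
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, X)
        i = int(np.argmin(fit))
        if fit[i] < best_f:
            best_x, best_f = X[i].copy(), float(fit[i])
        # binary tournament selection of parents
        a, b = rng.integers(0, pop, (2, pop))
        parents = X[np.where(fit[a] < fit[b], a, b)]
        # blend (arithmetic) crossover with a shuffled partner
        w = rng.random((pop, 1))
        X = w * parents + (1 - w) * parents[rng.permutation(pop)]
        # Gaussian mutation scaled to each variable's range, then
        # clip to the generator limits (box constraints)
        X += rng.normal(0.0, 0.05, X.shape) * (hi - lo)
        X = np.clip(X, lo, hi)
        X[0] = best_x  # elitism
    return best_x, best_f

# Example: minimize a quadratic "dispatch cost" over three set-points in [0, 1]
best_x, best_f = ga_minimize(lambda x: float(np.sum((x - 0.3) ** 2)),
                             [(0.0, 1.0)] * 3)
print(best_f)  # close to 0
```

Minimizing cost, losses, or emissions then amounts to swapping in the corresponding objective function, while the clipping step enforces the active/reactive power limits.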

29 pages, 535 KiB  
Review
A Systematic Mapping Study on the Modernization of Legacy Systems to Microservice Architecture
by Lucas Fernando Fávero, Nathalia Rodrigues de Almeida and Frank José Affonso
Appl. Syst. Innov. 2025, 8(4), 86; https://doi.org/10.3390/asi8040086 - 20 Jun 2025
Abstract
Microservice architecture (MSA) has garnered attention in various software communities because of its significant advantages. Organizations have also prioritized migrating their legacy systems to MSA, seeking to reap the intrinsic advantages of this architectural style. Despite its importance, the literature lacks comprehensive studies on the modernization of legacy systems to MSA. Thus, the principal objective of this article is to present a comprehensive overview of this research theme through a mixed-method investigation comprising a systematic mapping study based on 43 studies and an empirical evaluation by industry practitioners. From these, a taxonomy of the initiatives identified in the literature is established, along with the application domains for which such initiatives were designed, the methods used to evaluate them, the main quality attributes identified in our investigation, and the main activities employed in their design. As a result, this article delineates a modernization process based on six macro-activities, designed to facilitate the transition from legacy systems to microservice-based ones. Finally, this article discusses the results in light of the evidence gathered during our investigation, which may serve as a source of inspiration for the design of new initiatives to support software modernization. Full article
