Appl. Syst. Innov., Volume 7, Issue 6 (December 2024) – 26 articles

19 pages, 4140 KiB  
Article
Innovative Ultrasonic Spray Methods for Indoor Disinfection
by Andrey Shalunov, Olga Kudryashova, Vladimir Khmelev, Dmitry Genne, Sergey Terentiev and Viktor Nesterov
Appl. Syst. Innov. 2024, 7(6), 126; https://doi.org/10.3390/asi7060126 - 13 Dec 2024
Abstract
This study explores the challenges associated with dispersing disinfectant liquids for sanitizing individuals, indoor spaces, vehicles, and outdoor areas. Among the various approaches, fine aerosol sprays with a high particle surface area emerge as a particularly promising solution. Ultrasonic spraying, which leverages diverse mechanisms of ultrasound interaction with liquids, offers several distinct advantages. Notably, it enables the production of fine aerosols from liquids with a broad range of physical and chemical properties, including variations in purity, viscosity, and surface tension. This capability is especially critical for disinfectant liquids and suspensions, which often exhibit low surface tension and/or high viscosity. The article provides a comprehensive review of ultrasonic spraying methods and technologies developed by the authors’ team in recent years. It highlights innovative ultrasonic sprayers, including the latest designs, which are capable of generating aerosols with precise dispersion characteristics and high productivity from disinfectant liquids. Full article
(This article belongs to the Section Industrial and Manufacturing Engineering)
14 pages, 423 KiB  
Review
Robotics Applications in the Hospital Domain: A Literature Review
by Elijah M. G. N. Vera Cruz, Sancho Oliveira and Américo Correia
Appl. Syst. Innov. 2024, 7(6), 125; https://doi.org/10.3390/asi7060125 - 12 Dec 2024
Viewed by 180
Abstract
Robotic systems are increasingly being used in healthcare. These systems improve patient care both by freeing healthcare professionals from repetitive tasks and by assisting them with complex procedures. This review examines the development and implementation of robotic systems in healthcare. It also examines the application of artificial intelligence (AI) in giving these systems the autonomy to perform tasks without direct human control. It describes the main areas of use of robots in hospitals, gives examples of the main commercial or research robots, and analyzes the main practical and safety issues associated with the use of these systems. An extensive search for papers related to the topic was conducted in the main databases, including PubMed, IEEE Xplore, MDPI, ScienceDirect, ACM Digital Library, BioMed Central, Springer, and others. This resulted in 59 papers being identified as eligible for this review. The article concludes with a discussion of future research areas that will ensure the effective integration of autonomous robotic systems in healthcare. Full article
17 pages, 460 KiB  
Article
ML-Based Pain Recognition Model Using Mixup Data Augmentation
by Raghu M. Shantharam and Friedhelm Schwenker
Appl. Syst. Innov. 2024, 7(6), 124; https://doi.org/10.3390/asi7060124 - 9 Dec 2024
Viewed by 346
Abstract
Machine learning (ML) has revolutionized healthcare by enhancing diagnostic capabilities because of its ability to analyze large datasets and detect minor patterns often overlooked by humans. This is beneficial, especially in pain recognition, where patient communication may be limited. However, ML models often face challenges such as memorization and sensitivity to adversarial examples. Regularization techniques like mixup, which trains models on convex combinations of data pairs, address these issues by enhancing model generalization. While mixup has proven effective in image, speech, and text datasets, its application to time-series signals like electrodermal activity (EDA) is less explored. This research uses ML for pain recognition with EDA signals from the BioVid Heat Pain Database to distinguish pain by applying mixup regularization to manually extracted EDA features and using a support vector machine (SVM) for classification. The results show that this approach achieves an average accuracy of 75.87% using leave-one-subject-out cross-validation (LOSOCV) compared to 74.61% without mixup. This demonstrates mixup’s efficacy in improving ML model accuracy for pain recognition from EDA signals. This study highlights the potential of mixup in ML as a promising approach to enhance pain assessment in healthcare. Full article
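The mixup regularization this abstract applies to EDA features can be sketched in a few lines: training pairs are replaced by convex combinations with weights drawn from a Beta(α, α) distribution. This is a minimal NumPy illustration, not the authors' code; the feature matrix is a toy stand-in for the hand-crafted EDA features, and the mixed samples would then be fed to an SVM classifier as in the paper.

```python
import numpy as np

def mixup(features, labels, alpha=0.2, rng=None):
    """Create convex combinations of random sample pairs.

    Interpolation weights are drawn from Beta(alpha, alpha), as in the
    original mixup formulation; labels become soft values in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(features)
    idx = rng.permutation(n)                  # random partner for each sample
    lam = rng.beta(alpha, alpha, size=(n, 1))
    x_mix = lam * features + (1 - lam) * features[idx]
    y_mix = lam[:, 0] * labels + (1 - lam[:, 0]) * labels[idx]
    return x_mix, y_mix

# Toy "EDA feature" matrix: 4 samples x 3 hand-crafted features
X = np.array([[0.1, 1.0, 5.0],
              [0.2, 0.8, 4.0],
              [0.9, 0.1, 1.0],
              [0.8, 0.2, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])            # 0 = no pain, 1 = pain

X_mix, y_mix = mixup(X, y, rng=np.random.default_rng(0))
```

For a hard-label classifier such as an SVM, the soft labels `y_mix` are typically thresholded back to {0, 1} before training.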
19 pages, 5626 KiB  
Article
Application of Thermography and Convolutional Neural Network to Diagnose Mechanical Faults in Induction Motors and Gearbox Wear
by Emmanuel Resendiz-Ochoa, Omar Trejo-Chavez, Juan J. Saucedo-Dorantes, Luis A. Morales-Hernandez and Irving A. Cruz-Albarran
Appl. Syst. Innov. 2024, 7(6), 123; https://doi.org/10.3390/asi7060123 - 6 Dec 2024
Viewed by 485
Abstract
Induction motors and gearboxes play an important role in industry because they are indispensable components that keep a large number of machines operating. In this research, a diagnosis method is proposed for the detection of different faults in an electromechanical system through infrared thermography and a convolutional neural network (CNN). During the experiment, we tested different conditions in the motor and the gearbox. The induction motor was operated in four conditions: a healthy state, one broken bar, a damaged bearing, and misalignment; the gearbox was operated in three conditions: healthy gears, 50% wear, and 75% wear. The motor failures and gear wear were induced by different machining operations. Data augmentation was then performed using basic transformations such as mirror image and brightness variation. Ablation tests were also carried out, and a convolutional neural network with a basic architecture was proposed; the performance indicators show a precision of 98.53%, accuracy of 98.54%, recall of 98.65%, and F1-score of 98.55%. The resulting system confirms that, through the use of infrared thermography and deep learning, it is possible to identify faults at different points of an electromechanical system. Full article
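The basic augmentation transformations the abstract names (mirror image and brightness variation) can be sketched directly on an image array. This is an illustrative NumPy sketch under the assumption of an 8-bit grayscale thermogram; the gain factors are arbitrary choices, not values from the paper.

```python
import numpy as np

def mirror(img):
    """Horizontal flip (mirror image)."""
    return img[:, ::-1]

def vary_brightness(img, gain):
    """Scale pixel intensities by `gain` and clip to the 8-bit range."""
    return np.clip(img.astype(np.float64) * gain, 0, 255).astype(np.uint8)

# Toy 2x3 "thermogram" with one hot spot (value 200)
img = np.array([[10, 20, 200],
                [40, 50, 60]], dtype=np.uint8)

augmented = [mirror(img),               # left-right flipped copy
             vary_brightness(img, 1.5), # brighter copy (clipped at 255)
             vary_brightness(img, 0.5)] # darker copy
```

Each original thermogram thus yields several training images, which is how such basic transformations enlarge a small fault dataset.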
28 pages, 4043 KiB  
Article
A Novel Optimization Algorithm Inspired by Egyptian Stray Dogs for Solving Multi-Objective Optimal Power Flow Problems
by Mohamed H. ElMessmary, Hatem Y. Diab, Mahmoud Abdelsalam and Mona F. Moussa
Appl. Syst. Innov. 2024, 7(6), 122; https://doi.org/10.3390/asi7060122 - 3 Dec 2024
Viewed by 581
Abstract
One of the most important issues that can significantly affect the electric power network's ability to operate sustainably is the optimal power flow (OPF) problem. It involves reaching the most efficient operating conditions for the electrical networks while maintaining reliability and system constraints. Solving the OPF problem in transmission networks lowers three critical expenses: operation costs, transmission losses, and voltage drops. The OPF is characterized by nonlinear and nonconvex behavior due to the power flow equations, which define the relationship between power generation, load demand, and the physical constraints of network components. The solution space for OPF is massive and multimodal, making optimization a challenging task that calls for advanced mathematical and computational methods. This paper introduces an innovative metaheuristic algorithm, Egyptian Stray Dog Optimization (ESDO), inspired by the behavior of Egyptian stray dogs and used for solving both single- and multi-objective optimal power flow problems in transmission networks. The proposed technique is compared with the particle swarm optimization (PSO), multi-verse optimization (MVO), grasshopper optimization (GOA), Harris hawk optimization (HHO), and hippopotamus optimization (HO) algorithms through MATLAB simulations by applying them to the IEEE 30-bus system under various operational circumstances. The results obtained indicate that, in comparison to the other algorithms, the suggested technique gives significantly enhanced performance in solving the OPF problem. Full article
29 pages, 3568 KiB  
Systematic Review
eXplainable Artificial Intelligence in Process Engineering: Promises, Facts, and Current Limitations
by Luigi Piero Di Bonito, Lelio Campanile, Francesco Di Natale, Michele Mastroianni and Mauro Iacono
Appl. Syst. Innov. 2024, 7(6), 121; https://doi.org/10.3390/asi7060121 - 30 Nov 2024
Viewed by 1203
Abstract
Artificial Intelligence (AI) has been swiftly incorporated into industry to become a part of both customer services and manufacturing operations. To effectively address the ethical issues now being examined by governments, AI models must be explainable in order to be used in both scientific and societal contexts. The current state of eXplainable artificial intelligence (XAI) in process engineering is examined in this study through a systematic literature review (SLR), with particular attention paid to the technology's effect, degree of adoption, and potential to improve process and product quality. Due to restricted access to sizable, reliable datasets, XAI research in process engineering is still primarily exploratory or propositional, despite noteworthy applicability in well-known case studies. According to our research, XAI is increasingly positioned as a tool for decision support, with a focus on robustness and dependability in process optimization, maintenance, and quality assurance. This study, however, emphasizes that the use of XAI in process engineering is still in its early stages, and there is significant potential for methodological development and wider use across technical domains. Full article
(This article belongs to the Section Artificial Intelligence)
15 pages, 4088 KiB  
Article
Options for Performing DNN-Based Causal Speech Denoising Using the U-Net Architecture
by Hwai-Tsu Hu and Tung-Tsun Lee
Appl. Syst. Innov. 2024, 7(6), 120; https://doi.org/10.3390/asi7060120 - 29 Nov 2024
Viewed by 451
Abstract
Speech enhancement technology seeks to improve the quality and intelligibility of speech signals degraded by noise, particularly in telephone communications. Recent advancements have focused on leveraging deep neural networks (DNN), especially U-Net architectures, for effective denoising. In this study, we evaluate the performance of a 6-level skip-connected U-Net constructed using either conventional convolution activation blocks (CCAB) or innovative global local former blocks (GLFB) across different processing domains: temporal waveform, short-time Fourier transform (STFT), and short-time discrete cosine transform (STDCT). Our results indicate that the U-Nets achieve better signal-to-noise ratio (SNR) and perceptual evaluation of speech quality (PESQ) scores when applied in the STFT and STDCT domains, with comparable short-time objective intelligibility (STOI) scores across all domains. Notably, the GLFB-based U-Net outperforms its CCAB counterpart in metrics such as CSIG, CBAK, COVL, and PESQ, while maintaining fewer learnable parameters. Furthermore, we propose domain-specific composite loss functions, considering the acoustic and perceptual characteristics of the spectral domain, to enhance the perceptual quality of denoised speech. Our findings provide valuable insights that can guide the optimization of DNN designs for causal speech denoising. Full article
(This article belongs to the Section Information Systems)
15 pages, 1791 KiB  
Article
Human Identification Based on Electroencephalogram Analysis When Entering a Password Phrase on a Keyboard
by Alexey Sulavko and Alexander Samotuga
Appl. Syst. Innov. 2024, 7(6), 119; https://doi.org/10.3390/asi7060119 - 29 Nov 2024
Viewed by 368
Abstract
The paper proposes a method for identifying a person based on EEG parameters recorded while the user enters a password phrase on the keyboard. The method is presented in two versions: for a two-channel EEG (frontal leads only) and a six-channel EEG. A database was formed of EEGs from 95 subjects who entered a password phrase on the keyboard, including recordings in altered psychophysiological states (sleepy and tired). During the experiment, the subjects' EEG data were recorded. The experiment on collecting data in each state was conducted on different days. The signals were segmented in such a way that the time of entering the password phrase corresponded to the EEG segment used to identify the subject. The EEG signals are processed using two autoencoders trained on EEG data (on spectrograms of the original signals and of their autocorrelation functions). The encoder is used to extract signal features. After the features are extracted, identification is performed using a Bayesian classifier. The achieved error level was 0.8% for six-channel EEGs and 1.3% for two-channel EEGs. The advantages of the proposed identification method are that the subject does not need to be put into a state of rest, and no additional stimulation is required. Full article
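The final identification step, a Bayesian classifier over encoder-extracted features, can be sketched as a Gaussian naive Bayes in plain NumPy. This is an illustrative stand-in: the paper does not specify this exact classifier variant, and the 2-D "subject signature" vectors below are toy substitutes for real autoencoder outputs.

```python
import numpy as np

class GaussianNaiveBayes:
    """Per-class Gaussian likelihoods with independent features."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.log_prior = np.log(np.array([np.mean(y == c) for c in self.classes]))
        return self

    def predict(self, X):
        # log p(x | c) + log p(c) for every class; pick the argmax class
        log_lik = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                          + (X[None, :, :] - self.mu[:, None, :]) ** 2
                          / self.var[:, None, :]).sum(axis=2)
        return self.classes[np.argmax(log_lik + self.log_prior[:, None], axis=0)]

# Toy "subject signatures": two subjects, 2-D feature vectors
X = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
model = GaussianNaiveBayes().fit(X, y)
pred = model.predict(np.array([[0.05, 0.05], [1.05, 1.05]]))
```

Each enrolled subject contributes one class; an unseen EEG segment is assigned to the subject whose feature distribution gives it the highest posterior.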
(This article belongs to the Section Human-Computer Interaction)
30 pages, 5093 KiB  
Article
An Innovative Applied Control System of Helicopter Turboshaft Engines Based on Neuro-Fuzzy Networks
by Serhii Vladov, Oleksii Lytvynov, Victoria Vysotska, Viktor Vasylenko, Petro Pukach and Myroslava Vovk
Appl. Syst. Innov. 2024, 7(6), 118; https://doi.org/10.3390/asi7060118 - 29 Nov 2024
Viewed by 481
Abstract
This study focuses on the development of an innovative fault-tolerant fuzzy automatic control system for helicopter turboshaft engines to enhance safety and efficiency in various flight modes. Unlike traditional systems, the proposed automatic control system incorporates a fuzzy regulator with an adaptive control mechanism, allowing for dynamic fuel flow and blade pitch angle adjustment based on changing conditions. The scientific novelty lies in distinguishing separate models for the helicopter turboshaft engines and the fuel metering unit, significantly improving control accuracy and adaptability to current flight conditions. During experimental research on the TV3-117 engine installed on the Mi-8MTV helicopter, a parametric modeling system was developed to simulate engine operation in real time and interact with higher-level systems. Innovation is evident in the creation of a failure model that accounts for dynamic changes and probabilistic characteristics, enabling the prediction of failures and minimizing their impact on the system. The results demonstrate the high effectiveness of the proposed model, achieving an accuracy of 99.455% while minimizing the loss function, confirming its reliability for practical application in dynamic flight conditions. Full article
16 pages, 29747 KiB  
Article
Identification of Elephant Rumbles in Seismic Infrasonic Signals Using Spectrogram-Based Machine Learning
by Janitha Vidunath, Chamath Shamal, Ravindu Hiroshan, Udani Gamlath, Chamira U. S. Edussooriya and Sudath R. Munasinghe
Appl. Syst. Innov. 2024, 7(6), 117; https://doi.org/10.3390/asi7060117 - 29 Nov 2024
Viewed by 555
Abstract
This paper presents several machine learning methods and highlights the most effective one for detecting elephant rumbles in infrasonic seismic signals. The design and implementation of electronic circuitry to amplify, filter, and digitize the seismic signals captured through geophones are presented. The process converts seismic rumbles to a spectrogram, and the existing methods of spectrogram feature extraction and appropriate machine learning algorithms are compared on their merits for automatic seismic rumble identification. A novel method of denoising the spectrum that leads to enhanced accuracy in identifying seismic rumbles is presented. It is experimentally found that the combination of the Mel-frequency cepstral coefficient (MFCC) feature extraction method and the ridge classifier machine learning algorithm gives the highest accuracy, 97%, in detecting infrasonic elephant rumbles hidden in seismic signals. The trained machine learning algorithm can run quite efficiently on general-purpose embedded hardware such as a Raspberry Pi; hence, the method provides a cost-effective and scalable platform on which to develop a tool to remotely localize elephants, which would help mitigate the human–elephant conflict. Full article
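The ridge-classifier step the abstract names has a simple closed form: regularized least squares on ±1-encoded labels, with the sign of the fitted score giving the class. The sketch below illustrates that mechanism in NumPy; the 13-dimensional "MFCC" vectors are random stand-ins, not features from the paper's seismic pipeline.

```python
import numpy as np

def fit_ridge(X, y_pm, alpha=1.0):
    """Closed-form ridge regression on +/-1 labels: w = (X^T X + aI)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y_pm)

def predict(X, w):
    return (X @ w >= 0).astype(int)   # score >= 0 -> class 1 ("rumble")

rng = np.random.default_rng(42)
# Stand-in "MFCC" features: class 0 clustered at -1, class 1 at +1
X0 = rng.normal(-1.0, 0.3, size=(50, 13))
X1 = rng.normal(+1.0, 0.3, size=(50, 13))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

w = fit_ridge(X, np.where(y == 1, 1.0, -1.0))
acc = (predict(X, w) == y).mean()
```

The closed-form solve is one reason a ridge classifier runs comfortably on embedded hardware such as a Raspberry Pi: prediction is a single dot product per frame.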
21 pages, 2804 KiB  
Article
Addressing the Interoperability of Electronic Health Records: The Technical and Semantic Interoperability, Preserving Privacy and Security Framework
by Adetunji Ademola, Carlisle George and Glenford Mapp
Appl. Syst. Innov. 2024, 7(6), 116; https://doi.org/10.3390/asi7060116 - 19 Nov 2024
Viewed by 944
Abstract
Interoperability has become crucial in the world of electronic health records, allowing for seamless data exchange and integration across diverse settings. It facilitates the integration of disparate systems, ensures that patient records are accessible, and enhances the care-delivery process. The current interoperability landscape of electronic health records is saddled with challenges hindering efficient interoperability. Existing interoperability frameworks have not adequately addressed many of the challenges relating to data exchange, security and privacy. To address these challenges, the TASIPPS (Technical and Semantic Interoperability, Preserving Privacy and Security) framework is proposed as a comprehensive approach to achieving efficient interoperability. The TASIPPS framework integrates robust security and privacy measures, providing real-time access to electronic health records that enable precise diagnoses, timely treatment plans and improved patient outcomes. The TASIPPS framework offers a holistic and effective solution to healthcare interoperability challenges. A comparison of the framework with existing frameworks showed that the TASIPPS framework addresses key limitations in privacy, security, and scalability, while providing enhanced interoperability across distinct healthcare systems, positioning it as a more comprehensive solution for modern healthcare needs. Full article
(This article belongs to the Section Medical Informatics and Healthcare Engineering)
18 pages, 13406 KiB  
Article
Trajectory Preview Tracking Control for Self-Balancing Intelligent Motorcycle Utilizing Front-Wheel Steering
by Fei Lai, Hewang Hu and Chaoqun Huang
Appl. Syst. Innov. 2024, 7(6), 115; https://doi.org/10.3390/asi7060115 - 16 Nov 2024
Viewed by 604
Abstract
Known for their compact size, mobility, and off-road capabilities, motorcycles are increasingly used for logistics, emergency rescue, and reconnaissance. However, due to their two-wheeled nature, motorcycles are susceptible to instability, heightening the risk of tipping during cornering. This study explores the following aspects: (1) The design of a front-wheel steering self-balancing controller, which achieves self-balance during motion by adjusting the front-wheel steering angle through manipulation of handlebar torque. (2) Trajectory tracking control based on preview control theory, which establishes a proportional relationship between lateral deviation and lean angle, as determined by path preview; the desired lean angle then serves as input for the self-balancing controller. (3) A pre-braking controller for enhanced active safety: to prevent lateral slide on wet and slippery surfaces, the controller is designed considering the motorcycle's maximum braking deceleration. These advancements were validated via a joint BikeSim and Matlab/Simulink simulation, which included scenarios such as double lane changes and 60 m-radius turns. The results demonstrate that the intelligent motorcycle equipped with the proposed control algorithm tracks trajectories and maintains stability effectively. Full article
17 pages, 3334 KiB  
Article
Methods of Balancing Technological Systems of Multiproduct Production
by Islam A. Alexandrov, Maxim S. Mikhailov and Leonid M. Chervyakov
Appl. Syst. Innov. 2024, 7(6), 114; https://doi.org/10.3390/asi7060114 - 13 Nov 2024
Viewed by 505
Abstract
The functioning of the machine-building industry has its specifics, particularly periodic changes in the range (size, configuration, and others) of manufactured products. In addition, it is essential to consider the need to reduce the time spent on the production of each unit. Almost continuous changes in technology, failures in the supply of raw materials, uncoordinated logistics, and many other factors often cause significant and unproductive costs, leading to an increase in the duration of the technological stage. The most promising direction for reducing the technological time of manufacturing products at multiproduct enterprises is to reduce the waiting time through the uniform distribution of each technological transition according to the state of the available workshop equipment (plant, production area, enterprise). This study proposes a novel model of technological systems that enables the adaptation of technological processes for part manufacturing and comprises data structures that define their technical capabilities. The proposed algorithm facilitates a reduction in downtime and an increase in the equipment utilization factor. It makes it possible to optimize the technological processes that change the structure of each production operation to adapt to the existing technology. Testing this methodology demonstrated a significant increase of 8% in the process utilization rate of machinery. Full article
24 pages, 12109 KiB  
Article
Case Study of an Integrated Design and Technical Concept for a Scalable Hyperloop System
by Domenik Radeck, Florian Janke, Federico Gatta, João Nicolau, Gabriele Semino, Tim Hofmann, Nils König, Oliver Kleikemper, Felix He-Mao Hsu, Sebastian Rink, Felix Achenbach and Agnes Jocher
Appl. Syst. Innov. 2024, 7(6), 113; https://doi.org/10.3390/asi7060113 - 11 Nov 2024
Viewed by 1034
Abstract
This paper presents the design process and resulting technical concept for an integrated hyperloop system, aimed at realizing efficient high-speed ground transportation. This study integrates various functions into a coherent and technically feasible solution, with key design decisions that optimize performance and cost-efficiency. An iterative design process with domain-specific experts, regular reviews, and a dataset with a single source of truth were employed to ensure continuous and collective progress. The proposed hyperloop system features a maximum speed of 600 km/h and a capacity of 21 passengers per pod (vehicle). It employs air docks for efficient boarding, electromagnetic suspension (EMS) combined with electrodynamic suspension (EDS) for high-speed lane switching, and short stator motor technology for propulsion. Cooling is managed through water evaporation at an operating pressure of 10 mbar, while a 300 kW inductive power supply (IPS) provides onboard power. The design includes a safety system that avoids emergency exits along the track and utilizes separated safety-critical and high-bandwidth communication. With prefabricated concrete parts used for the tube, construction costs can be reduced and scalability improved. A dimensioned cross-sectional drawing, as well as a preliminary pod mass budget and station layout, are provided, highlighting critical technical systems and their interactions. Calculations of energy consumption per passenger kilometer, accounting for all functions, demonstrate a distinct advantage over existing modes of transportation, achieving greater efficiency even at high speeds and with smaller vehicle sizes. This work demonstrates the potential of a well-integrated hyperloop system to significantly enhance transportation efficiency and sustainability, positioning it as a promising extension to existing modes of travel. The findings offer a solid framework for future hyperloop development, encouraging further research, standardization efforts, and public dissemination for continued advancements. Full article
(This article belongs to the Section Control and Systems Engineering)
22 pages, 7697 KiB  
Article
Using IoT for Cistern and Water Tank Level Monitoring
by Miguel A. Wister, Ernesto Leon, Alejandro Alejandro-Carrillo, Pablo Pancardo and Jose A. Hernandez-Nolasco
Appl. Syst. Innov. 2024, 7(6), 112; https://doi.org/10.3390/asi7060112 - 11 Nov 2024
Viewed by 730
Abstract
This paper proposes an experimental design to publish online the measurements obtained from four sensors: one sensor inside a cistern measures the level of drinking water, another sensor in a water tank monitors its level, a third sensor measures water flow or pressure from the pipes, and a fourth sensor assesses water quality. Several tank filling and emptying tests were performed. Experimental results demonstrated that when the system detected that there was no water in the tank, it turned on the water pump to fill the tank to 100% of its storage capacity; meanwhile, the water levels in the cistern and tank, the flow of water from the pipes, and the quality of the water could be visualized on a dashboard. In short, this proposal monitors water levels and flows through the Internet of Things. Data collected by the sensors are posted online and stored in a database. Full article
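The pump behavior the experiments describe — switch on when the tank runs dry, keep filling until 100% capacity — is a classic hysteresis (bang-bang) controller. The sketch below is an illustrative assumption of how such logic might look, not code from the deployed system; the threshold values and function names are invented for the example.

```python
def pump_command(tank_level_pct, pump_on, low=0.0, high=100.0):
    """Decide the pump state from the tank level (percent full).

    Turns the pump on when the tank is empty and keeps it running until
    the tank reaches 100% of capacity; the dead band between `low` and
    `high` (hysteresis) avoids rapid on/off cycling near the thresholds.
    """
    if tank_level_pct <= low:
        return True          # tank empty: start filling from the cistern
    if tank_level_pct >= high:
        return False         # tank full: stop the pump
    return pump_on           # in between: keep the current state

# Simulated fill cycle: empty tank, filling, full, then draining again
state = False
log = []
for level in [0.0, 30.0, 60.0, 100.0, 80.0]:
    state = pump_command(level, state)
    log.append(state)
```

In an IoT deployment, each iteration would read the level sensor, publish the reading to the dashboard/database, and drive the pump relay with the returned state.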
(This article belongs to the Section Information Systems)
16 pages, 22965 KiB  
Article
Materialisation of Complex Interior Spaces for the Insertion and Visualisation of Environmental Data in HBIM Models
by Renan Rolim and Concepción López-González
Appl. Syst. Innov. 2024, 7(6), 111; https://doi.org/10.3390/asi7060111 - 7 Nov 2024
Viewed by 651
Abstract
The Heritage Building Information Modelling (HBIM) methodology and environmental monitoring sensors are among the most widely utilised tools for the digital documentation of heritage buildings and the recording of changes in their environmental conditions. The creation of diverse methodologies for integrating sensor data [...] Read more.
The Heritage Building Information Modelling (HBIM) methodology and environmental monitoring sensors are among the most widely utilised tools for the digital documentation of heritage buildings and the recording of changes in their environmental conditions. The creation of diverse methodologies for integrating sensor data and HBIM models is gradually becoming more prevalent, necessitating the development of alternative approaches to their integrated visualisation and analysis. This paper presents the findings of research conducted with the objective of establishing a 3D modelling process using Autodesk Revit® 2024.1 that allows for more accurate measurement of the interiors of heritage buildings with complex shapes. The interiors are then materialised and prepared to be tagged with informative parameters for 3D visual analysis within the BIM software itself. This process also makes it possible to export the data together with the 3D model to external platforms. To demonstrate the efficacy of this process, the church of the Real Colegio-Seminario de Corpus Christi in Valencia (Colegio del Patriarca), Spain, has been used as a case study. Full article
(This article belongs to the Section Control and Systems Engineering)
30 pages, 459 KiB  
Review
Factors Impacting the Adoption and Acceptance of ChatGPT in Educational Settings: A Narrative Review of Empirical Studies
by Mousa Al-kfairy
Appl. Syst. Innov. 2024, 7(6), 110; https://doi.org/10.3390/asi7060110 - 7 Nov 2024
Abstract
This narrative review synthesizes and analyzes empirical studies on the adoption and acceptance of ChatGPT in higher education, addressing the need to understand the key factors influencing its use by students and educators. Anchored in theoretical frameworks such as the Technology Acceptance Model (TAM), Unified Theory of Acceptance and Use of Technology (UTAUT), Diffusion of Innovation (DoI) Theory, Technology–Organization–Environment (TOE) model, and Theory of Planned Behavior, this review highlights the central constructs shaping adoption behavior. The confirmed factors include hedonic motivation, usability, perceived benefits, system responsiveness, and relative advantage, whereas the effects of social influence, facilitating conditions, privacy, and security vary. Conversely, technology readiness and extrinsic motivation remain unconfirmed as consistent predictors. This study employs a qualitative synthesis of 40 peer-reviewed empirical studies, applying thematic analysis to uncover patterns in the factors driving ChatGPT adoption. The findings reveal that, while the traditional technology adoption models offer valuable insights, a deeper exploration of the contextual and psychological factors is necessary. The study’s implications inform future research directions and institutional strategies for integrating AI to support educational innovation. Full article
33 pages, 733 KiB  
Article
Evaluation of Quality of Innovative E-Learning in Higher Education: An Insight from Poland
by Radosław Wolniak and Kinga Stecuła
Appl. Syst. Innov. 2024, 7(6), 109; https://doi.org/10.3390/asi7060109 - 5 Nov 2024
Abstract
The paper presents the results of research on the quality of e-learning in Polish higher education. The authors used an internet questionnaire for the study; the research sample comprised 621 students. First, the researchers determined 14 variables that are important for the quality of e-learning. The students then evaluated these variables with scores from 1 to 5. The students agreed most strongly with the following statements: “using the e-learning platform is convenient” (average 4.20, median 5.00) and “logging in to the e-learning platform is easy” (average 4.38, median 5.00). Moreover, the authors studied the relationship between the quality of e-learning at Polish universities and the following variables: the ease with which the student can acquire content in traditional teaching and in e-learning, the student’s knowledge of information technology and possession of the resources necessary for e-learning, and the student’s assessment of the innovativeness of the e-learning solutions used by the university where the student studies. Full article
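The average/median pairs quoted above are the standard summary statistics for Likert-scale responses. A minimal sketch of how such figures are computed, using made-up scores rather than the study's data:

```python
# Illustrative computation of average and median scores for one survey
# statement; the responses below are invented, not the study's data.
from statistics import mean, median

scores = [5, 5, 4, 5, 3, 5, 4, 5, 5, 4]   # Likert responses on a 1-5 scale
avg = round(mean(scores), 2)              # -> 4.5
med = median(scores)                      # -> 5.0
```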
16 pages, 4421 KiB  
Article
A Novel Spring-Actuated Low-Velocity Impact Testing Setup
by Mesut Kucuk, Moheldeen Hejazi and Ali Sari
Appl. Syst. Innov. 2024, 7(6), 108; https://doi.org/10.3390/asi7060108 - 31 Oct 2024
Abstract
Evaluating the behavior of materials and their response under low-velocity dynamic impact (less than 30 m/s) is a challenging task in various industries. It requires customized test methods to replicate real-world impact scenarios and capture important material responses accurately. This study introduces a novel spring-actuated testing setup for low-velocity impact (LVI) scenarios, addressing the limitations of existing methods. The setup provides tunable parameters, including adjustable impactor mass (1 to 250 kg), velocity (0.1 to 32 m/s), and spring stiffness (100 N/m to 100 kN/m), allowing for flexible simulation of dynamic impact conditions. Validation experiments on steel plates with a support span of 800 mm and thickness of 5 mm demonstrated the system’s satisfactory accuracy in measuring impact forces (up to 714.2 N), displacements (up to 40.5 mm), and velocities. A calibration procedure is also explored to estimate energy loss using numerical modeling, further enhancing the test setup’s precision and utility. The results underline the effectiveness of the proposed experimental setup in capturing material responses during low-velocity impact events. Full article
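The relationship between spring stiffness, impactor mass, and velocity in such a setup follows from energy conservation: a lossless spring launch gives (1/2)kx² = (1/2)mv². The paper's calibration corrects for the energy losses this ideal relation ignores; the sketch below shows only the lossless case, with illustrative values.

```python
import math

def impact_velocity(k: float, x: float, m: float) -> float:
    """Ideal (lossless) velocity of a spring-launched impactor:
    (1/2) k x^2 = (1/2) m v^2  ->  v = x * sqrt(k / m).
    k: spring stiffness [N/m], x: compression [m], m: impactor mass [kg]."""
    return x * math.sqrt(k / m)

# e.g. a 10 kN/m spring compressed 0.4 m launching a 5 kg impactor
v = impact_velocity(10_000.0, 0.4, 5.0)   # ~17.9 m/s before losses
```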
15 pages, 905 KiB  
Article
Optimizing Security and Cost Efficiency in N-Level Cascaded Chaotic-Based Secure Communication System
by Talal Bonny and Wafaa Al Nassan
Appl. Syst. Innov. 2024, 7(6), 107; https://doi.org/10.3390/asi7060107 - 31 Oct 2024
Abstract
In recent years, chaos-based secure communication systems have garnered significant attention for their unique attributes, including sensitivity to initial conditions and periodic orbit density. However, existing systems face challenges in balancing encryption strength with practical implementation, especially for multiple levels. This paper addresses this gap by introducing a novel N-level cascaded chaotic-based secure communication system for voice encryption, leveraging the four-dimensional unified hyperchaotic system. Performance evaluation is conducted using various security metrics, including Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), Percent Residual Deviation (PRD), and correlation coefficient, as well as Field-Programmable Gate Array (FPGA) resource metrics. A new Value-Based Performance Metrics (VBPM) framework is also introduced, focusing on both security and efficiency. Simulation results reveal that the system achieves optimal performance at N = 4 levels, demonstrating significant improvements in both security and FPGA resource utilization compared to existing approaches. This research offers a scalable and cost-efficient solution for secure communication systems, with broader implications for real-time encryption in practical applications. Full article
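Two of the security metrics named above, Percent Residual Deviation (PRD) and Signal-to-Noise Ratio (SNR), have standard definitions that can be sketched directly; the signals below are toy values, not the paper's voice data.

```python
import numpy as np

def prd(original: np.ndarray, recovered: np.ndarray) -> float:
    """Percent Residual Deviation between the original and the decrypted
    signal: 100 * sqrt(sum((x - y)^2) / sum(x^2))."""
    return 100.0 * np.sqrt(np.sum((original - recovered) ** 2)
                           / np.sum(original ** 2))

def snr_db(original: np.ndarray, recovered: np.ndarray) -> float:
    """Signal-to-Noise Ratio of the recovered signal, in dB."""
    noise = original - recovered
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

x = np.array([1.0, -2.0, 3.0, -1.0])          # toy "original" signal
y = x + np.array([0.1, -0.1, 0.1, -0.1])      # toy "recovered" signal
```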
37 pages, 13716 KiB  
Article
Making Tax Smart: Feasibility of Distributed Ledger Technology for Building Tax Compliance Functionality to Central Bank Digital Currency
by Panos Louvieris, Georgios Ioannou and Gareth White
Appl. Syst. Innov. 2024, 7(6), 106; https://doi.org/10.3390/asi7060106 - 29 Oct 2024
Abstract
The latest advancements in distributed ledger technology (DLT) and payment architectures such as the UK’s New Payments Architecture present opportunities for leveraging the hidden informational value and intelligence within payments. In this paper, we present Smart Money, an infrastructure capability for a central bank digital currency (CBDC) which enables real-time value-added tax split payments, oversight, controlled access, and smart policy implementation. This capability is implemented as a prototype called Making Tax Smart (MTS), which utilises the open-source R3 Corda DLT framework. The results presented herein confirm that it is feasible to build an MTS capability which is scalable and co-exists with the current payment systems. Smart Money CBDC has the potential to mobilise payments data, transforming the role of money from a blunt instrument to a government policy sensor and actuator without disrupting the existing money system. DLT, smart contracts, and programmable money have a crucial role to play with benefits for government departments, the economy, and society as a whole. Full article
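The core of a VAT split payment is routing the tax portion of each transaction to the authority at payment time. A minimal sketch of that arithmetic, assuming a flat VAT rate; the paper's Corda-based smart-contract logic is considerably more involved.

```python
from decimal import Decimal, ROUND_HALF_UP

def split_vat(gross: Decimal, vat_rate: Decimal) -> tuple[Decimal, Decimal]:
    """Split a gross payment into the net amount routed to the merchant
    and the VAT portion routed to the tax authority. Illustrative only;
    rate handling and rounding rules are assumptions."""
    net = (gross / (1 + vat_rate)).quantize(Decimal("0.01"), ROUND_HALF_UP)
    return net, gross - net

net, vat = split_vat(Decimal("120.00"), Decimal("0.20"))   # 100.00 / 20.00
```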
15 pages, 5490 KiB  
Article
Investigation of the Features Influencing the Accuracy of Wind Turbine Power Calculation at Short-Term Intervals
by Pavel V. Matrenin, Dmitry A. Harlashkin, Marina V. Mazunina and Alexandra I. Khalyasmaa
Appl. Syst. Innov. 2024, 7(6), 105; https://doi.org/10.3390/asi7060105 - 29 Oct 2024
Abstract
The accurate prediction of wind power generation, as well as the development of a digital twin of a wind turbine, require estimation of the power curve. Actual measurements of generated power, especially over short-term intervals, show that in many cases the power generated differs from the calculated power, which considers only the wind speed and the technical parameters of the wind turbine. Some of these measurements are erroneous, while others are influenced by additional factors affecting generation beyond wind speed alone. This study presents an investigation of the features influencing the accuracy of calculations of wind turbine power at short-term intervals. The open dataset of SCADA-system measurements from a real wind turbine is used. It is discovered that using ensemble machine learning models and additional features, including the actual power from the previous time step, enhances the accuracy of the wind power calculation. The root-mean-square error achieved is 113 kW, with the nominal capacity of the wind turbine under consideration being 3.6 MW. Consequently, the ratio of the root-mean-square error to the nominal capacity is 3%. Full article
(This article belongs to the Special Issue Wind Energy and Wind Turbine System)
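The "calculated power, which considers only the wind speed and the technical parameters" mentioned above is the idealised power curve P = ½ρAC_p v³, capped at the rated capacity. A sketch with assumed parameters (not those of the turbine in the study), together with the quoted error ratio:

```python
import math

def turbine_power_kw(v: float, rho: float = 1.225, radius: float = 60.0,
                     cp: float = 0.45, rated_kw: float = 3600.0) -> float:
    """Idealised power curve P = 0.5 * rho * A * Cp * v^3, capped at the
    rated capacity. All parameters here are illustrative assumptions."""
    p = 0.5 * rho * math.pi * radius ** 2 * cp * v ** 3 / 1000.0
    return min(p, rated_kw)

# The accuracy figure quoted above: RMSE relative to nominal capacity
ratio = 113.0 / 3600.0   # ~0.031, i.e. about 3%
```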
19 pages, 5332 KiB  
Article
Enhancing Industrial Valve Diagnostics: Comparison of Two Preprocessing Methods on the Performance of a Stiction Detection Method Using an Artificial Neural Network
by Bhagya Rajesh Navada, Vemulapalli Sravani and Santhosh Krishnan Venkata
Appl. Syst. Innov. 2024, 7(6), 104; https://doi.org/10.3390/asi7060104 - 29 Oct 2024
Abstract
The detection and mitigation of stiction are crucial for maintaining control system performance. This paper compares two preprocessing methods for detecting stiction in control valves through pattern recognition with an artificial neural network (ANN). The method uses process variables (PVs) and controller outputs (OPs) to accurately identify stiction within control loops. The ANN was comprehensively trained on preprocessed data from a data-driven model. Validation and testing were conducted with real industrial data from the International Stiction Database (ISDB), ensuring a practical assessment framework. This study evaluated the impact of two preprocessing methods on fault detection accuracy, namely the D-value and principal component analysis (PCA) methods. The D-value method achieved a commendable overall accuracy of 76%, with 86% precision in stiction prediction and a 66% success rate in nonstiction scenarios, indicating that the feature reduction performed by PCA degrades stiction detection. The data-driven model was implemented in SIMULINK, and the ANN was trained in MATLAB with the Pattern Recognition Toolbox. These promising results highlight the method’s reliability in diagnosing stiction in industrial settings. Integrating this technique into existing control systems is expected to enhance maintenance protocols, reduce operational downtime, and improve efficiency. Future research should aim to expand this method’s applicability to a wider range of control systems and operational conditions, further solidifying its industrial value. Full article
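Data-driven stiction models of the kind used to generate training data typically make the valve stem stay put until the controller output moves past a deadband. A simplified one-parameter sketch of that behaviour (not the exact model used in the paper):

```python
def sticky_valve(op_signal: list[float], deadband: float = 2.0) -> list[float]:
    """Simplified one-parameter stiction model: the stem does not move until
    the controller output (OP) deviates more than `deadband` from the last
    position at which the valve moved, then jumps to the new demand."""
    pos = op_signal[0]
    stem = [pos]
    for u in op_signal[1:]:
        if abs(u - pos) > deadband:
            pos = u               # valve slips to the demanded position
        stem.append(pos)
    return stem

stem = sticky_valve([50, 50.5, 51, 53, 53.5])   # small OP moves are absorbed
```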
10 pages, 2470 KiB  
Article
Improving Workplace Safety and Health Through a Rapid Ergonomic Risk Assessment Methodology Enhanced by an Artificial Intelligence System
by Adrian Ispășoiu, Ioan Milosan and Camelia Gabor
Appl. Syst. Innov. 2024, 7(6), 103; https://doi.org/10.3390/asi7060103 - 28 Oct 2024
Abstract
The comfort of a worker while performing any activity is extremely important. If that activity extends beyond a person’s capacity to withstand physical and psychological stress, the worker may suffer from both physical and mental ailments. Over time, if the stress persists, these conditions can become chronic diseases and can even be the cause of workplace accidents. In this research, a methodology was developed for the rapid assessment of ergonomic risks and for calculating the level of ergonomic comfort in the workplace. This methodology uses artificial intelligence through a specific algorithm and takes into account a number of factors that, when combined, can have a significant impact on workers. To achieve a more accurate simulation of a work situation or to evaluate an ongoing work situation, and to significantly correlate these parameters, we used logarithmic calculation formulas. To streamline the process, we developed software that performs these calculations, conducts a rapid assessment of ergonomic risks, estimates a comfort level, and proposes possible measures to mitigate the risks and effects on workers. To assist in diagnosing the work situation, we used a neural network with five neurons in the input layer, one hidden layer, and two neurons in the output layer. As a result, most work situations, in any industrial field, can be quickly analyzed and evaluated using this methodology. The use of this new analysis and diagnosis tool, implemented through this new research technology, is beneficial for employers and workers. Moreover, through further developments of this methodology, achieved by increasing the number of relevant input parameters for ergonomics and integrating advanced artificial intelligence systems, we aim to provide high precision in assessing ergonomic risk and calculating the level of ergonomic comfort. Full article
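The network described above has five input neurons, one hidden layer, and two output neurons; the hidden-layer width is not stated. A forward-pass sketch under that assumption (hidden width 4, random weights), purely to illustrate the architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 inputs -> one hidden layer (width 4, an assumption) -> 2 outputs.
# Weights are random placeholders, not the trained model from the paper.
W1, b1 = rng.normal(size=(4, 5)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x: np.ndarray) -> np.ndarray:
    h = np.tanh(W1 @ x + b1)               # hidden layer activation
    z = W2 @ h + b2
    return np.exp(z) / np.exp(z).sum()     # softmax over the two outputs

out = forward(np.array([0.5, 0.2, 0.9, 0.1, 0.7]))   # 5 ergonomic inputs
```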
17 pages, 362 KiB  
Article
Low-Complexity SAOR and Conjugate Gradient Accelerated SAOR Based Signal Detectors for Massive MIMO Systems
by Imran A. Khoso, Mazhar Ali, Muhammad Nauman Irshad, Sushank Chaudhary, Pisit Vanichchanunt and Lunchakorn Wuttisittikulkij
Appl. Syst. Innov. 2024, 7(6), 102; https://doi.org/10.3390/asi7060102 - 24 Oct 2024
Abstract
A major challenge for massive multiple-input multiple-output (MIMO) technology is designing an efficient signal detector. The conventional linear minimum mean square error (MMSE) detector is capable of achieving good performance in large antenna systems but requires computing the matrix inverse, which has very high complexity. To address this problem, several iterative signal detection methods have recently been introduced. Existing iterative detectors perform poorly, especially as the system dimensions increase. This paper proposes two detection schemes aimed at reducing computational complexity in massive MIMO systems. The first method leverages the symmetric accelerated over-relaxation (SAOR) technique, which enhances convergence speed by judiciously selecting the relaxation and acceleration parameters. The SAOR technique offers a significant advantage over conventional accelerated over-relaxation methods due to its symmetric iteration. This symmetry enables the use of the conjugate gradient (CG) acceleration approach. Based on this foundation, we propose a novel accelerated SAOR method named CGA-SAOR, where CG acceleration is applied to further enhance the convergence rate. This combined approach significantly enhances performance compared to the SAOR method. In addition, a detailed analysis of the complexity and numerical results is provided to demonstrate the effectiveness of the proposed algorithms. The results illustrate that our algorithms achieve near-MMSE detection performance while reducing computations by an order of magnitude and significantly outperform recently introduced iterative detectors. Full article
(This article belongs to the Section Information Systems)
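The inverse-free idea underlying such detectors is to solve the MMSE system (HᴴH + σ²I)x = Hᴴy iteratively instead of inverting the matrix. The sketch below uses plain conjugate gradients to show the principle; it is a generic CG solver, not the CGA-SAOR algorithm proposed in the paper.

```python
import numpy as np

def cg_mmse_detect(H: np.ndarray, y: np.ndarray,
                   sigma2: float, iters: int = 8) -> np.ndarray:
    """MMSE detection without an explicit matrix inverse: solve
    (H^H H + sigma2 * I) x = H^H y by conjugate gradients."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    for _ in range(iters):
        Ap = A @ p
        alpha = (r.conj() @ r) / (p.conj() @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new.conj() @ r_new) / (r.conj() @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(1)
H = rng.normal(size=(64, 8))                       # 64 antennas, 8 users
y = H @ rng.normal(size=8) + 0.01 * rng.normal(size=64)
x_hat = cg_mmse_detect(H, y, sigma2=0.01)
```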
15 pages, 2257 KiB  
Article
Deep Learning-Based Flap Detection System Using Thermographic Images in Plastic Surgery
by Răzvan Danciu, Bogdan Andrei Danciu, Luiz-Sorin Vasiu, Adelaida Avino, Claudiu Ioan Filip, Cristian-Sorin Hariga, Laura Răducu and Radu-Cristian Jecan
Appl. Syst. Innov. 2024, 7(6), 101; https://doi.org/10.3390/asi7060101 - 22 Oct 2024
Abstract
In reconstructive surgery, flaps are the cornerstone for repairing tissue defects, but postoperative monitoring of their viability remains a challenge. Among the imaging techniques for monitoring flaps, the thermal camera has demonstrated its value as an efficient indirect method that is easy to use and easy to integrate into clinical practice. It provides a narrow-spectrum color image that lends itself well to processing by artificial neural networks. In the present study, we introduce a novel attention-enhanced recurrent residual U-Net (AER2U-Net) model that is able to accurately segment flaps on thermographic images. This model was trained on a uniquely generated database of thermographic images obtained by monitoring 40 patients who required flap surgery. We compared the proposed AER2U-Net with several state-of-the-art neural networks used for multi-modal segmentation of medical images, all of which are based on the U-Net architecture (U-Net, R2U-Net, AttU-Net). Experimental results demonstrate that our model (AER2U-Net) achieves significantly better performance on our unique dataset compared to these existing U-Net variants, reaching an accuracy of 0.87. This deep learning-based algorithm offers a non-invasive and precise method to monitor flap vitality and detect postoperative complications early, with further refinement needed to enhance its clinical applicability and effectiveness. Full article
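An accuracy score like the 0.87 reported above is, for binary segmentation, the fraction of pixels where the predicted mask matches the ground truth. An illustrative metric sketch (not the paper's evaluation pipeline), using tiny toy masks:

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels where the predicted flap mask agrees with the
    ground-truth mask. Toy metric code for illustration only."""
    return float((pred == truth).mean())

pred  = np.array([[1, 1, 0], [0, 1, 0]])   # predicted binary flap mask
truth = np.array([[1, 0, 0], [0, 1, 1]])   # ground-truth mask
acc = pixel_accuracy(pred, truth)          # 4 of 6 pixels agree
```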