Computers, Volume 11, Issue 4 (April 2022) – 10 articles

Cover Story: Despite numerous attempts to introduce mixed reality in education, there are still only a few real applications in schools and universities. One of the reasons for this is teachers’ resistance to introducing new practices into their consolidated processes. The epidemiological emergency due to SARS-CoV-2 forced schools and universities to rapidly redefine their consolidated processes toward forms of distance education. However, distance education does not allow hands-on practical activities during STEM laboratory lectures. In this study, we propose using mixed reality to support laboratory lectures in STEM distance education. We designed and evaluated a mixed reality application, following a methodology extendable to diverse STEM laboratory lectures.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
7 pages, 229 KiB  
Article
A Lite Romanian BERT: ALR-BERT
by Dragoş Constantin Nicolae, Rohan Kumar Yadav and Dan Tufiş
Computers 2022, 11(4), 57; https://doi.org/10.3390/computers11040057 - 15 Apr 2022
Cited by 4 | Viewed by 2879
Abstract
Large-scale pre-trained language representations and their promising performance in various downstream applications have become an area of interest in the field of natural language processing (NLP). There has been huge interest in further increasing model size in order to outperform the best previously obtained results. However, at some point, increasing a model’s parameters may reach a saturation point due to the limited capacity of GPUs/TPUs. In addition, such models are mostly available in English or as a shared multilingual structure. Hence, in this paper, we propose a lite BERT trained on a large corpus solely in the Romanian language, which we call “A Lite Romanian BERT” (ALR-BERT). Based on comprehensive empirical results, ALR-BERT produces models that scale far better than the original Romanian BERT. Alongside presenting its performance on downstream tasks, we detail the analysis of the training process and its parameters. We also intend to distribute our code and model as open source, together with the downstream tasks.
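To illustrate how a released checkpoint of a masked language model such as ALR-BERT could be queried, here is a minimal sketch using the Hugging Face transformers library; the model identifier is a hypothetical placeholder, since the abstract only states that the authors intend to release the model.

```python
# Minimal sketch: querying a Romanian masked-language model with Hugging Face
# transformers. "example/alr-bert-romanian" is a hypothetical placeholder;
# substitute the checkpoint identifier actually released by the authors.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="example/alr-bert-romanian")

# Build a Romanian test sentence with whatever mask token the tokenizer defines.
mask = fill_mask.tokenizer.mask_token
sentence = f"București este {mask} României."  # "Bucharest is the ... of Romania."

for prediction in fill_mask(sentence, top_k=5):
    print(f"{prediction['token_str']:>15}  score={prediction['score']:.3f}")
```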
20 pages, 5464 KiB  
Article
Osmotic Message-Oriented Middleware for Internet of Things
by Islam Gamal, Hala Abdel-Galil and Atef Ghalwash
Computers 2022, 11(4), 56; https://doi.org/10.3390/computers11040056 - 15 Apr 2022
Cited by 5 | Viewed by 3230
Abstract
IoT is a trending computational concept that turns almost everything in modern life into a smart thing in various innovative and outstanding ways. Smart homes, connected cities, autonomous vehicles, industrial automation, and smart healthcare that allows doctors to examine patients and even perform remote surgery are now possible through smart connected things. Moreover, a recent IoT Analytics report expects the global number of connected IoT things to grow by 9%, to 12.3 billion operating endpoints, and more than 27 billion live IoT connections are expected by 2025. In this paper, we present an osmotic message-oriented middleware, introducing an end-to-end IoT platform that federates the dynamic orchestration of resources across heterogeneous devices belonging to physical and virtual infrastructures (e.g., edge, fog, and cloud layers). The orchestration process follows osmotic computing concepts, represented as a self-adaptive MAPE-K model that adapts itself at runtime through feedback loops from the provisioning engine, which collects each node’s hardware and software performance metrics. Accordingly, the orchestration process uses an optimized dynamic Hungarian algorithm to solve the MELs’ assignment problem based on live runtime provisioning data. The implemented middleware prototype is tested in both simulated and real-life environments to validate the architectural hypothesis of running an efficient, robust, elastic, and cost-efficient end-to-end osmotic IoT ecosystem, which unlocks a new implementation model for numerous IoT domains.
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
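As a hedged illustration of the assignment step mentioned in the abstract (not the authors’ middleware code), the sketch below places microelements (MELs) on edge/fog/cloud nodes by solving a rectangular assignment problem with SciPy’s Hungarian-method solver; the cost values stand in for the runtime provisioning metrics the middleware would collect.

```python
# Minimal sketch: assigning MELs (microelements) to infrastructure nodes with the
# Hungarian method, as a stand-in for the middleware's dynamic assignment step.
# Costs are hypothetical placeholders for runtime provisioning metrics
# (e.g., latency, CPU load, energy); lower cost = better placement.
import numpy as np
from scipy.optimize import linear_sum_assignment

mels = ["mel-sensing", "mel-filtering", "mel-analytics"]
nodes = ["edge-1", "fog-1", "cloud-1", "cloud-2"]

# cost[i, j]: cost of running MEL i on node j (rectangular problems are allowed).
cost = np.array([
    [0.2, 0.5, 0.9, 0.8],
    [0.6, 0.3, 0.7, 0.7],
    [0.9, 0.6, 0.2, 0.3],
])

rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"{mels[i]:>14} -> {nodes[j]}  (cost {cost[i, j]:.1f})")
print("total cost:", cost[rows, cols].sum())
```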
19 pages, 373 KiB  
Article
Application of the Hurricane Optimization Algorithm to Estimate Parameters in Single-Phase Transformers Considering Voltage and Current Measures
by Brandon Cortés-Caicedo, Oscar Danilo Montoya and Andrés Arias-Londoño
Computers 2022, 11(4), 55; https://doi.org/10.3390/computers11040055 - 11 Apr 2022
Cited by 6 | Viewed by 2462
Abstract
In this research paper, a combinatorial optimization approach is proposed for parameter estimation in single-phase transformers considering voltage and current measurements at the transformer terminals. This problem is represented through a nonlinear programming (NLP) model, whose objective is to minimize the root mean square error between the measured voltage and current values and the values calculated from the equivalent model of the single-phase transformer. These voltage and current values can be determined by applying Kirchhoff’s laws to the T model of the transformer, whose parameters, the series resistances and reactances as well as the magnetization resistance and reactance, i.e., R1, R2, X1, X2, Rc, and Xm, are provided by the Hurricane Optimization Algorithm (HOA). Numerical results for the 4 kVA, 10 kVA, and 15 kVA single-phase test transformers demonstrate the applicability of the proposed method, since it reduces the average error between the measured and calculated electrical variables by 1000% compared with the methods reported in the specialized literature. This ensures that the parameters estimated by the proposed methodology, in each test transformer, are close to the real values, with an accuracy error of less than 6%. Additionally, the computation times required by the algorithm to find the optimal solution are less than 1 s, which makes the proposed HOA robust, reliable, and efficient. All simulations were performed in the MATLAB programming environment.
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
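The following sketch shows the kind of objective evaluation the abstract describes: for candidate parameters (R1, X1, R2, X2, Rc, Xm), solve the T-equivalent circuit with Kirchhoff’s laws and score the candidate by the root mean square error against terminal measurements. The source voltage, load impedance, measurement values, and the use of SciPy’s differential evolution in place of the HOA are assumptions for illustration only.

```python
# Minimal sketch: RMSE-based parameter estimation for a single-phase transformer
# T model. SciPy's differential evolution stands in for the Hurricane Optimization
# Algorithm; source voltage, load impedance, and "measurements" are hypothetical.
import numpy as np
from scipy.optimize import differential_evolution

V1 = 240.0 + 0j          # primary source voltage (V), assumed
Z_load = 5.0 + 1.5j      # load impedance referred to the primary (ohm), assumed

def t_model_response(p):
    """Return (|V2|, |I1|, |I2|) of the T model for parameters p."""
    R1, X1, R2, X2, Rc, Xm = p
    Z1 = R1 + 1j * X1                      # primary series branch
    Z2 = R2 + 1j * X2                      # secondary series branch (referred)
    Zm = (Rc * 1j * Xm) / (Rc + 1j * Xm)   # magnetizing branch (Rc parallel to jXm)
    Zin = Z1 + Zm * (Z2 + Z_load) / (Zm + Z2 + Z_load)
    I1 = V1 / Zin                          # primary current
    E = V1 - I1 * Z1                       # voltage across the magnetizing branch
    I2 = E / (Z2 + Z_load)                 # secondary current (referred)
    V2 = I2 * Z_load                       # secondary terminal voltage (referred)
    return np.array([abs(V2), abs(I1), abs(I2)])

measured = np.array([228.0, 9.5, 9.3])     # hypothetical terminal measurements

def rmse(p):
    return np.sqrt(np.mean((t_model_response(p) - measured) ** 2))

bounds = [(1e-3, 2), (1e-3, 2), (1e-3, 2), (1e-3, 2), (100, 5000), (50, 2000)]
result = differential_evolution(rmse, bounds, seed=1, tol=1e-8)
print("estimated [R1, X1, R2, X2, Rc, Xm]:", np.round(result.x, 4))
print("RMSE:", result.fun)
```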
15 pages, 475 KiB  
Article
Isolation Forests and Deep Autoencoders for Industrial Screw Tightening Anomaly Detection
by Diogo Ribeiro, Luís Miguel Matos, Guilherme Moreira, André Pilastri and Paulo Cortez
Computers 2022, 11(4), 54; https://doi.org/10.3390/computers11040054 - 8 Apr 2022
Cited by 13 | Viewed by 6041
Abstract
Within the context of Industry 4.0, quality assessment procedures using data-driven techniques are becoming more critical due to the generation of massive amounts of production data. In this paper, we address the detection of abnormal screw tightening processes, which is a key industrial task. Since labeling is costly, requiring manual effort, we focus on unsupervised detection approaches. In particular, we assume a computationally light, low-dimensional problem formulation based on angle–torque pairs. Our work focuses on two unsupervised machine learning (ML) algorithms: the isolation forest (IForest) and a deep learning autoencoder (AE). Several computational experiments were carried out, assuming distinct datasets and a realistic rolling window evaluation procedure. First, we compared the two ML algorithms with two other methods, a local outlier factor method and a supervised random forest, on older data related to two production days collected in November 2020. Since competitive results were obtained, in a second stage we further compared the AE and IForest methods on a more recent and larger dataset (from February to March 2021, totaling 26.9 million observations and related to three distinct assembled products). Both anomaly detection methods obtained an excellent quality class discrimination (higher than 90%) under a realistic rolling window with several training and testing updates. Turning to the computational effort, the AE is much lighter than the IForest for training (around 2.7 times faster) and inference (requiring 3.0 times less computation). This AE property is valuable within this industrial domain, since it tends to generate big data. Finally, using the anomaly detection estimates, we developed an interactive visualization tool that provides explainable artificial intelligence (XAI) knowledge for the human operators, helping them to better identify the angle–torque regions associated with screw tightening failures.
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
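A minimal, hedged sketch of the IForest variant named above: an isolation forest fitted on two-dimensional angle–torque observations that flags pairs lying far from the expected tightening curve. The synthetic data and contamination rate are placeholders, not the paper’s industrial dataset.

```python
# Minimal sketch: unsupervised anomaly detection on angle–torque pairs with an
# isolation forest. Data and contamination rate are synthetic placeholders,
# not the paper's industrial dataset.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Normal tightening: torque grows roughly linearly with angle, plus noise.
angle = rng.uniform(0, 360, size=2000)
torque = 0.02 * angle + rng.normal(0, 0.5, size=2000)
normal = np.column_stack([angle, torque])

# A few abnormal cycles: torque far off the expected angle–torque curve.
anomalies = np.column_stack([rng.uniform(0, 360, 20), rng.uniform(15, 25, 20)])
X = np.vstack([normal, anomalies])

iforest = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
labels = iforest.fit_predict(X)          # -1 = anomaly, 1 = normal
scores = -iforest.score_samples(X)       # higher = more anomalous

print("flagged anomalies:", int((labels == -1).sum()))
```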
22 pages, 422 KiB  
Article
Optimal Allocation and Sizing of PV Generation Units in Distribution Networks via the Generalized Normal Distribution Optimization Approach
by Oscar Danilo Montoya, Luis Fernando Grisales-Noreña and Carlos Andres Ramos-Paja
Computers 2022, 11(4), 53; https://doi.org/10.3390/computers11040053 - 31 Mar 2022
Cited by 9 | Viewed by 2912
Abstract
The problem of optimal siting and dimensioning of photovoltaic (PV) generators in medium-voltage distribution networks is addressed in this research from the perspective of combinatorial optimization. The exact mixed-integer nonlinear programming (MINLP) model is solved using a master–slave (MS) optimization approach. In the master stage, generalized normal distribution optimization (GNDO) with a discrete–continuous codification is used to represent the locations and sizes of the PV generators. In the slave stage, a generalization of the backward/forward power flow method, known as the successive approximation power flow method, is adopted. Numerical simulations on the IEEE 33-bus and 69-bus systems demonstrated that the GNDO approach is the most efficient method for solving the exact MINLP model, as it obtained better results than the genetic algorithm, the vortex-search algorithm, the Newton-metaheuristic optimizer, and the exact solution obtained with the General Algebraic Modeling System (GAMS) software using the BONMIN solver. Simulations showed that, on average, the proposed MS optimizer reduced the total annual operating costs by approximately 27% for both test feeders when compared with the reference case. In addition, variations in renewable generation availability showed that, from an availability of 30% onward, positive reductions with respect to the reference case were obtained.
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
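A hedged sketch of the slave-stage method named above: the successive approximation power flow iterates V_d ← −Y_dd⁻¹(conj(S_d/V_d) + Y_ds·V_s) until the demand-bus voltages converge. The 3-bus feeder data (per-unit impedances and loads) are made-up values for illustration only.

```python
# Minimal sketch: successive approximation power flow on a toy 3-bus radial feeder
# (bus 0 = slack). Line impedances and loads are hypothetical per-unit values.
import numpy as np

# Branches: (from, to, series impedance in p.u.)
branches = [(0, 1, 0.01 + 0.03j), (1, 2, 0.02 + 0.05j)]
n = 3
Y = np.zeros((n, n), dtype=complex)
for i, j, z in branches:
    y = 1 / z
    Y[i, i] += y; Y[j, j] += y
    Y[i, j] -= y; Y[j, i] -= y

S_d = np.array([0.5 + 0.2j, 0.3 + 0.1j])   # demand at buses 1, 2 (p.u.)
V_s = np.array([1.0 + 0j])                 # slack voltage
Y_dd_inv = np.linalg.inv(Y[1:, 1:])
Y_ds = Y[1:, :1]

V_d = np.ones(2, dtype=complex)            # flat start
for _ in range(100):
    V_new = -Y_dd_inv @ (np.conj(S_d / V_d) + (Y_ds @ V_s))
    if np.max(np.abs(V_new - V_d)) < 1e-10:
        V_d = V_new
        break
    V_d = V_new

print("demand-bus voltage magnitudes (p.u.):", np.round(np.abs(V_d), 5))
```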
18 pages, 674 KiB  
Article
Spatial Impressions Monitoring during COVID-19 Pandemic Using Machine Learning Techniques
by Talal H. Noor, Abdulqader Almars, Ibrahim Gad, El-Sayed Atlam and Mahmoud Elmezain
Computers 2022, 11(4), 52; https://doi.org/10.3390/computers11040052 - 29 Mar 2022
Cited by 9 | Viewed by 2658
Abstract
During the COVID-19 epidemic, Twitter has become a vital platform for people to express their impressions and feelings about the pandemic. There is an unavoidable need to examine various patterns on social media platforms in order to reduce public anxiety and misconceptions. Based on this study, various public service messages can be disseminated and necessary steps can be taken to manage the scourge. A lot of work has already been conducted in several languages, but little has been done on Arabic tweets. The primary goal of this study is to analyze Arabic tweets about COVID-19 and extract people’s impressions of different locations. This analysis provides insights into public mood variation on Twitter, which could help governments identify the effect of COVID-19 over space and make decisions based on that understanding. To achieve this, two strategies are used to analyze people’s impressions from Twitter: a machine learning approach and a deep learning approach. To conduct this study, we scraped 12,000 Arabic tweets, which were manually labeled as expressing positive, neutral, or negative feelings. Focusing on Saudi Arabia, the collected dataset consists of 2174 positive tweets and 2879 negative tweets. First, TF-IDF feature vectors are used for feature representation. Then, several models are implemented to identify people’s impressions over time using Twitter geotag information. Finally, Geographic Information Systems (GIS) are used to map the spatial distribution of people’s emotions and impressions. Experimental results show that SVC outperforms the other methods in terms of performance and accuracy.
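A hedged sketch of the classical pipeline the abstract names (TF-IDF features plus a support vector classifier); the tiny toy corpus below is a placeholder for the manually labeled Arabic dataset used in the study.

```python
# Minimal sketch: TF-IDF + SVC sentiment classification, as a stand-in for the
# paper's machine learning pipeline. The toy corpus replaces the 12,000 manually
# labeled Arabic tweets used in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

tweets = [
    "الوضع يتحسن والحمد لله",        # "the situation is improving"   -> positive
    "الخدمات ممتازة في مدينتي",       # "services in my city are great" -> positive
    "الوضع سيء جدا هذه الأيام",       # "the situation is very bad"    -> negative
    "أشعر بالقلق من انتشار الوباء",   # "worried about the outbreak"   -> negative
]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
model.fit(tweets, labels)

print(model.predict(["الخدمات سيئة وأشعر بالقلق"]))  # hypothetical new tweet
```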
15 pages, 706 KiB  
Review
A Systematic Survey on Cybersickness in Virtual Environments
by Ananth N. Ramaseri Chandra, Fatima El Jamiy and Hassan Reza
Computers 2022, 11(4), 51; https://doi.org/10.3390/computers11040051 - 29 Mar 2022
Cited by 53 | Viewed by 6980
Abstract
Virtual reality (VR) is an emerging technology with a broad range of applications in training, entertainment, and business. To maximize the potential of virtual reality as a medium, the unwelcome feeling of cybersickness needs to be minimized. Cybersickness is a type of simulation sickness experienced in virtual reality and a significant challenge for the usability of virtual reality systems. Even with advancements in virtual reality, usability concerns are barriers to widespread acceptance. Several factors (hardware, software, and human) play a part in a pleasant virtual reality experience. In this paper, we review the potential factors that cause sickness and reduce the usability of virtual reality systems. The reviewed scientific articles are mostly documents indexed in digital libraries. Drawing on existing research, we review best practices from a developer’s perspective and some of the safety measures a user must follow while using virtual reality systems. Even after following some of these guidelines and best practices, virtual reality environments do not guarantee a pleasant experience for users. The limited research on requirements specification, design, and development of virtual reality environments for maximum usability and adaptability was the main motive for this work.
24 pages, 5802 KiB  
Article
Design of a Mixed Reality Application for STEM Distance Education Laboratories
by Michele Gattullo, Enricoandrea Laviola, Antonio Boccaccio, Alessandro Evangelista, Michele Fiorentino, Vito Modesto Manghisi and Antonio Emmanuele Uva
Computers 2022, 11(4), 50; https://doi.org/10.3390/computers11040050 - 24 Mar 2022
Cited by 16 | Viewed by 3873
Abstract
In this work, we propose a Mixed Reality (MR) application to support laboratory lectures in STEM distance education. It was designed following a methodology extendable to diverse STEM laboratory lectures. We formulated this methodology considering the main issues found in the literature that limit MR’s use in education. Thus, the main design features of the resulting MR application are students’ and teachers’ involvement, the use of non-distracting graphics, the integration of traditional didactic material, and easy scalability to new learning activities. In this work, we present how we applied the design methodology and used the resulting framework in the case study of an engineering course, supporting students in understanding drawings of complex machines without being physically present in the laboratory. We finally evaluated the usability and cognitive load of the implemented MR application through two user studies, involving 48 and 36 students, respectively. The results reveal that the usability of our application is “excellent” (mean SUS score of 84.7) and is not influenced by familiarity with Mixed Reality and distance education tools. Furthermore, the cognitive load is medium (mean NASA TLX score below 29) for all four learning tasks that students can accomplish through the MR application.
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education)
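For readers unfamiliar with the usability score cited above, the sketch below computes a standard System Usability Scale (SUS) score from one participant’s ten 1–5 responses (odd items positively worded, even items negatively worded); the example responses are made up and are not data from the study.

```python
# Minimal sketch: standard System Usability Scale (SUS) scoring.
# Odd-numbered items are positively worded (contribute response - 1),
# even-numbered items are negatively worded (contribute 5 - response);
# the summed contributions are scaled by 2.5 to give a 0-100 score.
def sus_score(responses):
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # i = 0, 2, ... are items 1, 3, ...
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Hypothetical participant answers (not data from the study).
print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 2]))  # -> 87.5
```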
18 pages, 2068 KiB  
Article
Attention Classification Based on Biosignals during Standard Cognitive Tasks for Occupational Domains
by Patricia Gamboa, Rui Varandas, João Rodrigues, Cátia Cepeda, Cláudia Quaresma and Hugo Gamboa
Computers 2022, 11(4), 49; https://doi.org/10.3390/computers11040049 - 24 Mar 2022
Cited by 6 | Viewed by 3697
Abstract
Occupational disorders considerably impact workers’ quality of life and organizational productivity, and even affect mortality worldwide. Such health issues are related to mental health and ergonomics risk factors. In particular, mental health may be affected by cognitive strain caused by unexpected interruptions and other attention-compromising factors. The assessment of risk factors associated with cognitive strain in office environments, namely those related to attention states, still suffers from a lack of scientifically validated tools. In this work, we aim to develop a series of classification models that can classify attention during pre-defined cognitive tasks based on acquired biosignals, in order to create a ground truth of attention. Biosignals such as electrocardiography, electroencephalography, and functional near-infrared spectroscopy were acquired from eight subjects during standard attention-inducing cognitive tasks. Individually tuned machine learning models trained with those biosignals allowed us to successfully detect attention at the individual level, with results in the range of 70–80%. The electroencephalogram and electrocardiogram proved to be the most appropriate sensors in this context, and the combination of multiple sensors demonstrated the importance of using multiple sources. These models are relevant for the development of attention identification tools by providing a ground truth to determine which human–computer interaction variables have strong associations with attention.
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2021)
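As a hedged illustration of the per-subject modeling and sensor-combination comparison described above (not the study’s actual features, data, or model choices), the sketch below trains one classifier per subject on a single-sensor feature set versus concatenated multi-sensor features and compares cross-validated accuracy.

```python
# Minimal sketch: per-subject attention classification, comparing a single-sensor
# feature set against combined sensors. Features and labels are random placeholders
# for the ECG/EEG/fNIRS features and attention ground truth used in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subjects, n_windows = 8, 120

for subject in range(n_subjects):
    eeg = rng.normal(size=(n_windows, 16))      # placeholder EEG features
    ecg = rng.normal(size=(n_windows, 4))       # placeholder ECG features
    y = rng.integers(0, 2, size=n_windows)      # placeholder attention labels

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc_eeg = cross_val_score(clf, eeg, y, cv=5).mean()
    acc_all = cross_val_score(clf, np.hstack([eeg, ecg]), y, cv=5).mean()
    print(f"subject {subject}: EEG only = {acc_eeg:.2f}, EEG+ECG = {acc_all:.2f}")
```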
19 pages, 735 KiB  
Article
Multi-Period Optimal Reactive Power Dispatch Using a Mean-Variance Mapping Optimization Algorithm
by Daniel C. Londoño Tamayo, Walter M. Villa-Acevedo and Jesús M. López-Lezama
Computers 2022, 11(4), 48; https://doi.org/10.3390/computers11040048 - 22 Mar 2022
Cited by 3 | Viewed by 2563
Abstract
Optimal reactive power dispatch plays a key role in the safe operation of electric power systems. It consists of the optimal management of the reactive power sources within the system, usually with the aim of reducing system power losses. This paper presents both a new model and a solution approach for the multi-period version of the optimal reactive power dispatch. The main feature of a multi-period approach lies in the incorporation of inter-temporal constraints that limit the number of switching operations of transformer taps and capacitor banks in order to preserve their lifetime and avoid maintenance cost overruns. The main contribution of the paper is the constraint-handling approach, which consists of multiplying sub-functions that act as penalties and allow the feasibility and optimality of a given candidate solution to be considered simultaneously. The multi-period optimal reactive power dispatch is an inherently nonconvex and nonlinear problem; therefore, it was solved using the metaheuristic mean-variance mapping optimization algorithm. The IEEE 30-bus and IEEE 57-bus test systems were used to validate the model and solution approach. The results show that the proposed model guarantees adequate reactive power management that meets the objective of minimizing power losses while keeping transformer tap and capacitor bank movements within limits that preserve their useful life.
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
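A hedged sketch of one plausible reading of the multiplicative constraint-handling idea named above (not the authors’ exact formulation): each constraint contributes a sub-function equal to 1 when satisfied and greater than 1 in proportion to its violation, and the fitness is the power-loss objective multiplied by all sub-functions, so infeasible candidates are penalized without being discarded.

```python
# Minimal sketch: fitness evaluation with multiplicative penalty sub-functions,
# a plausible reading of the constraint-handling scheme (not the paper's exact
# formulation). Objective, limits, and the candidate solution are hypothetical.
def penalty(value, lower, upper, weight=10.0):
    """Sub-function: 1.0 if value is within [lower, upper], >1 otherwise."""
    violation = max(lower - value, value - upper, 0.0)
    return 1.0 + weight * violation

def fitness(candidate):
    losses = candidate["losses_mw"]          # objective: total power losses
    subfunctions = [
        penalty(v, 0.95, 1.05)               # bus voltage magnitude limits (p.u.)
        for v in candidate["voltages"]
    ]
    subfunctions.append(
        penalty(candidate["tap_switchings"], 0, 4)   # inter-temporal tap limit
    )
    product = 1.0
    for s in subfunctions:
        product *= s
    return losses * product                  # feasible -> fitness == losses

# Hypothetical candidate: one voltage slightly below its limit, taps within limit.
candidate = {"losses_mw": 5.2, "voltages": [1.01, 0.94, 1.03], "tap_switchings": 3}
print(fitness(candidate))   # 5.2 * (1 + 10*0.01) = 5.72
```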