Computers, Volume 11, Issue 6 (June 2022) – 17 articles

Cover Story: Simulators allow the easy setup and testing of different scenarios that would otherwise be financially costly and difficult to implement on a technical level in a real testbed. Thus, developing specific use cases in a simulation environment is a suitable solution, especially with edge computing (EC) architectures, where the number of devices can be considerable. Can we trust the simulations, however? How accurate are these tools? To answer these questions, we implemented the EdgeBench benchmark both in the real world and in FogComputingSim. We compared several execution metrics, and overall, the simulated environment successfully reproduced the real-world results, thus allowing us to state that we can trust EC simulations in first approaches to problems. However, they do not fully replace a real-world implementation.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the “PDF Full-text” link and use the free Adobe Reader to open it.
15 pages, 4055 KiB  
Article
Computer Vision-Based Inspection System for Worker Training in Build and Construction Industry
by M. Fikret Ercan and Ricky Ben Wang
Computers 2022, 11(6), 100; https://doi.org/10.3390/computers11060100 - 20 Jun 2022
Cited by 2 | Viewed by 2848
Abstract
Recently, computer vision has been applied successfully in various fields of engineering, ranging from manufacturing to autonomous cars. A key driver of this development is the achievements of the latest object detection and classification architectures. In this study, we utilized computer vision and the latest object detection techniques for an automated assessment system. It was developed to reduce the person-hours involved in worker training assessment. In our local building and construction industry, workers are required to be certified in their technical skills in order to qualify to work in this industry. For the qualification, they are required to go through a training and assessment process. During the assessment, trainees implement an assembly, such as electrical wiring and wall-trunking, by referring to the technical drawings provided. Trainees’ work quality and correctness are then examined manually and visually by a team of experts, which is a time-consuming process. The system described in this paper aims to automate the assessment process to reduce the significant person-hours required. We employed computer vision techniques to measure the dimensions, orientation, and position of the wall assembly produced, hence speeding up the assessment process. A number of key parts and components are analyzed, and their discrepancies from the technical drawing are reported as the assessment result. The performance of the developed system depends on the accurate detection of the wall assembly objects and their corner points. Corner points are used as reference points for the measurements, considering the shape of objects in this particular application. However, conventional corner detection algorithms are founded upon pixel-based operations and return many redundant or false corner points. In this study, we employed a hybrid approach using deep learning and conventional corner detection algorithms: deep learning detects the locations of objects as well as their reference corner points in the image, and we then search within these locations for potential corner points returned by the conventional corner detector. This approach resulted in highly accurate detection of reference points for the measurement and evaluation of the assembly. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
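The hybrid corner-detection idea in this abstract — a deep learning detector proposes regions, and only the conventional-detector corners falling inside them are kept — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `boxes` stands in for the output of a hypothetical trained object detector, and Shi–Tomasi is one possible choice of conventional corner detector.

```python
import cv2
import numpy as np

def refine_corners(gray, boxes, max_corners=200):
    """Keep only conventional-detector corners that fall inside the
    regions proposed by the deep learning object detector."""
    # Conventional corner detection (Shi-Tomasi) over the whole image.
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5)
    pts = pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))
    kept = []
    for (x0, y0, x1, y1) in boxes:  # hypothetical detector output
        inside = pts[(pts[:, 0] >= x0) & (pts[:, 0] <= x1) &
                     (pts[:, 1] >= y0) & (pts[:, 1] <= y1)]
        kept.append(inside)
    return kept
```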

21 pages, 1666 KiB  
Article
Building DeFi Applications Using Cross-Blockchain Interaction on the Wish Swap Platform
by Rita Tsepeleva and Vladimir Korkhov
Computers 2022, 11(6), 99; https://doi.org/10.3390/computers11060099 - 16 Jun 2022
Cited by 9 | Viewed by 3955
Abstract
Blockchain is a developing technology that can provide users with such advantages as decentralization, data security, and transparency of transactions. Blockchain has many applications; one of them is the decentralized finance (DeFi) industry. DeFi is a huge aggregator of various financial blockchain protocols. At the moment, the total value locked in these protocols reaches USD 82 billion. Every day, more and more new users come to DeFi with their investments. The concept of decentralized finance involves the creation of a single ecosystem of many blockchains that interact with each other. The problem of combining and interconnecting blockchains therefore becomes crucial to enabling DeFi. In this paper, we look at the essence of the DeFi industry and the possibilities of overcoming the problem of cross-blockchain interaction, and we present our approach to solving this problem with the Wish Swap platform, which, in particular, provides improved fault tolerance for cross-chain interaction by using multiple backend nodes and multisignatures. We analyze the results of the proposed solution and demonstrate how a prototype pre-sale application can be created based on the proposed concept. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
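The fault-tolerance mechanism the abstract mentions — multiple backend nodes plus multisignatures — can be pictured as follows: each backend node independently observes a lock/burn event on the source chain, and the transfer on the destination chain is released only once a signature quorum is reached. A minimal sketch; all names (`BridgeCoordinator`, `QUORUM`, etc.) are illustrative, not Wish Swap's actual API.

```python
from dataclasses import dataclass, field

QUORUM = 2  # signatures required out of N backend nodes (hypothetical policy)

@dataclass
class SwapEvent:
    tx_hash: str     # lock/burn transaction observed on the source chain
    recipient: str
    amount: int
    signatures: set = field(default_factory=set)

class BridgeCoordinator:
    """Collects independent confirmations from backend nodes; releases
    tokens on the destination chain only after a multisig quorum."""
    def __init__(self):
        self.pending = {}

    def confirm(self, node_id: str, event: SwapEvent) -> bool:
        slot = self.pending.setdefault(event.tx_hash, event)
        slot.signatures.add(node_id)
        if len(slot.signatures) >= QUORUM:
            self.release(slot)
            del self.pending[event.tx_hash]
            return True
        return False

    def release(self, event: SwapEvent):
        # Placeholder: a real bridge would submit a multisig-authorized
        # mint/transfer transaction to the destination chain here.
        print(f"release {event.amount} to {event.recipient}")
```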

21 pages, 511 KiB  
Article
A Light Signaling Approach to Node Grouping for Massive MIMO IoT Networks
by Emma Fitzgerald, Michał Pióro, Harsh Tataria, Gilles Callebaut, Sara Gunnarsson and Liesbet Van der Perre
Computers 2022, 11(6), 98; https://doi.org/10.3390/computers11060098 - 16 Jun 2022
Viewed by 1956
Abstract
Massive MIMO is one of the leading technologies for connecting very large numbers of energy-constrained nodes, as it offers both extensive spatial multiplexing and large array gain. A challenge resides in partitioning the many nodes into groups that can communicate simultaneously such that the mutual interference is minimized. Here we propose node partitioning strategies that do not require full channel state information, but rather are based on nodes’ respective directional channel properties. In our considered scenarios, these typically have a time constant that is far larger than the coherence time of the channel. We developed both an optimal and an approximation algorithm to partition users based on directional channel properties, and evaluated them numerically. Our results show that both algorithms, despite using only these directional channel properties, achieve similar performance in terms of the minimum signal-to-interference-plus-noise ratio for any user, compared with a reference method using full channel knowledge. In particular, we demonstrate that grouping nodes with related directional properties is to be avoided. We hence realize a simple partitioning method, requiring minimal information to be collected from the nodes, and in which this information typically remains stable over the long term, thus promoting the system’s autonomy and energy efficiency. Full article
(This article belongs to the Special Issue Edge Computing for the IoT)
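One way to picture the grouping problem is a greedy heuristic over the nodes' dominant channel directions: assign each node to the group whose current members are angularly farthest away, so that nodes with related directional properties (which the paper shows should not be co-scheduled) land in different groups. This is an illustrative sketch, not the optimal or approximation algorithm from the paper.

```python
def group_by_direction(angles_deg, n_groups):
    """Greedy heuristic: place each node in the group whose members are
    angularly farthest away, so co-scheduled nodes have dissimilar
    directional channel properties."""
    groups = [[] for _ in range(n_groups)]
    for node, theta in enumerate(angles_deg):
        def worst_sep(g):
            if not groups[g]:
                return 180.0  # empty group: maximal separation
            seps = [min(abs(theta - angles_deg[m]) % 360,
                        360 - abs(theta - angles_deg[m]) % 360)
                    for m in groups[g]]
            return min(seps)
        best = max(range(n_groups), key=worst_sep)
        groups[best].append(node)
    return groups

# Example: 8 nodes with dominant angles of arrival, split into 2 groups.
print(group_by_direction([0, 5, 90, 95, 180, 185, 270, 275], 2))
```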

19 pages, 2155 KiB  
Article
Assisting Educational Analytics with AutoML Functionalities
by Spyridon Garmpis, Manolis Maragoudakis and Aristogiannis Garmpis
Computers 2022, 11(6), 97; https://doi.org/10.3390/computers11060097 - 15 Jun 2022
Cited by 2 | Viewed by 2803
Abstract
The plethora of changes that have taken place in higher education policy formulations in recent years in Greece has led to unification, the abolition of departments or technological educational institutions (TEI), and mergers at universities. As a result, many students are required to complete their studies in departments of the abolished TEI. Dropout or a delay in graduation is a significant problem among students newly joined to the university, in addition to issues with the provision of studies. There are various reasons for this, with student performance during studies being one of the major contributing factors. This study was aimed at predicting the time required for weak students to pass their courses so as to allow the university to develop strategic programs that will help them improve performance and graduate on time. This paper presents various components of educational data mining incorporating a new state-of-the-art strategy, called AutoML, which is used to find the best models and parameters and is capable of predicting the length of time required for students to pass their courses using their past course performance and academic information. A dataset of 23,687 “Computer Networking” module students was used to train and evaluate a classification model developed in the KNIME Analytics Platform (open source). The accuracy of the model was measured using well-known evaluation criteria, such as precision, recall, and F-measure. The model was applied to data related to three basic courses and correctly predicted approximately 92% of students’ performance and, specifically, students who are likely to drop out or experience a delay before graduating. Full article
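The AutoML strategy described — searching over model families and hyper-parameters for the best predictor — was realized by the authors in the KNIME Analytics Platform; as a rough programmatic analogue, a sketch along these lines (using scikit-learn, with hypothetical features `X` and labels `y`) conveys the idea.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def automl_search(X, y):
    """Tiny AutoML-style search: try several model families and
    hyper-parameters, return the best cross-validated pipeline.
    X = past course performance / academic info, y = pass-time label
    (both hypothetical stand-ins for the paper's dataset)."""
    candidates = [
        (LogisticRegression(max_iter=1000), {"clf__C": [0.1, 1.0, 10.0]}),
        (RandomForestClassifier(), {"clf__n_estimators": [100, 300]}),
    ]
    best = None
    for model, grid in candidates:
        pipe = Pipeline([("scale", StandardScaler()), ("clf", model)])
        search = GridSearchCV(pipe, grid, scoring="f1_macro", cv=5)
        search.fit(X, y)
        if best is None or search.best_score_ > best.best_score_:
            best = search
    return best
```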

29 pages, 11989 KiB  
Article
Accidental Choices—How JVM Choice and Associated Build Tools Affect Interpreter Performance
by Jonathan Lambert, Rosemary Monahan and Kevin Casey
Computers 2022, 11(6), 96; https://doi.org/10.3390/computers11060096 - 14 Jun 2022
Cited by 1 | Viewed by 3114
Abstract
Considering the large number of optimisation techniques that have been integrated into the design of the Java Virtual Machine (JVM) over the last three decades, the Java interpreter persists as a significant bottleneck in the performance of bytecode execution. This paper examines Java Runtime Environment (JRE) performance in the interpreted execution of Java bytecode and the effect that modern compiler selection and integration within the JRE build toolchain has on that performance. We undertook this evaluation relative to a contemporary benchmark suite of application workloads, the Renaissance Benchmark Suite. Our results show that the version of the GNU GCC compiler used within the JRE build toolchain has a statistically significant effect on runtime performance. More importantly, not all OpenJDK releases and JRE JVM interpreters are equal. Our results show that OpenJDK JVM interpreter performance is associated with the benchmark workload. In addition, in some cases, rolling back to an earlier OpenJDK version and using a more recent GNU GCC compiler within the build toolchain of the JRE can have a significant positive impact on JRE performance. Full article
(This article belongs to the Special Issue Code Generation, Analysis and Quality Testing)
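A measurement setup in the spirit of the paper can be sketched as a small harness that times interpreted-only execution (`-Xint`, a standard HotSpot flag) of a Renaissance workload under JRE builds produced with different GCC versions. The JRE paths, jar name, and workload choice are illustrative assumptions, not the paper's exact configuration.

```python
import subprocess, time, statistics

# Hypothetical paths to two JRE builds compiled with different GCC versions.
JRES = {
    "jdk11-gcc7": "/opt/jdk11-gcc7/bin/java",
    "jdk11-gcc10": "/opt/jdk11-gcc10/bin/java",
}

def time_interpreter(java, workload="scrabble", runs=5):
    """Median wall-clock time of interpreted-only execution of one
    Renaissance workload (jar path assumed to be in the working dir)."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([java, "-Xint", "-jar", "renaissance.jar", workload],
                       check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

for name, java in JRES.items():
    print(name, round(time_interpreter(java), 2), "s")
```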

17 pages, 1090 KiB  
Review
Blockchain Technology toward Creating a Smart Local Food Supply Chain
by Jovanka Damoska Sekuloska and Aleksandar Erceg
Computers 2022, 11(6), 95; https://doi.org/10.3390/computers11060095 - 13 Jun 2022
Cited by 25 | Viewed by 6229
Abstract
The primary purpose of supply chains is to ensure and secure the availability and smooth flow of the resources necessary for efficient production processes and consumption. Supply chain activities have been experiencing significant changes due to the importance and creation of the integrated process. Blockchain is viewed as an innovative tool for transforming the current business model of supply chain management (SCM); on the other hand, SCM provides an applicative value for blockchain technology. The research focuses on examining the influence of blockchain technology on increasing the efficiency, transparency, auditability, traceability, and security of the food supply chain (FSC), with particular attention to the local food supply chain (LFSC). The main objective of the research is to suggest the implementation of blockchain technology in the local food supply chain as a niche of the food industry. The result of the research is the identification of a three-layer model of a smart local food supply chain. The model provides efficient and more transparent tracking across the local food supply chain, improving food accessibility, traceability, and safety. Full article
(This article belongs to the Special Issue Blockchain-Based Systems)

19 pages, 16212 KiB  
Article
Non-Zero Crossing Point Detection in a Distorted Sinusoidal Signal Using Logistic Regression Model
by Venkataramana Veeramsetty, Srividya Srinivasula and Surender Reddy Salkuti
Computers 2022, 11(6), 94; https://doi.org/10.3390/computers11060094 - 11 Jun 2022
Cited by 2 | Viewed by 2335
Abstract
Non-zero crossing point detection in a sinusoidal signal is essential in various power system and power electronics applications, such as power system protection and power converter controller design. In this paper, 96 datasets are created from a distorted sinusoidal signal using MATLAB simulation. The distorted sinusoidal signals are generated in MATLAB with various noise and harmonic levels. A logistic regression model is used to predict the non-zero crossing point in a distorted signal based on four input features: slope, intercept, correlation, and RMSE. The model is trained and tested in the Google Colab environment. As per the simulation results, the logistic regression model is able to predict all non-zero crossing points in a distorted signal. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
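A minimal sketch of the described classifier, assuming windowed feature vectors of slope, intercept, correlation, and RMSE; synthetic stand-in data replaces the paper's 96 MATLAB-generated datasets, and the toy labels are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 96 windowed feature vectors of
# [slope, intercept, correlation, RMSE].
rng = np.random.default_rng(0)
X = rng.normal(size=(96, 4))
# Toy labels: a fitted line slope*t + intercept crosses zero at positive t
# when slope and intercept have opposite signs (illustrative proxy only).
y = (X[:, 0] * X[:, 1] < 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```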

17 pages, 2527 KiB  
Article
Automated Detection of Left Bundle Branch Block from ECG Signal Utilizing the Maximal Overlap Discrete Wavelet Transform with ANFIS
by Bassam Al-Naami, Hossam Fraihat, Hamza Abu Owida, Khalid Al-Hamad, Roberto De Fazio and Paolo Visconti
Computers 2022, 11(6), 93; https://doi.org/10.3390/computers11060093 - 10 Jun 2022
Cited by 17 | Viewed by 3136
Abstract
Left bundle branch block (LBBB) is a common disorder in the heart’s electrical conduction system that leads to uncoordinated contraction of the ventricles. Complete LBBB is usually associated with underlying heart failure and other cardiac diseases; therefore, early automated detection is vital. This work aimed to detect LBBB through the QRS electrocardiogram (ECG) complex segments taken from the MIT-BIH arrhythmia database. The data used contain 2655 LBBB (abnormal) and 1470 normal signals (4125 signals in total). The proposed method comprises the following steps: (i) QRS segmentation and filtration, (ii) application of the Maximal Overlap Discrete Wavelet Transform (MODWT) to the ECG R wave, and (iii) selection of the detail coefficients of the MODWT (D2, D3, D4), along with kurtosis and skewness, as extracted features to be fed into the Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier. The obtained results prove that the proposed method performs well, achieving a sensitivity, specificity, and classification accuracy of 99.81%, 100%, and 99.88%, respectively (F-score = 0.9990). Our results show that the proposed method is robust and effective and could be used in real clinical situations. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
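A feature-extraction sketch in the spirit of steps (ii)–(iii). PyWavelets has no MODWT function, so the closely related stationary (undecimated) wavelet transform `pywt.swt` is used as a stand-in; the resulting D2–D4 statistics could then be fed to any classifier (ANFIS itself is not in standard Python libraries). Wavelet choice and level are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def qrs_features(segment, wavelet="sym4", level=4):
    """SWT (MODWT stand-in) detail coefficients of a 1D QRS segment,
    summarized by kurtosis and skewness of D2-D4."""
    # pywt.swt requires the length to be a multiple of 2**level.
    n = (len(segment) // 2**level) * 2**level
    coeffs = pywt.swt(segment[:n], wavelet, level=level)
    # coeffs is ordered [(cA4, cD4), ..., (cA1, cD1)].
    details = {f"D{level - i}": cD for i, (_, cD) in enumerate(coeffs)}
    feats = []
    for name in ("D2", "D3", "D4"):
        d = details[name]
        feats += [kurtosis(d), skew(d)]
    return np.array(feats)
```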

16 pages, 1330 KiB  
Article
Assessment of Virtual Reality among University Professors: Influence of the Digital Generation
by Álvaro Antón-Sancho, Pablo Fernández-Arias and Diego Vergara
Computers 2022, 11(6), 92; https://doi.org/10.3390/computers11060092 - 10 Jun 2022
Cited by 14 | Viewed by 2930
Abstract
This paper presents quantitative research on the assessment made by a group of 623 Spanish and Latin American university professors of the use of virtual reality technologies in the classroom and of their own digital skills in this respect. The main objective is to analyze the differences that exist in this regard due to the digital generation of the professors (digital immigrants or digital natives). A survey designed for this purpose was used as the instrument, the validity of which was tested in the study. It was found that digital natives report being more competent in the use of virtual reality and value its technical and didactic aspects more highly, although they also identify more disadvantages in its use than digital immigrants. Differences in responses were also found by gender and by the professors’ areas of knowledge. It is suggested that universities design training plans on digital teaching competence and include in them the didactic use of virtual reality technologies in higher education. Full article

10 pages, 424 KiB  
Article
Functional Data Analysis for Imaging Mean Function Estimation: Computing Times and Parameter Selection
by Juan A. Arias-López, Carmen Cadarso-Suárez and Pablo Aguiar-Fernández
Computers 2022, 11(6), 91; https://doi.org/10.3390/computers11060091 - 2 Jun 2022
Viewed by 2354
Abstract
In the field of medical imaging, one of the most widespread research setups is the comparison between two groups of images, a pathological set against a control set, in order to search for statistically significant differences in brain activity. Functional Data Analysis (FDA), a relatively new field of statistics dealing with data expressed in the form of functions, uses methodologies that can be easily extended to the study of imaging data. Examples of this have been proposed in previous publications, where the authors establish the mathematical groundwork and properties of the proposed estimators. The methodology tested herein allows for the estimation of mean functions and simultaneous confidence corridors (SCC), also known as simultaneous confidence bands, for imaging data and for the difference between two groups of images. FDA applied to medical imaging presents at least two advantages over previous methodologies: it avoids loss of information in complex data structures, and it avoids the multiple comparison problem arising from traditional pixel-to-pixel comparisons. Nonetheless, computing times for this technique have only been explored in reduced and simulated setups. In the present article, we apply this procedure to a practical case with data extracted from open neuroimaging databases; we then measure computing times for the construction of Delaunay triangulations and for the computation of the mean function and SCC for one-group and two-group approaches. The results suggest that previous research has been too conservative in parameter selection and that computing times for this methodology are reasonable, confirming that this method should be further studied and applied to the field of medical imaging. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
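The computing-time question for the triangulation step can be probed with a few lines using SciPy's Delaunay implementation on synthetic 2D points; this illustrates the kind of measurement reported, not the authors' exact setup.

```python
import time
import numpy as np
from scipy.spatial import Delaunay

# How does the Delaunay triangulation step scale as the number of
# (synthetic) image-domain points grows?
for n in (1_000, 10_000, 100_000):
    pts = np.random.rand(n, 2)
    t0 = time.perf_counter()
    tri = Delaunay(pts)
    print(f"n={n}: {time.perf_counter() - t0:.3f}s, "
          f"{len(tri.simplices)} triangles")
```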

14 pages, 798 KiB  
Article
Can We Trust Edge Computing Simulations? An Experimental Assessment
by Gonçalo Carvalho, Filipe Magalhães, Bruno Cabral, Vasco Pereira and Jorge Bernardino
Computers 2022, 11(6), 90; https://doi.org/10.3390/computers11060090 - 31 May 2022
Cited by 1 | Viewed by 2251
Abstract
Simulators allow for the simulation of real-world environments that would otherwise be financially costly and difficult to implement at a technical level. A simulation environment thus facilitates the implementation and development of use cases, rendering such development cost-effective and faster, and it can be used in several scenarios. There are some works on simulation environments in Edge Computing (EC), but there is a lack of studies that validate these simulators. This paper compares the execution of the EdgeBench benchmark in a real-world environment and in a simulation environment using FogComputingSim, an EC simulator. Overall, the simulated environment was 0.2% faster than the real world, allowing us to state that we can trust EC simulations and to conclude that it is possible to implement and validate proofs of concept with FogComputingSim. Full article
(This article belongs to the Special Issue Edge Computing for the IoT)

26 pages, 3039 KiB  
Article
Release Planning Patterns for the Automotive Domain
by Kristina Marner, Stefan Wagner and Guenther Ruhe
Computers 2022, 11(6), 89; https://doi.org/10.3390/computers11060089 - 30 May 2022
Cited by 3 | Viewed by 3495
Abstract
Context: Today’s vehicle development focuses more and more on handling the vast amount of software and hardware inside the vehicle. The resulting planning and development of the software confronts original equipment manufacturers (OEMs) with major challenges that have to be mastered. This makes effective and efficient release planning, which provides the development scope in the required quality, even more important. In addition, OEMs have to deal with boundary conditions set by the OEM itself as well as the standards and legislation the software and hardware have to conform to. Release planning is a key activity for successfully developing vehicles. Objective: The aim of this work is to introduce release planning patterns to simplify the release planning of software and hardware installed in a vehicle. Method: We followed a pattern identification process conducted at Dr. Ing. h. c. F. Porsche AG. Results: We introduce eight release planning patterns, which both address the fixed boundary conditions and structure the actual planning content of a release plan. The patterns address an automotive context and have been developed from a hardware and software point of view based on two examples from the case company. Conclusions: The presented patterns address recurring problems in an automotive context and are based on real-life examples. The gathered knowledge can be used for further application in practice and in related domains. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

20 pages, 2062 KiB  
Article
Predicting the Category and the Length of Punishment in Indonesian Courts Based on Previous Court Decision Documents
by Eka Qadri Nuranti, Evi Yulianti and Husna Sarirah Husin
Computers 2022, 11(6), 88; https://doi.org/10.3390/computers11060088 - 30 May 2022
Cited by 6 | Viewed by 3441
Abstract
Among the sources of legal considerations are judges’ previous decisions regarding similar cases that are archived in court decision documents. However, due to the increasing number of court decision documents, it is difficult to find relevant information, such as the category and the length of punishment for similar legal cases. This study presents predictions of first-level judicial decisions by utilizing a collection of Indonesian court decision documents. We propose using multi-level learning, namely, CNN+attention, using decision document sections as features to predict the category and the length of punishment in Indonesian courts. Our results demonstrate that the decision document sections that most strongly affected the accuracy of the prediction model were prosecution history, facts, legal facts, and legal considerations. The prediction of the punishment category shows that the CNN+attention model achieved better accuracy than other deep learning models, such as CNN, LSTM, BiLSTM, LSTM+attention, and BiLSTM+attention, by up to 28.18%. The superiority of the CNN+attention model is also shown in predicting the punishment length, with the best result achieved using the ‘year’ time unit. Full article
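A compact PyTorch sketch of a CNN+attention classifier of the general kind described: convolutions extract n-gram features from a tokenized document section, and an attention layer pools them before classification. All sizes are illustrative, and the authors' multi-level architecture is certainly more elaborate.

```python
import torch
import torch.nn as nn

class CNNAttention(nn.Module):
    """CNN extracts n-gram features; attention pools them into one
    document-section vector for classification."""
    def __init__(self, vocab=20000, emb=128, channels=100, classes=12):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb, padding_idx=0)
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)
        self.attn = nn.Linear(channels, 1)   # scores each position
        self.out = nn.Linear(channels, classes)

    def forward(self, tokens):                        # (batch, seq_len)
        h = self.embed(tokens).transpose(1, 2)        # (B, emb, L)
        h = torch.relu(self.conv(h)).transpose(1, 2)  # (B, L, channels)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (B, L)
        pooled = (w.unsqueeze(-1) * h).sum(dim=1)     # attention-weighted sum
        return self.out(pooled)

model = CNNAttention()
logits = model(torch.randint(1, 20000, (4, 256)))  # 4 docs, 256 tokens each
print(logits.shape)  # torch.Size([4, 12])
```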

17 pages, 923 KiB  
Article
The Potential of AR Solutions for Behavioral Learning: A Scoping Review
by Crispino Tosto, Farzin Matin, Luciano Seta, Giuseppe Chiazzese, Antonella Chifari, Marco Arrigo, Davide Taibi, Mariella Farella and Eleni Mangina
Computers 2022, 11(6), 87; https://doi.org/10.3390/computers11060087 - 30 May 2022
Cited by 6 | Viewed by 4198
Abstract
In recent years, educational researchers and practitioners have become increasingly interested in new technologies for teaching and learning, including augmented reality (AR). The literature has already highlighted the benefit of AR in enhancing learners’ outcomes in natural sciences, with a limited number of studies exploring the support of AR in social sciences. Specifically, there have been a number of systematic and scoping reviews in the AR field, but no peer-reviewed review studies on the contribution of AR within interventions aimed at teaching or training behavioral skills have been published to date. In addition, most AR research focuses on technological or development issues. However, limited studies have explored how technology affects social experiences and, in particular, the impact of using AR on social behavior. To address these research gaps, a scoping review was conducted to identify and analyze studies on the use of AR within interventions to teach behavioral skills. These studies were conducted across several intervention settings. In addition to this research question, the review reports an investigation of the literature regarding the impact of AR technology on social behavior. The state of the art of AR solutions designed for interventions in behavioral teaching and learning is presented, with an emphasis on educational and clinical settings. Moreover, some relevant dimensions of the impact of AR on social behavior are discussed in more detail. Limitations of the reviewed AR solutions and implications for future research and development efforts are finally discussed. Full article

21 pages, 5901 KiB  
Article
Energy-Efficient Deterministic Approach for Coverage Hole Detection in Wireless Underground Sensor Network: Mathematical Model and Simulation
by Priyanka Sharma and Rishi Pal Singh
Computers 2022, 11(6), 86; https://doi.org/10.3390/computers11060086 - 26 May 2022
Cited by 3 | Viewed by 2276
Abstract
Wireless underground sensor networks (WUSNs) are being used in agricultural applications, in border patrol, and in the monitoring of remote areas. Coverage holes in WUSNs are an issue that needs to be dealt with; they may occur due to the random deployment of nodes as well as the failure of nodes over time. In this paper, a mathematical approach for hole detection using Delaunay geometry is proposed, which divides the network region into Delaunay triangles and applies the laws of triangles to identify coverage holes. WUSNs comprise static nodes, and replacing underground nodes is a complex task. A simple algorithm for detecting coverage holes in static WSNs/WUSNs is proposed. The algorithm was simulated in the region of interest for the initially randomly deployed network and after energy depletion of the nodes over time. The performance of the algorithm was evaluated by varying the number of nodes and the sensing radius of the nodes. Our scheme is advantageous over other schemes in the following aspects: (1) it builds a mathematical model and a polynomial-time algorithm for detecting holes; (2) it does not rely on centralized computation and therefore provides better scalability; (3) it is energy-efficient; and (4) it provides a cost-effective solution to detect holes with great accuracy and a low detection time. The algorithm takes less than 0.1 milliseconds to detect holes in a 100 m × 100 m network with 100 sensor nodes having a sensing radius of 8 m. The detection time shows only a linear increase with the number of nodes in the network, which makes the algorithm applicable to every network size from small to large. Full article
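A sketch of triangle-based hole detection: triangulate the node positions and flag a Delaunay triangle as containing a coverage hole when its circumcenter lies farther than the sensing radius from the vertices. This is one common deterministic criterion; the paper's exact triangle laws may differ.

```python
import numpy as np
from scipy.spatial import Delaunay

def find_coverage_holes(nodes, r_sense):
    """Flag Delaunay triangles whose circumradius exceeds the sensing
    radius: the circumcenter is then outside every vertex's disk."""
    tri = Delaunay(nodes)
    holes = []
    for simplex in tri.simplices:
        a, b, c = nodes[simplex]
        # Circumcenter via the perpendicular-bisector linear system:
        # 2(b-a).x = |b|^2 - |a|^2, 2(c-a).x = |c|^2 - |a|^2.
        A = 2 * np.array([b - a, c - a])
        rhs = np.array([b @ b - a @ a, c @ c - a @ a])
        try:
            center = np.linalg.solve(A, rhs)
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) triangle
        if np.linalg.norm(center - a) > r_sense:  # circumradius > r_sense
            holes.append(simplex)
    return holes

nodes = np.random.rand(100, 2) * 100  # 100 nodes in a 100 m x 100 m field
print(len(find_coverage_holes(nodes, r_sense=8.0)), "candidate holes")
```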

18 pages, 837 KiB  
Article
Improved Bidirectional GAN-Based Approach for Network Intrusion Detection Using One-Class Classifier
by Wen Xu, Julian Jang-Jaccard, Tong Liu, Fariza Sabrina and Jin Kwak
Computers 2022, 11(6), 85; https://doi.org/10.3390/computers11060085 - 26 May 2022
Cited by 20 | Viewed by 4155
Abstract
Existing generative adversarial networks (GANs), primarily used for creating fake image samples from natural images, demand a strong dependency (i.e., the training strategies of the generators and the discriminators need to be in sync) for the generators to produce fake samples realistic enough to “fool” the discriminators. We argue that this strong dependency required for GAN training on images does not necessarily work for GAN models for network intrusion detection tasks. This is because network intrusion inputs have a simpler feature structure, such as relatively low dimensionality, discrete feature values, and smaller input sizes, compared to the existing GAN-based anomaly detection tasks proposed on images. To address this issue, we propose a new Bidirectional GAN (Bi-GAN) model that is better equipped for network intrusion detection with reduced overheads involved in excessive training. In our proposed method, the training iterations of the generator (and accordingly the encoder) are increased separately from the training of the discriminator until it satisfies the condition associated with the cross-entropy loss. Our empirical results show that this proposed training strategy greatly improves the performance of both the generator and the discriminator, even in the presence of imbalanced classes. In addition, our model offers a new construct of a one-class classifier using the trained encoder–discriminator. The one-class classifier detects anomalous network traffic based on binary classification results instead of calculating expensive and complex anomaly scores (or thresholds). Our experimental results illustrate that our proposed method is highly effective for network intrusion detection tasks and outperforms other similar generative methods on two datasets: NSL-KDD and CIC-DDoS2019. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
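The asymmetric training schedule described — extra generator/encoder updates decoupled from the discriminator until a cross-entropy condition is met — can be sketched as a single training step in PyTorch. `G`, `E`, and `D` are user-supplied modules (`D` is assumed to output a logit for an (input, latent) pair, and `opt_g` to cover both `G` and `E` parameters); the loss target and iteration cap are illustrative, not the paper's values.

```python
import torch
import torch.nn.functional as F

def train_step(G, E, D, opt_g, opt_d, x_real, z_dim,
               g_loss_target=0.3, max_g_iters=5):
    """One asymmetric Bi-GAN step: one discriminator update, then
    repeated generator/encoder updates until the cross-entropy loss
    condition is met (or an iteration cap is hit)."""
    b = x_real.size(0)
    z = torch.randn(b, z_dim)

    # --- one discriminator update on (x, E(x)) vs (G(z), z) pairs ---
    d_real = D(x_real, E(x_real).detach())
    d_fake = D(G(z).detach(), z)
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator/encoder updates repeated until the loss condition ---
    for _ in range(max_g_iters):
        z = torch.randn(b, z_dim)
        d_fake = D(G(z), z)          # generator wants fake pairs scored real
        d_real = D(x_real, E(x_real))  # encoder wants real pairs scored fake
        loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
                  + F.binary_cross_entropy_with_logits(d_real, torch.zeros_like(d_real)))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        if loss_g.item() < g_loss_target:
            break
    return loss_d.item(), loss_g.item()
```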

11 pages, 8125 KiB  
Article
Improving Multi-View Camera Calibration Using Precise Location of Sphere Center Projection
by Alberto J. Perez, Javier Perez-Soler, Juan-Carlos Perez-Cortes and Jose-Luis Guardiola
Computers 2022, 11(6), 84; https://doi.org/10.3390/computers11060084 - 24 May 2022
Cited by 1 | Viewed by 2540
Abstract
Several calibration algorithms use spheres as calibration tokens because of the simplicity and uniform shape that a sphere presents across multiple views, along with the simplicity of its construction. Alternatives include the use of complex 3D tokens with reference marks, which are usually difficult to build and analyze with the required accuracy, or the search for common features in scene images, a task that is also highly complex due to perspective changes. Some of the algorithms using spheres rely on estimating the projection of the sphere center from the camera images. Computing these projection points from the sphere silhouettes in the images is not straightforward, because the projection of the center does not exactly match the silhouette centroid; several methods have thus been developed for this calculation. In this work, a simple and fast numerical method adapted to precisely compute the sphere center projection for these algorithms is presented. Its benefits over similar existing methods are its ease of implementation and its lower sensitivity to segmentation issues. Other possible applications of the proposed method are also presented. Full article
