
Computers, Volume 11, Issue 2 (February 2022) – 15 articles

Cover Story: Function point analysis is a widely used metric in the software industry for development effort estimation. While the software industry has grown rapidly, the weight values specified for standard function point counting have remained the same since its inception. Another problem is that software development practices differ across industry sectors, yet the same basic counting rules apply to all. These issues raise important questions about the validity of the weight values in practical applications. In this study, we propose an algorithm for calibrating the standardized functional complexity weights, aiming to estimate a more accurate software size that fits specific software applications, reflects software industry trends, and improves the effort estimation of software projects.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Assessment of SQL and NoSQL Systems to Store and Mine COVID-19 Data
Computers 2022, 11(2), 29; https://doi.org/10.3390/computers11020029 - 21 Feb 2022
Viewed by 1055
Abstract
COVID-19 has provoked enormous negative impacts on human lives and the world economy. In order to help in the fight against this pandemic, this study evaluates different database systems and selects the most suitable for storing, handling, and mining COVID-19 data. We evaluate different SQL and NoSQL database systems using the following metrics: query runtime, memory used, CPU used, and storage size. The database systems assessed were Microsoft SQL Server, MongoDB, and Cassandra. We also evaluated Data Mining algorithms, including Decision Trees, Random Forest, Naive Bayes, and Logistic Regression, using data classification tests in the Orange Data Mining software. Classification tests were performed using cross-validation on a table with about 3 M records, including COVID-19 exams with patients’ symptoms. The Random Forest algorithm obtained the best average accuracy, recall, precision, and F1 score in the COVID-19 predictive model built in the mining stage. In the performance evaluation, MongoDB presented the best results in almost all tests with a large data volume. Full article
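The evaluation metrics named in the abstract (accuracy, precision, recall, F1 score) all derive from the confusion matrix of a classifier. A minimal pure-Python sketch of that arithmetic, using hypothetical labels rather than the paper's 3 M-record dataset:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, recall and F1 for a binary classifier
    from paired lists of true and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical labels: 1 = positive COVID-19 exam, 0 = negative.
acc, prec, rec, f1 = classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
```

In a cross-validation setup such as the paper's, these metrics would be averaged over the held-out folds.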

Review
An Overview of Augmented Reality
Computers 2022, 11(2), 28; https://doi.org/10.3390/computers11020028 - 19 Feb 2022
Viewed by 1301
Abstract
Modern society is increasingly permeated by realities parallel to the real one. So-called virtual reality is now part of both everyday habits and many activities carried out during the day. Virtual reality (VR) is, in turn, related to the concept of augmented reality (AR), a technology still expanding strongly even though it was conceived and imagined several decades ago. This paper presents an overview of augmented reality, starting from its conception, passing through its main applications, and providing essential information. Part of the article is devoted to the hardware and software components used in AR systems. The last part of the paper highlights the limitations related to the design of these systems, the shortcomings in this area, and the possible future fields of application of this extraordinary technological innovation. Full article
(This article belongs to the Special Issue Applications of Augmented Reality on Maintenance of A Vehicle)

Article
Detection of Abnormal SIP Signaling Patterns: A Deep Learning Comparison
Computers 2022, 11(2), 27; https://doi.org/10.3390/computers11020027 - 17 Feb 2022
Viewed by 704
Abstract
This paper investigates the detection of abnormal sequences of signaling packets purposely generated to perpetrate signaling-based attacks in computer networks. The problem is studied for the Session Initiation Protocol (SIP) using a dataset of signaling packets exchanged by multiple end-users. A sequence of SIP messages never observed before can indicate possible exploitation of a vulnerability, and its detection or prediction is of high importance to avoid security attacks based on unknown abnormal SIP dialogs. The paper begins by briefly characterizing the adopted dataset and introduces multiple definitions detailing how the deep learning-based approach is adopted to detect possible attacks. The proposed solution is based on a convolutional neural network that exploits an orthogonal space representing the SIP dialogs. This space is then used to train the neural network model to classify the type of SIP dialog according to a previously observed sequence of SIP packets. The classifier of unknown SIP dialogs relies on the statistical properties of the supervised learning of known SIP dialogs. Experimental results are presented to assess the solution in terms of SIP dialog prediction, unknown SIP dialog detection, and computational performance, demonstrating the usefulness of the proposed methodology for rapidly detecting signaling-based attacks. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2021)
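The abstract does not detail how the orthogonal space over SIP dialogs is built; one simple way such a space can be realized is to one-hot encode each SIP message type, so that distinct message vectors are mutually orthogonal and a dialog becomes a matrix of such rows. A sketch under that assumption (the message vocabulary below is hypothetical, not the paper's):

```python
# Hypothetical SIP message vocabulary; the paper's actual orthogonal-space
# construction may differ.
SIP_TYPES = ["INVITE", "100", "180", "200", "ACK", "BYE"]

def encode_dialog(messages, vocab=SIP_TYPES):
    """Map a SIP dialog (a sequence of message types) to a matrix of
    mutually orthogonal one-hot row vectors, one row per message."""
    index = {m: i for i, m in enumerate(vocab)}
    return [[1 if index[m] == j else 0 for j in range(len(vocab))]
            for m in messages]

# A basic call-setup-and-teardown dialog.
dialog = encode_dialog(["INVITE", "180", "200", "ACK", "BYE"])
```

A matrix like `dialog` could then serve as the input plane of a convolutional classifier over dialog types.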

Systematic Review
Deep Learning (CNN, RNN) Applications for Smart Homes: A Systematic Review
Computers 2022, 11(2), 26; https://doi.org/10.3390/computers11020026 - 16 Feb 2022
Viewed by 1144
Abstract
In recent years, research on convolutional neural networks (CNN) and recurrent neural networks (RNN) in deep learning has been actively conducted. In order to provide more personalized and advanced functions in smart home services, studies on deep learning applications are becoming more frequent, and deep learning is acknowledged as an efficient method for recognizing the voices and activities of users. In this context, this study systematically reviews the smart home studies that apply CNN and RNN/LSTM as their main solution. Of the 632 studies retrieved from the Web of Science, Scopus, IEEE Xplore, and PubMed databases, 43 studies were selected and analyzed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. In this paper, we examine which smart home applications CNN and RNN/LSTM are applied to and compare how they were implemented and evaluated. The selected studies covered a total of 15 smart home application areas, of which activity recognition was the most common. This study provides essential data for researchers who want to apply deep learning to smart homes, identifies the main trends, and can help guide design and evaluation decisions for particular smart home services. Full article
(This article belongs to the Special Issue Survey in Deep Learning for IoT Applications)

Article
Performance of a Live Multi-Gateway LoRaWAN and Interference Measurement across Indoor and Outdoor Localities
Computers 2022, 11(2), 25; https://doi.org/10.3390/computers11020025 - 11 Feb 2022
Viewed by 786
Abstract
Little work has been reported on the magnitude and impact of interference on the performance of Internet of Things (IoT) applications operated over a Long-Range Wide-Area Network (LoRaWAN) in the unlicensed 868 MHz Industrial, Scientific, and Medical (ISM) band. Propagation performance and signal activity measurements of such technologies can give many insights into effectively building long-range wireless communications in a Non-Line of Sight (NLOS) environment. In this paper, the performance of a live multi-gateway LoRaWAN at an indoor office site in Glasgow was analysed over 26 days of traffic measurement. The indoor network performance was compared to similar measurements from outdoor LoRaWAN test traffic generated across the Glasgow Central Business District (CBD) and elsewhere on the same LoRaWAN. The results revealed 99.95% packet transfer success on the first attempt at the indoor site compared to 95.7% at the external site. The analysis shows that interference accounts for nearly 50× greater LoRaWAN packet loss outdoors than indoors. The interference measurements showed a 13.2–97.3% and a 4.8–54% probability of interfering signals in the mandatory Long-Range (LoRa) uplink and downlink channels, respectively, capable of limiting LoRa coverage in some areas. Full article
(This article belongs to the Special Issue Edge Computing for the IoT)
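The first-attempt packet loss and the indoor/outdoor loss ratio reported above reduce to simple arithmetic over packet counts. A sketch with hypothetical counts (not the paper's raw data) illustrating how a roughly 50× loss ratio arises:

```python
def packet_loss_ratio(sent, delivered_first_attempt):
    """Fraction of packets that were not delivered on the first
    transmission attempt."""
    return (sent - delivered_first_attempt) / sent

# Hypothetical packet counts chosen only to illustrate the calculation.
indoor = packet_loss_ratio(10000, 9990)   # 0.1% first-attempt loss
outdoor = packet_loss_ratio(10000, 9500)  # 5% first-attempt loss
ratio = outdoor / indoor                  # outdoor loss relative to indoor
```

With these illustrative counts the outdoor deployment loses 50 times more packets than the indoor one on the first attempt.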

Review
A Critical Review of Blockchain Acceptance Models—Blockchain Technology Adoption Frameworks and Applications
Computers 2022, 11(2), 24; https://doi.org/10.3390/computers11020024 - 08 Feb 2022
Cited by 1 | Viewed by 1631
Abstract
Blockchain is a promising breakthrough technology that is highly applicable in manifold sectors. The adoption of blockchain technology is accompanied by a range of issues and challenges that make its implementation complicated. To facilitate the successful implementation of blockchain technology, several blockchain adoption frameworks have been developed. However, selecting the appropriate framework based on the conformity of its features with a given business sector may be challenging for decision-makers. This study provides a systematic literature review that introduces the adoption frameworks most used to assess blockchain adoption and identifies the business sectors in which these models have been applied. The blockchain adoption models in 56 articles are reviewed, and the results are summarized by categorizing the articles into five main sections: supply chain, industries, the financial sector, cryptocurrencies, and other articles outside the former fields. The findings show that models based on the technology acceptance model (TAM), the technology–organization–environment (TOE) framework, and new conceptual frameworks were the focus of the majority of the selected articles. Most of the articles focused on blockchain adoption in different industry fields and supply chain areas. Full article
(This article belongs to the Special Issue Blockchain-Based Systems)

Article
A Project-Scheduling and Resource Management Heuristic Algorithm in the Construction of Combined Cycle Power Plant Projects
Computers 2022, 11(2), 23; https://doi.org/10.3390/computers11020023 - 07 Feb 2022
Viewed by 700
Abstract
Given the growing number of development projects, proper project planning and management are crucial. The purpose of this paper is to introduce a heuristic algorithm for scheduling power plant construction projects and managing project resources to determine the size of project buffers and feeding buffers. The algorithm consists of three steps: 1. estimating the duration of project activities; 2. determining the size of the project buffer and the feeding buffers; and 3. simulating the algorithm. The innovations of this research are as follows: estimating the duration of project activities with a heuristic algorithm in addition to determining the buffer size; calculating both the project buffer and the feeding buffers; and applying the algorithm to the implementation of an ACC used in combined cycle power plant projects as a numerical example. In order to evaluate the proposed algorithm, inputs from this project were run through several recently presented algorithms. The results showed that suitable buffer sizes can be allocated for projects using this algorithm. Full article
(This article belongs to the Special Issue Smart Factories and Production Systems)
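The abstract does not spell out how the buffer sizes are computed. As an illustrative stand-in only (not the paper's algorithm), a common buffer-sizing technique in critical chain scheduling is the root-square-error method, which sizes a buffer from the per-task safety margins:

```python
import math

def rsem_buffer(task_estimates):
    """Root-square-error method: size a buffer as the root of the summed
    squared safety margins. Each task is given as a pair
    (safe_estimate, aggressive_estimate) in the same time unit."""
    return math.sqrt(sum((safe - aggressive) ** 2
                         for safe, aggressive in task_estimates))

# Hypothetical chain of four activities (durations in days).
buffer = rsem_buffer([(10, 7), (8, 5), (12, 9), (6, 4)])
```

The same formula can size a feeding buffer by applying it to the tasks on the feeding chain alone.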

Article
Hierarchical Control for DC Microgrids Using an Exact Feedback Controller with Integral Action
Computers 2022, 11(2), 22; https://doi.org/10.3390/computers11020022 - 06 Feb 2022
Viewed by 653
Abstract
This paper addresses the problem of the optimal stabilization of DC microgrids using a hierarchical control design. A recursive optimal power flow formulation is proposed in the tertiary stage that guarantees finding the global optimum, owing to the convexity of the proposed quadratic optimization model, when determining the equilibrium operating point of the DC microgrid as a function of the demand and generation inputs. An exact feedback controller with integral action is applied in the primary and secondary control layers, which ensures asymptotic stability in the sense of Lyapunov for the voltage variables. The dynamical model of the network is obtained for a reduced set of nodes that includes only constant power terminals interfaced through power electronic converters. This reduced model is obtained by applying Kron’s reduction to the linear loads and step nodes in the DC grid. Numerical simulations in a DC microgrid with a radial structure demonstrate the effectiveness and robustness of the proposed hierarchical controller in maintaining the stability of all the voltage profiles in the DC microgrid, independently of the load and generation variations. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
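Kron's reduction, mentioned above, eliminates passive interior nodes from a conductance (Laplacian) matrix while preserving the behaviour seen at the remaining terminals. A minimal sketch eliminating a single node (the 3-node grid below is hypothetical, not the paper's network):

```python
def kron_reduce_node(G, k):
    """Eliminate node k from a conductance matrix G by Kron reduction:
    G'[i][j] = G[i][j] - G[i][k] * G[k][j] / G[k][k]."""
    keep = [i for i in range(len(G)) if i != k]
    return [[G[i][j] - G[i][k] * G[k][j] / G[k][k] for j in keep]
            for i in keep]

# Hypothetical 3-node DC grid; node 2 is a passive interior node.
G = [[ 2.0, -1.0, -1.0],
     [-1.0,  3.0, -2.0],
     [-1.0, -2.0,  3.0]]
G_red = kron_reduce_node(G, 2)
```

Repeating the elimination for every passive node yields the reduced model containing only the converter-interfaced terminals; note that the reduced matrix remains a Laplacian (each row still sums to zero).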

Article
Techniques for Skeletal-Based Animation in Massive Crowd Simulations
Computers 2022, 11(2), 21; https://doi.org/10.3390/computers11020021 - 04 Feb 2022
Viewed by 717
Abstract
Crowd systems play an important role in virtual environment applications, such as those used in entertainment, education, training, and different simulation systems. Performance and scalability are key factors, and it is desirable for crowds to be simulated with as few resources as possible while providing variety and realism for agents. This paper focuses on improving the performance, variety, and usability of crowd animation systems. Performing the blending operation on the Graphics Processing Unit (GPU) side requires no additional memory other than the source and target animation streams and greatly increases the number of agents that can simultaneously transition from one state to another. A time dilation offset feature helps applications with a large number of animation assets and/or agents to achieve sufficient visual quality, variety, and good performance at the same time by moving animation streams between the running and paused states at runtime. Splitting agents into parts not only reduces asset creation costs by eliminating the need to create permutations of skeletons and assets but also allows users to attach parts dynamically to agents. Full article
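The GPU-side blending described above amounts to interpolating the source and target animation streams joint by joint. A simplified CPU-side sketch of that arithmetic, restricted to per-joint translations (a production system would use quaternion slerp for rotations; the poses here are hypothetical):

```python
def blend_pose(source, target, t):
    """Linearly interpolate two poses, given as lists of per-joint
    translation vectors, at blend parameter t in [0, 1]."""
    return [[(1 - t) * s + t * d for s, d in zip(js, jd)]
            for js, jd in zip(source, target)]

# Two hypothetical 2-joint poses, sampled halfway through a transition.
pose = blend_pose([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
                  [[0.0, 2.0, 0.0], [1.0, 2.0, 0.0]], 0.5)
```

On the GPU the same per-joint operation runs in parallel for every agent, which is why no memory beyond the two input streams is needed.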

Article
One View Is Not Enough: Review of and Encouragement for Multiple and Alternative Representations in 3D and Immersive Visualisation
Computers 2022, 11(2), 20; https://doi.org/10.3390/computers11020020 - 03 Feb 2022
Viewed by 734
Abstract
The opportunities for 3D visualisations are huge. People can be immersed inside their data, interact with it in natural ways, and see it in ways that are not possible on a traditional desktop screen. Indeed, 3D visualisations, especially those experienced inside head-mounted displays, are becoming popular. Much of this growth is driven by the availability, popularity, and falling cost of head-mounted displays and other immersive technologies. However, there are also challenges. For example, data visualisation objects can be obscured, important facets missed (perhaps behind the viewer), and the interfaces may be unfamiliar. Some of these challenges are not unique to 3D immersive technologies. Indeed, developers of traditional 2D exploratory visualisation tools would use alternative views across a multiple coordinated view (MCV) system. Coordinated view interfaces help users explore the richness of the data. For instance, an alphabetical list of people in one view shows everyone in the database, while a map view depicts where they live. Each view serves a different task or purpose. While it is possible to translate some desktop interface techniques into the 3D immersive world, it is not always clear what the equivalences would be. In this paper, using several case studies, we discuss the challenges and opportunities for using multiple views in immersive visualisation. Our aim is to provide a set of concepts that will enable developers to think critically and creatively and push the boundaries of what is possible with 3D and immersive visualisation. In summary, developers should consider how to integrate many views, techniques, and presentation styles; one view is not enough when using 3D and immersive visualisations. Full article
(This article belongs to the Special Issue Selected Papers from Computer Graphics & Visual Computing (CGVC 2021))
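The coordination behind an MCV system, as in the abstract's list-plus-map example, is commonly implemented with an observer pattern: a selection made in one view is broadcast so every registered view updates its own depiction. A minimal sketch (class and view names here are illustrative, not from the paper):

```python
class CoordinatedViews:
    """Broadcasts a selection to every registered view."""
    def __init__(self):
        self.views = []

    def register(self, view):
        self.views.append(view)

    def select(self, item):
        for view in self.views:
            view.on_select(item)

class ListView:
    """Alphabetical list: highlights the selected person."""
    def __init__(self):
        self.highlighted = None

    def on_select(self, item):
        self.highlighted = item

class MapView:
    """Map: centres on where the selected person lives."""
    def __init__(self):
        self.centred_on = None

    def on_select(self, item):
        self.centred_on = item

mcv = CoordinatedViews()
lst, map_view = ListView(), MapView()
mcv.register(lst)
mcv.register(map_view)
mcv.select("Ada Lovelace")  # one selection updates both views
```

The open question the paper raises is what this coordination mechanism should look like when the "views" are panels or spaces inside an immersive 3D environment.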

Article
Tangible and Personalized DS Application Approach in Cultural Heritage: The CHATS Project
Computers 2022, 11(2), 19; https://doi.org/10.3390/computers11020019 - 31 Jan 2022
Viewed by 855
Abstract
Storytelling is widely used to project cultural elements and engage people emotionally. Digital storytelling enhances the process by integrating images, music, narrative, and voice with traditional storytelling methods. Newer visualization technologies such as Augmented Reality allow more vivid representations and further influence the way museums present their narratives. Cultural institutions aim to integrate such technologies in order to provide a more engaging experience, one also tailored to the user by exploiting personalization and context-awareness. This paper presents CHATS, a system for personalized digital storytelling in cultural heritage sites. Storytelling is based on a tangible interface, which adds a gamification aspect and improves interactivity for people with visual impairment. AR and smart glasses technologies are used to enhance visitors’ experience. To test CHATS, a case study was implemented and evaluated. Full article

Article
Enriching Mobile Learning Software with Interactive Activities and Motivational Feedback for Advancing Users’ High-Level Cognitive Skills
Computers 2022, 11(2), 18; https://doi.org/10.3390/computers11020018 - 25 Jan 2022
Viewed by 880
Abstract
Mobile learning is a promising form of digital education that provides access to learning content through modern handheld devices. Through mobile learning, students can learn using smartphones connected to the Internet, without the restrictions posed by time and place. However, such environments should be enriched with sophisticated techniques so that learners can achieve their learning goals and have an optimized learning experience. In this direction, this paper presents mobile learning software that delivers interactive activities and motivational feedback to learners with the aim of advancing their higher-level cognitive skills. In more detail, the mobile application employs two theories, namely Bloom’s taxonomy and the taxonomy of intrinsic motivations by Malone and Lepper. Bloom’s taxonomy is used for the design of interactive activities that belong to varying levels of complexity, i.e., remembering, understanding, applying, analyzing, evaluating, and creating. Concerning motivational feedback, the taxonomy of intrinsic motivations by Malone and Lepper is used, which identifies four major factors, namely challenge, curiosity, control, and fantasy, and renders the learning environment intrinsically motivating. As a testbed for our research, the presented mobile learning system was designed for the teaching of a primary school course; however, the incorporated taxonomies could be adapted to the tutoring of any course. The mobile application was evaluated by school students with very promising results. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies)

Editorial
Acknowledgment to Reviewers of Computers in 2021
Computers 2022, 11(2), 17; https://doi.org/10.3390/computers11020017 - 25 Jan 2022
Viewed by 657
Abstract
Rigorous peer-reviews are the basis of high-quality academic publishing [...] Full article
Article
Adaptive Contextual Risk-Based Model to Tackle Confidentiality-Based Attacks in Fog-IoT Paradigm
Computers 2022, 11(2), 16; https://doi.org/10.3390/computers11020016 - 24 Jan 2022
Viewed by 832
Abstract
The Internet of Things (IoT) allows billions of physical objects to be connected to gather and exchange information, enabling numerous applications. However, IoT alone does not support features such as low latency, location awareness, and geographic distribution that are important for several IoT applications. Fog computing is integrated into IoT to provide these features, bringing computing, storage, and networking resources to the network edge. Unfortunately, it faces numerous security and privacy risks, raising severe concerns among users. Therefore, this research proposes a contextual risk-based access control model for Fog-IoT technology that considers real-time data information requests from IoT devices and gives dynamic feedback. The proposed model uses Fog-IoT environment features to estimate the security risk associated with each access request, using device context, resource sensitivity, action severity, and risk history as inputs for the fuzzy risk model to compute the risk factor. The model then uses a security agent in a fog node to provide adaptive features, in which the device’s behaviour is monitored to detect any abnormal actions from authorised devices. The proposed model is then evaluated against an existing model to benchmark the results. The fuzzy-based risk assessment model with an enhanced MQTT authentication protocol and adaptive security agent showed an accurate risk score for the seven random scenarios tested, compared to simple risk score calculations. Full article
(This article belongs to the Special Issue Edge Computing for the IoT)
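The paper computes the risk factor with a fuzzy model over device context, resource sensitivity, action severity, and risk history. As a simplified crisp stand-in (not the paper's fuzzy inference), a weighted average over the same four inputs can illustrate how a per-request risk score drives an access decision; the weights and threshold below are hypothetical:

```python
def access_decision(context, resource_sensitivity, action_severity,
                    risk_history, weights=(0.25, 0.25, 0.25, 0.25),
                    threshold=0.5):
    """Aggregate four risk inputs (each normalized to [0, 1]) into one
    risk score and map it to an access decision. A crisp weighted
    average stands in for the paper's fuzzy risk model."""
    inputs = (context, resource_sensitivity, action_severity, risk_history)
    score = sum(w * x for w, x in zip(weights, inputs))
    return "deny" if score > threshold else "grant"

decision = access_decision(0.9, 0.8, 0.7, 0.6)  # high-risk request
```

In the paper's design, fuzzy membership functions and rules replace this fixed weighting, and the security agent additionally revokes access when an already-authorised device starts behaving abnormally.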

Article
A New Approach to Calibrating Functional Complexity Weight in Software Development Effort Estimation
Computers 2022, 11(2), 15; https://doi.org/10.3390/computers11020015 - 22 Jan 2022
Viewed by 952
Abstract
Function point analysis is a widely used metric in the software industry for development effort estimation. It was proposed in the 1970s and then standardized by the International Function Point Users Group, and it is accepted by many organizations worldwide. While the software industry has grown rapidly, the weight values specified for standard function point counting have remained the same since its inception. Another problem is that software development practices differ across industry sectors, yet the same basic counting rules apply to all. These issues raise important questions about the validity of the weight values in practical applications. In this study, we propose an algorithm for calibrating the standardized functional complexity weights, aiming to estimate a more accurate software size that fits specific software applications, reflects software industry trends, and improves the effort estimation of software projects. The results show that the proposed algorithm improves effort estimation accuracy over the baseline method. Full article
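The calibration idea can be illustrated with a toy version: start from the standard IFPUG average complexity weights and fit a correction factor to historical project data by least squares. The paper calibrates individual complexity weights; the single global factor and the project data below are simplified assumptions for illustration:

```python
# Standard IFPUG average complexity weights for the five component types.
STANDARD_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts, weights=STANDARD_WEIGHTS):
    """Unadjusted function points: component counts times their weights."""
    return sum(counts[c] * w for c, w in weights.items())

def calibrate_scale(projects):
    """One-parameter least-squares calibration: find the factor alpha
    that best maps standard FP counts onto observed sizes, i.e. the
    alpha minimizing sum((actual - alpha * fp)^2)."""
    num = sum(actual * unadjusted_fp(counts) for counts, actual in projects)
    den = sum(unadjusted_fp(counts) ** 2 for counts, _ in projects)
    return num / den

# Hypothetical historical projects: (component counts, observed size).
history = [({"EI": 10, "EO": 5, "EQ": 4, "ILF": 3, "EIF": 2}, 140),
           ({"EI": 20, "EO": 8, "EQ": 6, "ILF": 5, "EIF": 3}, 215 + 35)]
alpha = calibrate_scale(history)
```

Scaling the standard count by `alpha` then yields a calibrated size estimate for new projects in the same portfolio; the paper's algorithm refines this by adjusting each weight rather than one global factor.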
