
Computers, Volume 9, Issue 1 (March 2020) – 22 articles

Cover Story: Modern Monte Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suitable for rendering such previews using a caching technique that builds upon a linkless octree. Our approach allows for a constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are also supported at interactive frame rates. Using benchmarks and a comparison against ground truth sequences, we show that our approach compares well to low-noise path-traced results. In this way, our caching technique provides a useful tool for global illumination previews and multiview rendering.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Editorial
Introduction to the Special Issue “Applications in Self-Aware Computing Systems and their Evaluation”
Computers 2020, 9(1), 22; https://doi.org/10.3390/computers9010022 - 21 Mar 2020
Cited by 1 | Viewed by 2121
Abstract
The joint 1st Workshop on Evaluations and Measurements in Self-Aware Computing Systems (EMSAC 2019) and Workshop on Self-Aware Computing (SeAC) was held as part of the FAS* conference alliance, in conjunction with the 16th IEEE International Conference on Autonomic Computing (ICAC) and the 13th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO), in Umeå, Sweden, on 20 June 2019. The goal of this one-day workshop was to bring together researchers and practitioners from academia and industry to share their solutions, ideas, visions, and doubts concerning self-aware computing systems in general and the evaluation and measurement of such systems in particular. The workshop aimed to enable discussions, partnerships, and collaborations among the participants. This special issue follows the theme of the workshop. It contains extended versions of workshop presentations as well as additional contributions. Full article
(This article belongs to the Special Issue Applications in Self-Aware Computing Systems and their Evaluation)
Open Access Article
To Adapt or Not to Adapt: A Quantification Technique for Measuring an Expected Degree of Self-Adaptation
Computers 2020, 9(1), 21; https://doi.org/10.3390/computers9010021 - 18 Mar 2020
Cited by 6 | Viewed by 2146
Abstract
Self-adaptation and self-organization (SASO) have been introduced to the management of technical systems as an attempt to improve robustness and administrability. In particular, both mechanisms adapt the system’s structure and behavior in response to dynamics of the environment and internal or external disturbances. To date, adaptivity has generally been considered fully desirable. This position paper argues that too much adaptation conflicts with goals such as stability and user acceptance. Consequently, a situation-dependent degree of adaptation is desired, which defines the amount and severity of tolerated adaptations in certain situations. As a first step in this direction, this position paper presents a quantification approach for measuring the current adaptation behavior based on generative, probabilistic models. The behavior of this method is analyzed in terms of three application scenarios: urban traffic control, the swidden farming model, and data communication protocols. Furthermore, we define a research roadmap in terms of six challenges for an overall measurement framework for SASO systems. Full article
(This article belongs to the Special Issue Applications in Self-Aware Computing Systems and their Evaluation)

Open Access Article
Virtual Forestry Generation: Evaluating Models for Tree Placement in Games
Computers 2020, 9(1), 20; https://doi.org/10.3390/computers9010020 - 13 Mar 2020
Viewed by 2120
Abstract
A handful of approaches have previously been proposed to procedurally generate virtual forestry for virtual worlds and computer games, including plant growth models and point distribution methods. However, there has been no evaluation to date that assesses how effective these algorithms are at modelling real-world phenomena. In this paper, we tackle this issue by evaluating three algorithms used in the generation of virtual forests—a randomly uniform point distribution method (control), a plant competition model, and an iterative random point distribution technique. Our results show that the plant competition model generated more believable content when viewed from an aerial perspective. Interestingly, however, we also found that the randomly uniform point distribution method produced forestry that was rated higher in playability and photorealism when viewed from a first-person perspective. We conclude that the objective of the game designer is important to consider when selecting an algorithm to generate forestry, as the algorithms produce forestry that is perceived differently. Full article
(This article belongs to the Special Issue Computer Graphics & Visual Computing (CGVC 2019))

Open Access Review
A Technical Review on Development of an Advanced Electromechanical System
Computers 2020, 9(1), 19; https://doi.org/10.3390/computers9010019 - 10 Mar 2020
Cited by 1 | Viewed by 2097
Abstract
Development of an advanced electromechanical system requires a blend of technical expertise and knowledge, from electrical and mechanical engineering to software and systems engineering. Engineers working in this field must be able to understand and analyze the interrelated electrical and mechanical components, as well as the computer control programs that communicate with each part of the system. In this paper, we review the requirements and technical knowledge needed for the development of these new technologies. Full article

Open Access Article
Interoperability Challenges in the Cybersecurity Information Sharing Ecosystem
Computers 2020, 9(1), 18; https://doi.org/10.3390/computers9010018 - 6 Mar 2020
Cited by 3 | Viewed by 3056
Abstract
Threat intelligence helps businesses and organisations make the right decisions in their fight against cyber threats, and strategically design their digital defences for an optimised and up-to-date security situation. Combined with advanced security analysis, threat intelligence helps reduce the time between the detection of an attack and its containment. This is achieved by continuously providing information, accompanied by data, on existing and emerging cyber threats and vulnerabilities affecting corporate networks. This paper addresses challenges that organisations are bound to face when they decide to invest in effective and interoperable cybersecurity information sharing and categorises them in a layered model. Based on this, it provides an evaluation of existing sources that share cybersecurity information. The aim of this research is to help organisations improve their cyber threat information exchange capabilities, to enhance their security posture and be more prepared against emerging threats. Full article

Open Access Article
Hash-Based Hierarchical Caching and Layered Filtering for Interactive Previews in Global Illumination Rendering
Computers 2020, 9(1), 17; https://doi.org/10.3390/computers9010017 - 4 Mar 2020
Cited by 1 | Viewed by 2462
Abstract
Modern Monte-Carlo-based rendering systems still suffer from the computational complexity involved in the generation of noise-free images, making it challenging to synthesize interactive previews. We present a framework suited for rendering such previews of static scenes using a caching technique that builds upon a linkless octree. Our approach allows for memory-efficient storage and constant-time lookup to cache diffuse illumination at multiple hitpoints along the traced paths. Non-diffuse surfaces are dealt with in a hybrid way in order to reconstruct view-dependent illumination while maintaining interactive frame rates. By evaluating the visual fidelity against ground truth sequences and by benchmarking, we show that our approach compares well to low-noise path-traced results, but with a greatly reduced computational complexity, allowing for interactive frame rates. This way, our caching technique provides a useful tool for global illumination previews and multi-view rendering. Full article
(This article belongs to the Special Issue Computer Graphics & Visual Computing (CGVC 2019))
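The constant-time cache lookup this abstract describes can be illustrated with a toy linkless-octree hash map. The class and its methods below are hypothetical stand-ins, not the authors' implementation:

```python
class LinklessOctreeCache:
    """Toy radiance cache: nodes are addressed by hashing (level, voxel)
    keys, so no child pointers are needed and lookups are O(1) per level."""

    def __init__(self, max_level=8):
        self.max_level = max_level
        self.table = {}  # dict as a stand-in for the hash-based octree

    def _key(self, point, level):
        # Quantize a 3D point in [0, 1)^3 to the voxel grid of this level.
        cells = 2 ** level
        return (level,) + tuple(min(int(c * cells), cells - 1) for c in point)

    def store(self, point, level, irradiance):
        self.table[self._key(point, level)] = irradiance

    def lookup(self, point):
        # Probe from the finest level upward; coarser entries act as fallbacks.
        for level in range(self.max_level, -1, -1):
            value = self.table.get(self._key(point, level))
            if value is not None:
                return value, level
        return None, -1

cache = LinklessOctreeCache(max_level=4)
cache.store((0.5, 0.5, 0.5), 4, irradiance=0.8)   # fine-level entry
cache.store((0.5, 0.5, 0.5), 2, irradiance=0.6)   # coarse fallback
value, level = cache.lookup((0.51, 0.52, 0.5))    # hits the level-4 voxel
```

Because the key encodes both level and voxel, no tree traversal is needed: the hierarchy is flattened into a single hash table, which is the property that makes interactive lookups cheap.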

Open Access Article
Evaluation of Self-Healing Systems: An Analysis of the State-of-the-Art and Required Improvements
Computers 2020, 9(1), 16; https://doi.org/10.3390/computers9010016 - 27 Feb 2020
Cited by 3 | Viewed by 2154
Abstract
Evaluating the performance of self-adaptive systems is challenging due to their interactions with often highly dynamic environments. In the specific case of self-healing systems, the performance evaluations of self-healing approaches and their parameter tuning rely on the considered characteristics of failure occurrences and the resulting interactions with the self-healing actions. In this paper, we first study the state-of-the-art for evaluating the performances of self-healing systems by means of a systematic literature review. We provide a classification of different input types for such systems and analyse the limitations of each input type. A main finding is that the employed inputs are often not sophisticated regarding the considered characteristics for failure occurrences. To further study the impact of the identified limitations, we present experiments demonstrating that wrong assumptions regarding the characteristics of the failure occurrences can result in large performance prediction errors, disadvantageous design-time decisions concerning the selection of alternative self-healing approaches, and disadvantageous deployment-time decisions concerning parameter tuning. Furthermore, the experiments indicate that employing multiple alternative input characteristics can help with reducing the risk of premature disadvantageous design-time decisions. Full article
(This article belongs to the Special Issue Applications in Self-Aware Computing Systems and their Evaluation)
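A toy illustration of the paper's central point, that assumed failure-occurrence characteristics matter: two failure traces with the same mean inter-arrival gap but different burstiness stress a self-healing mechanism very differently. The metric and all numbers below are illustrative, not taken from the paper:

```python
def peak_window_load(inter_arrivals, window=10.0):
    """Maximum number of failures falling into any fixed-size time window:
    a crude proxy for how hard a self-healing mechanism is stressed."""
    t, counts = 0.0, {}
    for gap in inter_arrivals:
        t += gap
        bucket = int(t // window)
        counts[bucket] = counts.get(bucket, 0) + 1
    return max(counts.values())

# Two traces with the same mean inter-arrival gap (2.0) but different burstiness:
smooth = [2.0] * 100
bursty = [0.5] * 50 + [3.5] * 50
```

Here `peak_window_load(bursty)` is four times `peak_window_load(smooth)`, so an evaluation calibrated only on the smooth trace would badly underestimate the load a bursty trace imposes, which is the kind of prediction error the paper's experiments demonstrate.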

Open Access Article
Design, Fabrication, and Testing of an IoT Healthcare Cardiac Monitoring Device
Computers 2020, 9(1), 15; https://doi.org/10.3390/computers9010015 - 27 Feb 2020
Cited by 3 | Viewed by 2344
Abstract
The expansion of the concept of the Internet of Things (IoT), together with wireless sensor networks, has given rise to a wide range of IoT applications. This paper presents and describes the concept, theory of operation, and practical results of a Telecare-ECG (Electrocardiogram) Monitoring device, designed for the remote monitoring of out-of-hospital cardiac patients. ECG monitoring using the Telecare-ECG Monitor system ensures a better quality of life for patients and greater possibilities for the real-time monitoring and signaling of sporadic cardiac events, by recording instantaneous cardiac arrhythmias captured during certain activities or in the daily environment of the patient; furthermore, the device places less demand on hospital resources than comparable devices. Regarding the novelty and contribution of this paper to the field of ECG investigation, the results obtained in the analysis, testing, and validation of the Telecare-ECG Monitor system concern the optimization of the functionality of the mobile ECG device under conditions as close to reality as possible. Full article

Open Access Article
On Implementing Autonomic Systems with a Serverless Computing Approach: The Case of Self-Partitioning Cloud Caches
Computers 2020, 9(1), 14; https://doi.org/10.3390/computers9010014 - 26 Feb 2020
Cited by 1 | Viewed by 2188
Abstract
The research community has made significant advances towards realizing self-tuning cloud caches; notwithstanding, existing products still require manual expert tuning to maximize performance. Cloud (software) caches are built to swiftly serve requests; thus, avoiding costly functionality additions not directly related to the request-serving control path is critical. We show that serverless computing cloud services can be leveraged to solve the complex optimization problems that arise during self-tuning loops and can be used to optimize cloud caches for free. To illustrate that our approach is feasible and useful, we implement SPREDS (Self-Partitioning REDiS), a modified version of Redis that optimizes memory management in the multi-instance Redis scenario. A cost analysis shows that the serverless computing approach can lead to significant cost savings: The cost of running the controller as a serverless microservice is 0.85% of the cost of the always-on alternative. Through this case study, we make a strong case for implementing the controller of autonomic systems using a serverless computing approach. Full article
(This article belongs to the Special Issue Applications in Self-Aware Computing Systems and their Evaluation)
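The division of labor described here, a lean cache on the request-serving path plus an expensive optimizer running out-of-band (e.g., as a serverless function), can be sketched as follows. The greedy partitioner and the miss-rate curves are illustrative assumptions, not SPREDS internals:

```python
def optimize_partition(miss_curves, total_mb, step_mb=64):
    """Greedy memory partitioning across cache instances: repeatedly hand
    the next memory chunk to the instance with the largest marginal
    miss-rate reduction. This is the kind of optimization the paper
    offloads to a serverless function, off the request-serving path."""
    alloc = {name: 0 for name in miss_curves}
    remaining = total_mb
    while remaining >= step_mb:
        def gain(name):
            used = alloc[name]
            return miss_curves[name](used) - miss_curves[name](used + step_mb)
        best = max(alloc, key=gain)
        alloc[best] += step_mb
        remaining -= step_mb
    return alloc

# Hypothetical miss-rate curves: instance "a" benefits more from extra memory.
curves = {
    "a": lambda mb: 1.0 / (1.0 + mb / 64.0),
    "b": lambda mb: 0.5 / (1.0 + mb / 256.0),
}
allocation = optimize_partition(curves, total_mb=512)
```

With these example curves the optimizer splits 512 MB evenly between the two instances; in the paper's setting, the analogous decision would be computed outside the cache and only the resulting partition pushed to the instances.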

Open Access Article
On Granular Rough Computing: Handling Missing Values by Means of Homogeneous Granulation
Computers 2020, 9(1), 13; https://doi.org/10.3390/computers9010013 - 15 Feb 2020
Cited by 1 | Viewed by 2214
Abstract
This paper is a continuation of work based on a previously developed granulation method—homogeneous granulation. The most important new feature of this method compared to our previous ones is that there is no need to estimate optimal parameters. Approximation parameters are selected dynamically depending on the degree of homogeneity of decision classes. This makes the method fast and simple, which is an undoubted advantage, despite the fact that it gives a slightly lower level of approximation than our other techniques. In this article, we present its performance in the process of missing-value absorption. We test selected strategies on synthetically damaged data from the UCI repository. The added value is the investigation of the specific performance of our new granulation technique in absorbing missing values. The effectiveness of their absorption in the granulation process has been confirmed in our experiments. Full article
Open Access Article
Self-Adaptive Data Processing to Improve SLOs for Dynamic IoT Workloads
Computers 2020, 9(1), 12; https://doi.org/10.3390/computers9010012 - 14 Feb 2020
Cited by 1 | Viewed by 2382
Abstract
The Internet of Things (IoT) covers scenarios of cyber–physical interaction of smart devices with humans and the environment, with applications in smart cities, smart manufacturing, predictive maintenance, and smart homes. Traditional scenarios are quite static in the sense that the number of supported end nodes, as well as the frequency and volume of transmitted observations, does not change much over time. This paper addresses the challenge of adapting the capacity of the data-processing part of an IoT pipeline in response to dynamic workloads in centralized IoT scenarios where the quality of user experience matters, e.g., interactivity and media streaming, predictive maintenance for multiple moving vehicles, and centralized analytics for wearable devices and smartphones. The self-adaptation mechanism for data-processing IoT infrastructure deployed in the cloud is horizontal autoscaling. In this paper, we propose augmentations to the schemes from previous work for computing the desired replica count of the data-processing component; these augmentations repurpose the original sets of metrics to minimize SLO violations for dynamic workloads instead of minimizing the cost of deployment in terms of instance-seconds. The cornerstone augmentation that underpins all the others is the adaptation of the desired-replicas computation scheme to each scaling direction (scale-in and scale-out) separately. All the proposed augmentations were implemented in a standalone self-adaptive agent acting alongside Kubernetes’ HPA, such that limitations in the timely acquisition of monitoring data for scaling are mitigated. Evaluation and comparison with the previous work show an improvement in the service level achieved; e.g., latency SLO violations were reduced from 2.87% to 1.70% when the forecasted message-queue-length-based replica count computation was used for both scale-in and scale-out, but at the same time a higher cost of the scaled data-processor deployment is observed. Full article
(This article belongs to the Special Issue Applications in Self-Aware Computing Systems and their Evaluation)
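The idea of treating the two scaling directions separately can be sketched as follows; the thresholds and the queue-length metric are illustrative placeholders, not the paper's calibrated values:

```python
import math

def desired_replicas(current, queue_len, target_per_replica,
                     out_threshold=1.2, in_threshold=0.7):
    """Desired replica count with separate rules per scaling direction:
    scale out eagerly when backlog grows, scale in conservatively, and
    hold steady inside the deadband between the two thresholds."""
    ratio = queue_len / (current * target_per_replica)
    if ratio > out_threshold:
        return math.ceil(current * ratio)           # scale-out: round up
    if ratio < in_threshold:
        return max(1, math.floor(current * ratio))  # scale-in: round down, keep >= 1
    return current

# e.g. desired_replicas(4, 1000, 100) -> 10 (scale out)
#      desired_replicas(4, 200, 100)  -> 2  (scale in)
#      desired_replicas(4, 400, 100)  -> 4  (deadband, no change)
```

The asymmetry (rounding up on the way out, rounding down and clamping on the way in) is one simple way to bias the controller toward SLO compliance rather than deployment cost.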

Open Access Article
Modeling Bimodal Social Networks Subject to the Recommendation with the Cold Start User-Item Model
Computers 2020, 9(1), 11; https://doi.org/10.3390/computers9010011 - 12 Feb 2020
Viewed by 2110
Abstract
This paper describes the modeling of social networks subject to recommendation. The Cold Start User-Item Model (CSUIM) of a bipartite graph is considered, which simulates bipartite graph growth based on several parameters. An algorithm is proposed to compute the parameters of this model with desired properties. The primary desired property is that the generated graph has graph metrics similar to those of the original network. The next is that the graph growth process changes in response to recommendations. The meaning of the CSUI model parameters in the recommendation process is described. We run several simulations generating networks from the CSUI model to verify its theoretical properties. The proposed methods are also tested on real-life networks. We prove that the CSUIM model of bipartite graphs is very flexible and can be applied to many different problems. We also show that the parameters of this model can be easily obtained from an unknown bipartite graph. Full article
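A minimal sketch of cold-start bipartite growth in the spirit of the CSUI model. The parameterization below (one new user per step, a single probability `p_new_item` for attaching to a brand-new item versus degree-preferential attachment to an existing one) is a simplification, not the model's actual parameter set:

```python
import random

def grow_bipartite(steps, p_new_item=0.3, seed=0):
    """Each step a new user arrives and links to one item: with probability
    p_new_item a brand-new item (the cold-start case), otherwise an existing
    item chosen preferentially by its current degree."""
    rng = random.Random(seed)
    edges = [(0, 0)]            # (user, item) pairs; seed user and item
    item_degree = {0: 1}
    for user in range(1, steps):
        if rng.random() < p_new_item:
            item = max(item_degree) + 1          # cold start: new item
            item_degree[item] = 0
        else:
            # Preferential attachment: sample items weighted by degree.
            items = list(item_degree)
            weights = [item_degree[i] for i in items]
            item = rng.choices(items, weights=weights)[0]
        item_degree[item] += 1
        edges.append((user, item))
    return edges, item_degree

edges, degrees = grow_bipartite(200)
```

Fitting such a model to a real network then amounts to choosing parameters (here just `p_new_item`) so that metrics of the generated graph, such as the item-degree distribution, match those observed in the data.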

Open Access Article
A Taxonomy of Techniques for SLO Failure Prediction in Software Systems
Computers 2020, 9(1), 10; https://doi.org/10.3390/computers9010010 - 11 Feb 2020
Cited by 2 | Viewed by 2395
Abstract
Failure prediction is an important aspect of self-aware computing systems. Therefore, a multitude of different approaches has been proposed in the literature over the past few years. In this work, we propose a taxonomy for organizing works focusing on the prediction of Service Level Objective (SLO) failures. Our taxonomy classifies related work along the dimensions of the prediction target (e.g., anomaly detection, performance prediction, or failure prediction), the time horizon (e.g., detection or prediction, online or offline application), and the applied modeling type (e.g., time series forecasting, machine learning, or queueing theory). The classification is derived based on a systematic mapping of relevant papers in the area. Additionally, we give an overview of different techniques in each sub-group and address remaining challenges in order to guide future research. Full article
(This article belongs to the Special Issue Applications in Self-Aware Computing Systems and their Evaluation)

Open Access Article
Leveraging Blockchain Technology to Break the Cloud Computing Market Monopoly
Computers 2020, 9(1), 9; https://doi.org/10.3390/computers9010009 - 10 Feb 2020
Cited by 1 | Viewed by 2448
Abstract
Cloud computing offerings traditionally originate from a handful of large and well established providers, which monopolize the market, preventing small players and individuals from having a share. As a result, the few, blindly and perforce trusted entities define the prices and manage to gain a significant competitive advantage by exploiting the knowledge derived by users’ data and computations. To tackle this monopoly and empower the democratization and full decentralization of the cloud computing market, we present CloudAgora, a platform that enables any potential resource provider, ranging from individuals to large companies, to monetize idle resources competing on equal terms, and allows any cloud consumer to enjoy access to low-cost storage and computation without having to trust any central authority. The key enabler of the platform is Blockchain technology, which is used to record commitment policies through the use of smart contracts, publicly verify off-chain services, both storage and computation related, and trigger automatic micropayments. On one hand, cloud consumers have the chance to request storage or compute resources, upload data, and outsource task processing over remote, fully distributed infrastructures. Although such infrastructures cannot be a priori trusted, CloudAgora offers mechanisms to ensure the verifiable validity of the outsourced storage and computation, discourage potential providers from behaving maliciously, and incentivize participants to play fair. On the other hand, providers are able to participate in auctions, placing bids for storage or computation tasks, serve requests, and offer validity proofs upon request. Our prototype is built as a Dapp on top of Ethereum and is available as an open source project. Full article
(This article belongs to the Special Issue Blockchain-Based Systems)

Open Access Article
Improving Internet of Things (IoT) Security with Software-Defined Networking (SDN)
Computers 2020, 9(1), 8; https://doi.org/10.3390/computers9010008 - 7 Feb 2020
Cited by 12 | Viewed by 4319
Abstract
There has been an increase in the usage of the Internet of Things (IoT), which has recently become a rising area of interest as it is extensively used for numerous applications and devices such as wireless sensors, medical devices, sensitive home sensors, and other related IoT devices. Due to the demand to rapidly release new IoT products to the market, security aspects are often overlooked, as it takes time to investigate all the possible vulnerabilities. Since IoT devices are internet-based and hold sensitive and confidential information, security concerns have been raised, and several researchers are exploring methods to improve the security of these types of devices. Software-defined networking (SDN) is a promising computer network technology which introduces a central program named the ‘SDN controller’ that allows overall control of the network. Hence, using SDN is an obvious way to improve IoT networking performance and overcome existing shortcomings. In this paper, we (i) present a system model to effectively use SDN with IoT networks; (ii) present a solution for mitigating man-in-the-middle attacks against IoT devices that can only use HTTP, a critical attack that is hard to defend against; and (iii) implement the proposed system model using a Raspberry Pi, Kodi Media Center, and the OpenFlow protocol. Our system implementation and evaluations show that the proposed technique is more resilient to cyber-attacks. Full article
(This article belongs to the Special Issue IoT: Security, Privacy and Best Practices)

Open Access Article
Towards Self-Aware Multirotor Formations
Computers 2020, 9(1), 7; https://doi.org/10.3390/computers9010007 - 7 Feb 2020
Cited by 1 | Viewed by 2309
Abstract
Unmanned aerial vehicles seemingly become more popular every year, but, without regulation of the increasing number of these vehicles, the air space could become chaotic and uncontrollable. In this work, a framework is proposed that combines self-aware computing with multirotor formations to address this problem. The self-awareness is envisioned to improve the dynamic behavior of multirotors. The implemented formation scheme, called platooning, arranges vehicles in a string behind the lead vehicle and is proposed to bring order into chaotic air space. Since multirotors define a general category of unmanned aerial vehicles, the focus of this work is on quadcopters, platforms with four rotors. A modification of the LRA-M self-awareness loop is proposed and named Platooning Awareness. The implemented framework offers two flight modes enabling waypoint following, and a self-awareness module that finds a path to a goal position through scenarios where obstacles are present along the way. The evaluation of this work shows that the proposed framework is able to use self-awareness to learn about its environment, avoid obstacles, and successfully move a platoon of drones through multiple scenarios. Full article
(This article belongs to the Special Issue Applications in Self-Aware Computing Systems and their Evaluation)

Open Access Article
A Computer Vision System for the Automatic Classification of Five Varieties of Tree Leaf Images
Computers 2020, 9(1), 6; https://doi.org/10.3390/computers9010006 - 28 Jan 2020
Cited by 2 | Viewed by 2530
Abstract
A computer vision system for the automatic recognition and classification of five varieties of plant leaves under controlled laboratory imaging conditions is proposed, comprising: 1–Cydonia oblonga (quince), 2–Eucalyptus camaldulensis dehn (river red gum), 3–Malus pumila (apple), 4–Pistacia atlantica (mt. Atlas mastic tree), and 5–Prunus armeniaca (apricot). 516 tree leaf images were taken, and 285 features were computed from each object, including shape features, color features, texture features based on the gray-level co-occurrence matrix, texture descriptors based on the histogram, and moment invariants. Seven discriminant features were selected and used as input for classification with three classifiers: hybrid artificial neural network–ant bee colony (ANN–ABC), hybrid artificial neural network–biogeography-based optimization (ANN–BBO), and Fisher linear discriminant analysis (LDA). Mean correct classification rates (CCR) were 94.04%, 89.23%, and 93.99% for the hybrid ANN–ABC, hybrid ANN–BBO, and LDA classifiers, respectively. The best classifier's mean area under the curve (AUC), mean sensitivity, and mean specificity were computed for the five tree varieties under study, resulting in: 1–Cydonia oblonga (quince) 0.991 (ANN–ABC), 95.89% (ANN–ABC), 95.91% (ANN–ABC); 2–Eucalyptus camaldulensis dehn (river red gum) 1.00 (LDA), 100% (LDA), 100% (LDA); 3–Malus pumila (apple) 0.996 (LDA), 96.63% (LDA), 94.99% (LDA); 4–Pistacia atlantica (mt. Atlas mastic tree) 0.979 (LDA), 91.71% (LDA), 82.57% (LDA); and 5–Prunus armeniaca (apricot) 0.994 (LDA), 88.67% (LDA), 94.65% (LDA). Full article
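As a schematic illustration of this kind of pipeline (feature extraction from leaf images followed by classification), here is a heavily simplified stand-in using two shape features and a nearest-centroid rule; the paper's actual system uses 285 features and the ANN–ABC, ANN–BBO, and LDA classifiers:

```python
def shape_features(mask):
    """Area and bounding-box aspect ratio (width/height) of a binary 2D mask."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    area = len(xs)
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    return (area, width / height)

def nearest_centroid(train, sample):
    """train maps label -> list of feature tuples; classify by closest centroid."""
    def centroid(rows):
        return tuple(sum(col) / len(rows) for col in zip(*rows))
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {label: centroid(rows) for label, rows in train.items()}
    return min(centroids, key=lambda label: dist2(centroids[label], sample))

# Toy "training set": one wide and one narrow leaf silhouette.
train = {
    "wide":   [shape_features([[1, 1, 1, 1], [1, 1, 1, 1]])],  # (8, 2.0)
    "narrow": [shape_features([[1], [1], [1], [1]])],          # (4, 0.25)
}
label = nearest_centroid(train, shape_features([[1, 1, 1], [1, 1, 1]]))
```

Discriminant-feature selection, as in the paper, corresponds to keeping only the few features (here area and aspect ratio) that best separate the class centroids.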
Open AccessEditorial
Acknowledgement to Reviewers of Computers in 2019
Computers 2020, 9(1), 5; https://doi.org/10.3390/computers9010005 - 20 Jan 2020
Viewed by 2135
Abstract
The editorial team greatly appreciates the reviewers who have dedicated their considerable time and expertise to the journal’s rigorous editorial process over the past 12 months, regardless of whether the papers are finally published or not [...] Full article
Open AccessArticle
A Novel Security Protocol for Wireless Sensor Networks with Cooperative Communication
Computers 2020, 9(1), 4; https://doi.org/10.3390/computers9010004 - 20 Jan 2020
Cited by 6 | Viewed by 2726
Abstract
This paper builds upon existing work and clarifies the specifications of a security protocol needed in Wireless Sensor Networks (WSNs) with cooperative communication. The protocol is designed to enhance performance and resiliency against cyber-attacks. Recent literature has shown that developing a WSN with cooperative communication greatly increases the performance of the network but also exposes new vulnerabilities. The technique operates by transmitting packets of data to neighboring relay nodes in a cooperative fashion to reach the destination. In this paper, we consider security issues in WSNs with cooperative communication on each layer of the OSI model: the physical layer, data link layer, network layer, service (topology) layer, and application layer. For each layer, we clarify the main task, enumerate the main attacks and threats, specify the primary security approaches and techniques (if any), and discuss possible new attacks and problems that may arise with the use of cooperative communication. Furthermore, we show for some attacks (e.g., jamming, packet dropping, and wormhole) that using cooperative communication improves the network's resiliency and reliability. Finally, we propose a security protocol that addresses many of these shortcomings, while outlining the remaining issues that need further work and research. Full article
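To illustrate one building block such a protocol might use (a hedged sketch, not the paper's protocol), relayed packets can carry a keyed MAC and a sequence number so that relays and the destination can detect tampering and replay before forwarding; the packet format and field names here are hypothetical:

```python
import hmac, hashlib, json

def make_packet(payload, src, seq, key):
    """Build a packet whose header and payload are covered by an HMAC tag."""
    header = {"src": src, "seq": seq}
    msg = json.dumps([header, payload], sort_keys=True).encode()
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return {"header": header, "payload": payload, "tag": tag}

def verify_packet(pkt, key, last_seq):
    """A relay or destination checks integrity and freshness before forwarding."""
    msg = json.dumps([pkt["header"], pkt["payload"]], sort_keys=True).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, pkt["tag"]):
        return False  # payload or header tampered in transit
    if pkt["header"]["seq"] <= last_seq:
        return False  # replayed packet
    return True
```

A real WSN design would additionally have to address key distribution and per-hop versus end-to-end verification, which is where cooperative relaying complicates the picture.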
(This article belongs to the Special Issue IoT: Security, Privacy and Best Practices)
Open AccessArticle
Dynamic Boundary of P-Set and Intelligent Acquisition for Two Types of Information Fusion
Computers 2020, 9(1), 3; https://doi.org/10.3390/computers9010003 - 16 Jan 2020
Cited by 1 | Viewed by 2256
Abstract
The development of information technology brings the challenges of data redundancy and data shortage to information fusion. Based on the dynamic boundary characteristics of the p-set, this paper analyzes the structure and generation of the p-augmented matrix and the dynamic generation of information equivalence classes, and proposes an intelligent acquisition algorithm for information equivalence classes based on matrix reasoning. In addition, this paper analyzes two types of information fusion, namely information redundancy fusion and information supplement fusion, and then examines the relationship among redundant information fusion, supplementary information fusion, and information equivalence classes. Finally, this paper presents an application of the intelligent acquisition of information equivalence classes to information retrieval. Full article
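As a rough, generic illustration of the notion of an information equivalence class (this is not the paper's p-augmented-matrix algorithm), records that agree on a chosen set of attributes can be grouped into classes; the attribute names are invented for the sketch:

```python
from collections import defaultdict

def equivalence_classes(records, attributes):
    """Group records that agree on every chosen attribute; each group is
    one information equivalence class with respect to those attributes."""
    classes = defaultdict(list)
    for rec in records:
        key = tuple(rec[a] for a in attributes)
        classes[key].append(rec)
    return list(classes.values())
```

Enlarging the attribute set refines the partition, which loosely mirrors how a dynamic boundary changes the equivalence classes that an information fusion step operates on.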
(This article belongs to the Special Issue Selected Papers from IIKII 2019 Conferences in Computers)
Open AccessReview
DVFS and Its Architectural Simulation Models for Improving Energy Efficiency of Complex Embedded Systems in Early Design Phase
Computers 2020, 9(1), 2; https://doi.org/10.3390/computers9010002 - 07 Jan 2020
Cited by 2 | Viewed by 2930
Abstract
Dealing with resource constraints is an inevitable feature of embedded systems; power and performance are the main concerns, among others. Pre-silicon analysis of power and performance in today’s complex embedded designs is a big challenge. Although RTL (Register-Transfer Level) models are more precise and reliable, system-level modeling enables the power and performance analysis of complex and dense designs in the early design phase. Virtual prototypes of systems prepared through architectural simulation provide a means of evaluating non-existing systems with more flexibility and minimal cost. Efficient interplay between power and performance is a key feature within virtual platforms. This article focuses on dynamic voltage and frequency scaling (DVFS), a well-known system-level low-power design technique, together with its more efficient implementations modeled through architectural simulation. With the advent of new computing paradigms and modern application domains with strict resource demands, DVFS and its efficient hardware-managed solutions become even more important, mainly because they can react faster to resource demands and thus reduce the induced overhead. To that end, they entail an effective collaboration between software and hardware. A case review at the end wraps up the discussed topics. Full article
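To make the DVFS idea concrete (a minimal sketch with invented frequency/voltage pairs, not any particular platform's operating points), a simple interval-based governor picks the lowest frequency level whose capacity still covers the observed utilization, and dynamic power follows the classic CMOS model P = C_eff · V² · f:

```python
# Frequency levels in MHz with their supply voltages (illustrative values).
LEVELS = [(200, 0.8), (500, 0.9), (1000, 1.1), (1500, 1.3)]

def pick_level(utilization):
    """Choose the lowest frequency whose capacity covers the demand
    observed at the fastest level (utilization in [0, 1])."""
    demand = utilization * LEVELS[-1][0]  # required MHz of compute
    for f, v in LEVELS:
        if f >= demand:
            return f, v
    return LEVELS[-1]

def dynamic_power(f_mhz, v, c_eff=1e-9):
    """Classic CMOS dynamic power model: P = C_eff * V^2 * f (watts)."""
    return c_eff * v * v * f_mhz * 1e6
```

Because power scales with V²·f and lower frequencies permit lower voltages, slowing down during idle phases yields super-linear energy savings; a hardware-managed governor simply runs this decision loop with a much shorter reaction latency than an OS-level one.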
Open AccessArticle
A Privacy Preserving Cloud-Based K-NN Search Scheme with Lightweight User Loads
Computers 2020, 9(1), 1; https://doi.org/10.3390/computers9010001 - 01 Jan 2020
Cited by 2 | Viewed by 2678
Abstract
With the growing popularity of cloud computing, it is convenient for data owners to outsource their data to a cloud server. By utilizing the massive storage and computational resources of the cloud, data owners can also provide a platform for users to make query requests. However, due to privacy concerns, sensitive data should be encrypted before outsourcing. In this work, a novel privacy-preserving K-nearest neighbor (K-NN) search scheme over an encrypted outsourced cloud dataset is proposed. The problem is to let the cloud server find the K nearest points with respect to an encrypted query on the encrypted dataset, which was outsourced by data owners, and return the search results to the querying user. Compared with other existing methods, our approach leverages the resources of the cloud more fully by shifting most of the required computational load from data owners and query users to the cloud server. In addition, there is no need for data owners to share their secret key with others. In a nutshell, in the proposed scheme, data points and user queries are encrypted attribute-wise and the entire search algorithm is performed in the encrypted domain; therefore, our approach not only preserves data privacy and query privacy but also hides the data access pattern from the cloud server. Moreover, by using a tree structure, the proposed scheme can accomplish query requests in sub-linear time, according to our performance analysis. Finally, experimental results demonstrate the practicability and the efficiency of our method. Full article
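To illustrate where the sub-linear query time comes from (a plaintext kd-tree sketch; the paper performs the analogous search in the encrypted domain, which this does not reproduce), a tree over the data points lets the server prune whole subtrees that cannot contain one of the K nearest neighbors:

```python
import heapq

def build(points, depth=0):
    """Recursively build a kd-tree, splitting on alternating coordinates."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"pt": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def knn(node, query, k, heap=None):
    """Collect the k nearest points; prune subtrees whose splitting plane
    lies farther away than the current k-th best squared distance."""
    if heap is None:
        heap = []
    if node is None:
        return heap
    d = sum((a - b) ** 2 for a, b in zip(node["pt"], query))
    heapq.heappush(heap, (-d, node["pt"]))  # max-heap on squared distance
    if len(heap) > k:
        heapq.heappop(heap)                 # drop current worst candidate
    diff = query[node["axis"]] - node["pt"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    knn(near, query, k, heap)
    if len(heap) < k or diff * diff < -heap[0][0]:
        knn(far, query, k, heap)            # far side may still hold a winner
    return heap
```

On average only a fraction of the tree is visited per query; the challenge the scheme addresses is making the distance comparisons that drive this pruning possible over attribute-wise encrypted data without leaking the access pattern.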