Special Issue "Feature Paper in Computers"

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 57618

Special Issue Editor

Prof. Dr. Stefan Gumhold
Guest Editor
Professorship for Computer Graphics and Visualization, Technische Universität Dresden, 01062 Dresden, Germany
Interests: scientific visualization; visual analysis; geometry processing; 3D acquisition; scene understanding

Special Issue Information

Dear Colleagues,

This Special Issue collects high-quality open access papers by Editorial Board Members, or by authors invited by the Editorial Office and the Editor-in-Chief, in the field of computer science.

Prof. Dr. Stefan Gumhold
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (36 papers)


Research


Article
Release Planning Patterns for the Automotive Domain
Computers 2022, 11(6), 89; https://doi.org/10.3390/computers11060089 - 30 May 2022
Viewed by 451
Abstract
Context: Today’s vehicle development focuses more and more on handling the vast amount of software and hardware inside the vehicle. The resulting planning and development of the software confronts original equipment manufacturers (OEMs) in particular with major challenges that have to be mastered. This makes effective and efficient release planning that provides the development scope in the required quality even more important. In addition, OEMs have to deal with boundary conditions set by the OEM itself as well as the standards and legislation the software and hardware have to conform to. Release planning is a key activity for successfully developing vehicles. Objective: The aim of this work is to introduce release planning patterns to simplify the release planning of software and hardware installed in a vehicle. Method: We followed a pattern identification process conducted at Dr. Ing. h. c. F. Porsche AG. Results: We introduce eight release planning patterns, which both address the fixed boundary conditions and structure the actual planning content of a release plan. The patterns address an automotive context and have been developed from a hardware and software point of view based on two examples from the case company. Conclusions: The presented patterns address recurring problems in an automotive context and are based on real-life examples. The gathered knowledge can be used for further application in practice and related domains. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Comparison of Statistical and Machine-Learning Models on Road Traffic Accident Severity Classification
Computers 2022, 11(5), 80; https://doi.org/10.3390/computers11050080 - 16 May 2022
Viewed by 651
Abstract
Portugal has the sixth highest road fatality rate among European Union members. This is a problem of many dimensions with serious consequences for people’s lives. This study analyses daily data from police and government authorities on road traffic accidents that occurred between 2016 and 2019 in a district of Portugal. This paper looks for the determinants that contribute to the existence of victims in road traffic accidents, as well as the determinants of fatalities and/or serious injuries in accidents with victims. We use logistic regression models, and the results are compared to the machine-learning model results. For the severity model, where the response variable indicates whether the traffic accident resulted in property damage only or in casualties, we used a large sample with a small imbalance. For the serious injuries model, where the response variable indicates whether or not there were victims with serious injuries and/or fatalities in the traffic accident with victims, we used a small sample with very imbalanced data. Empirical analysis supports the conclusion that, with a small sample of imbalanced data, machine-learning models generally do not perform better than statistical models; however, they perform similarly when the sample is large and has a small imbalance. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Distributed Attack Deployment Capability for Modern Automated Penetration Testing
Computers 2022, 11(3), 33; https://doi.org/10.3390/computers11030033 - 23 Feb 2022
Viewed by 1248
Abstract
Cybersecurity is an ever-changing landscape. The threats of the future are hard to predict and even harder to prepare for. This paper presents work designed to prepare for the cybersecurity landscape of tomorrow by creating a key support capability for an autonomous cybersecurity testing system. This system is designed to test and prepare critical infrastructure for what the future of cyberattacks looks like. It proposes a new type of attack framework that provides precise and granular attack control and higher perception within a set of infected infrastructure. The proposed attack framework is intelligent, supports the fetching and execution of arbitrary attacks, and has a small memory and network footprint. This framework facilitates autonomous rapid penetration testing as well as the evaluation of where detection systems and procedures are underdeveloped and require further improvement in preparation for rapid autonomous cyber-attacks. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Approximator: A Software Tool for Automatic Generation of Approximate Arithmetic Circuits
Computers 2022, 11(1), 11; https://doi.org/10.3390/computers11010011 - 08 Jan 2022
Viewed by 788
Abstract
Approximate arithmetic circuits are an attractive alternative to accurate arithmetic circuits because they have significantly reduced delay, area, and power, albeit at the cost of some loss in accuracy. By keeping errors due to approximate computation within acceptable limits, approximate arithmetic circuits can be used for various practical applications such as digital signal processing, digital filtering, low-power graphics processing, neuromorphic computing, and hardware realization of neural networks for artificial intelligence and machine learning. The degree of approximation that can be incorporated into an approximate arithmetic circuit tends to vary depending on the error resiliency of the target application. Given this, the manual coding of approximate arithmetic circuits corresponding to different degrees of approximation in a hardware description language (HDL) may be a cumbersome and time-consuming process, more so when the circuit is big. Therefore, a software tool that can automatically generate approximate arithmetic circuits of any size corresponding to a desired accuracy would not only aid the design flow but also help to improve a designer’s productivity by speeding up circuit/system development. In this context, this paper presents ‘Approximator’, a software tool developed to automatically generate approximate arithmetic circuits based on a user’s specification. Approximator can automatically generate Verilog HDL code for approximate adders and multipliers of any size based on the novel approximate arithmetic circuit architectures proposed by us. The Verilog HDL code output by Approximator can be used for synthesis in an FPGA or ASIC (standard-cell-based) design environment. Additionally, the tool can perform error and accuracy analyses of approximate arithmetic circuits. The salient features of the tool are illustrated through some example screenshots captured during different stages of tool use. Approximator has been made open-access on GitHub for the benefit of the research community, and the tool documentation is provided for the user’s reference. Full article
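The abstract does not reproduce the proposed architectures, but the general idea of an approximate adder can be sketched in software. Below is a minimal model of a lower-part-OR adder, a well-known approximate-adder scheme chosen here for illustration (it is an assumption, not necessarily one of the architectures Approximator generates), together with the kind of exhaustive error analysis such a tool reports:

```python
def approximate_add(a, b, width=8, approx_bits=3):
    """Lower-part-OR adder: the low `approx_bits` bits are OR-ed instead of
    added (no carry chain), while the upper bits are added exactly.
    With approx_bits=0 the adder is exact."""
    mask = (1 << approx_bits) - 1
    low = (a & mask) | (b & mask)                      # cheap, carry-free lower part
    high = ((a >> approx_bits) + (b >> approx_bits)) << approx_bits
    return (high | low) & ((1 << (width + 1)) - 1)     # width+1 bits for carry-out

def mean_error_distance(width=8, approx_bits=3):
    """Exhaustive error analysis over all input pairs, akin to the accuracy
    reports an approximate-circuit generator produces."""
    total, count = 0, 0
    for a in range(1 << width):
        for b in range(1 << width):
            total += abs((a + b) - approximate_add(a, b, width, approx_bits))
            count += 1
    return total / count
```

Increasing `approx_bits` removes more of the carry chain (less delay and area in hardware) at the cost of a larger mean error distance, which is the delay/accuracy trade-off the abstract describes.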
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Markerless Dog Pose Recognition in the Wild Using ResNet Deep Learning Model
Computers 2022, 11(1), 2; https://doi.org/10.3390/computers11010002 - 24 Dec 2021
Cited by 3 | Viewed by 1071
Abstract
The analysis and perception of behavior have long been crucial tasks for researchers. The goal of this paper is to address the problem of recognizing animal poses, which has numerous applications in zoology, ecology, biology, and entertainment. We propose a methodology to recognize dog poses. The methodology includes the extraction of frames for labeling from videos and deep convolutional neural network (CNN) training for pose recognition. We employ a semi-supervised deep learning model of reinforcement. During training, we used a combination of restricted labeled data and a large amount of unlabeled data. A sequential CNN is also used for feature localization and to find the canine’s motions and posture for spatio-temporal analysis. To detect the canine’s features, we employ image frames to locate the annotations and estimate the dog posture. As a result of this process, we avoid starting from scratch with the feature model and reduce the need for a large dataset. We present the results of experiments on a dataset of more than 5000 images of dogs in different poses. We demonstrate the effectiveness of the proposed methodology for images of canine animals in various poses and behaviors. The methodology is implemented as a mobile app that can be used for animal tracking. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Automated Paraphrase Quality Assessment Using Language Models and Transfer Learning
Computers 2021, 10(12), 166; https://doi.org/10.3390/computers10120166 - 06 Dec 2021
Viewed by 839
Abstract
Learning to paraphrase supports both writing ability and reading comprehension, particularly for less skilled learners. As such, educational tools that integrate automated evaluations of paraphrases can be used to provide timely feedback to enhance learner paraphrasing skills more efficiently and effectively. Paraphrase identification is a popular NLP classification task that involves establishing whether two sentences share a similar meaning. Paraphrase quality assessment is a slightly more complex task, in which pairs of sentences are evaluated in-depth across multiple dimensions. In this study, we focus on four dimensions: lexical, syntactical, semantic, and overall quality. Our study introduces and evaluates various machine learning models using handcrafted features combined with Extra Trees, Siamese neural networks using BiLSTM RNNs, and pretrained BERT-based models, together with transfer learning from a larger general paraphrase corpus, to estimate the quality of paraphrases across the four dimensions. Two datasets are considered for the tasks involving paraphrase quality: ULPC (User Language Paraphrase Corpus) containing 1998 paraphrases and a smaller dataset with 115 paraphrases based on children’s inputs. The paraphrase identification dataset used for the transfer learning task is the MSRP dataset (Microsoft Research Paraphrase Corpus) containing 5801 paraphrases. On the ULPC dataset, our BERT model improves upon the previous baseline by at least 0.1 in F1-score across the four dimensions. When using fine-tuning from ULPC for the children dataset, both the BERT and Siamese neural network models improve upon their original scores by at least 0.11 F1-score. The results of these experiments suggest that transfer learning using generic paraphrase identification datasets can be successful, while at the same time obtaining comparable results in fewer epochs. Full article
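As a rough illustration of the handcrafted lexical features that paraphrase-quality models can build on (the study's actual feature set is not listed in the abstract, so these particular scores are an assumption for illustration), one can compute token-overlap scores for a sentence pair:

```python
def lexical_overlap(sent_a, sent_b):
    """Simple token-overlap features for a candidate paraphrase pair:
    Jaccard similarity over the union of word types, and the fraction of
    the source sentence's words that reappear in the paraphrase."""
    a = set(sent_a.lower().split())
    b = set(sent_b.lower().split())
    inter = len(a & b)
    jaccard = inter / len(a | b)          # symmetric overlap
    coverage = inter / max(len(a), 1)     # how much of the source survives
    return {"jaccard": jaccard, "coverage": coverage}
```

A very high overlap suggests near-copying (low lexical quality), while a very low overlap can signal meaning drift; learned models such as the BiLSTM and BERT variants in the study capture the semantic and syntactic dimensions that such surface features miss.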
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Browsers’ Private Mode: Is It What We Were Promised?
Computers 2021, 10(12), 165; https://doi.org/10.3390/computers10120165 - 02 Dec 2021
Viewed by 1450
Abstract
Web browsers are among the most used applications on every computational device nowadays. Hence, they play a pivotal role in any forensic investigation and help determine whether nefarious or suspicious activity has occurred on a device. Our study investigates the usage of private mode and browsing artefacts within four prevalent web browsers and focuses on analyzing both the hard disk and random access memory. Forensic analysis of the target device showed that the use of private mode matched each of the web browser vendors’ claims: browsing activity, search history, cookies, and temporary files are not saved to the device’s hard disk. However, in volatile memory analysis, a majority of the artefacts within the test cases were retrieved. Hence, a malicious actor taking a similar approach could potentially retrieve sensitive information left behind on the device without the user’s consent. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Click Fraud in Digital Advertising: A Comprehensive Survey
Computers 2021, 10(12), 164; https://doi.org/10.3390/computers10120164 - 01 Dec 2021
Viewed by 1219
Abstract
Recent research has revealed an alarming prevalence of click fraud in online advertising systems. In this article, we present a comprehensive study on the usage and impact of bots in performing click fraud in the realm of digital advertising. Specifically, we first provide an in-depth investigation of different known categories of Web bots along with their malicious activities and associated threats. We then ask a series of questions to distinguish between the important behavioral characteristics of bots versus humans in conducting click fraud within modern-day ad platforms. Subsequently, we provide an overview of the current detection and threat mitigation strategies pertaining to click fraud as discussed in the literature, and we categorize the surveyed techniques based on which specific actors within a digital advertising system are most likely to deploy them. We also offer insights into some of the best-known real-world click bots and their respective ad fraud campaigns observed to date. To the best of our knowledge, this paper is the most comprehensive research study of its kind, as it examines the problem of click fraud from both a theoretical and a practical perspective. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Requirements Elicitation for an Assistance System for Complexity Management in Product Development of SMEs during COVID-19: A Case Study
Computers 2021, 10(11), 149; https://doi.org/10.3390/computers10110149 - 10 Nov 2021
Viewed by 971
Abstract
Technological progress, upcoming cyber-physical systems, and limited resources confront small and medium-sized enterprises (SMEs) with the challenge of complexity management in product development projects spanning the entire product lifecycle. SMEs require a solution for documenting and analyzing the functional relationships between multiple domains such as products, software, and processes. The German research project FuPEP “Funktionsorientiertes Komplexitätsmanagement in allen Phasen der Produktentstehung” (function-oriented complexity management in all phases of product development) aims to address this issue by developing an assistance system that supports product developers by visualizing functional relationships. This paper presents the methodology and results of the assistance system’s requirements elicitation with two SMEs. Having conducted the elicitation during a global pandemic, we discuss the application of specific techniques in light of COVID-19. We model problems and their effects regarding complexity management in product development in a system dynamics model. The most important requirements and use cases elicited are presented, and the requirements elicitation methodology and results are discussed. Additionally, we present a multilayer software architecture design for the assistance system. Our case study suggests a relationship between fear of a missing project focus among project participants and the restriction of requirements elicitation techniques to those possible via web conferencing tools. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Estimating Interpersonal Distance and Crowd Density with a Single-Edge Camera
Computers 2021, 10(11), 143; https://doi.org/10.3390/computers10110143 - 05 Nov 2021
Cited by 2 | Viewed by 601
Abstract
For public safety and physical security, more than a billion closed-circuit television (CCTV) cameras are currently in use around the world. The proliferation of artificial intelligence (AI) and machine/deep learning (M/DL) technologies has enabled significant applications, including crowd surveillance. State-of-the-art distance and area estimation algorithms either need multiple cameras or a reference object as a ground truth. It is an open question how to obtain such an estimate using a single camera without a scale reference. In this paper, we propose a novel solution called E-SEC, which estimates the interpersonal distance between a pair of dynamic human objects, the area occupied by a dynamic crowd, and crowd density using a single edge camera. The E-SEC framework comprises edge CCTV cameras responsible for capturing a crowd on video frames, leveraging a customized YOLOv3 model for human detection. E-SEC contributes an interpersonal distance estimation algorithm vital for monitoring the social distancing of a crowd, and an area estimation algorithm for dynamically determining the area occupied by a crowd with changing size and position. A unified output module generates the crowd size, interpersonal distances, social distancing violations, area, and density for every frame. Experimental results validate the accuracy and efficiency of E-SEC on a range of different video datasets. Full article
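One way to estimate metric distance from a single camera without an external scale reference is to treat the average human height as an implicit reference encoded in each detection box. The sketch below illustrates that idea on bounding boxes such as those a YOLOv3 detector emits; it is an illustrative reconstruction under stated assumptions, not the authors' E-SEC algorithm:

```python
def interpersonal_distance(box_a, box_b, person_height_m=1.7):
    """Estimate the metric distance between two detected people from their
    bounding boxes (x, y, w, h in pixels). The mean box height divided by an
    assumed average person height (1.7 m) gives a pixels-per-metre scale."""
    def centre(box):
        x, y, w, h = box
        return (x + w / 2, y + h / 2)
    ppm = ((box_a[3] + box_b[3]) / 2) / person_height_m   # pixels per metre
    (ax, ay), (bx, by) = centre(box_a), centre(box_b)
    pixel_dist = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    return pixel_dist / ppm

def violations(boxes, threshold_m=2.0):
    """Count pairs of people closer than the social-distancing threshold,
    as a per-frame output module might report."""
    count = 0
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if interpersonal_distance(boxes[i], boxes[j]) < threshold_m:
                count += 1
    return count
```

The single-camera trick works only while the assumed average height roughly holds and people stand upright; perspective distortion and seated or occluded subjects would need the fuller treatment the paper provides.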
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Employee Attrition Prediction Using Deep Neural Networks
Computers 2021, 10(11), 141; https://doi.org/10.3390/computers10110141 - 03 Nov 2021
Viewed by 955
Abstract
Decision-making plays an essential role in management and may represent the most important component of the planning process. Employee attrition is a well-known problem that requires the right decisions from the administration to retain highly qualified employees. Interestingly, artificial intelligence is utilized extensively as an efficient tool for predicting such a problem. The proposed work utilizes a deep learning technique along with some preprocessing steps to improve the prediction of employee attrition. Several factors lead to employee attrition. Such factors are analyzed to reveal their intercorrelation and to identify the dominant ones. Our work was tested using the imbalanced IBM analytics dataset, which contains 35 features for 1470 employees. To get realistic results, we derived a balanced version from the original one. Finally, cross-validation is implemented to evaluate our work precisely. Extensive experiments have been conducted to show the practical value of our work. The prediction accuracy using the original dataset is about 91%, whereas it is about 94% using the balanced synthetic dataset. Full article
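The abstract mentions deriving a balanced version of the imbalanced dataset. One common way to do this is random minority-class oversampling, sketched below; whether the authors used this exact procedure (rather than, say, synthetic sample generation) is an assumption:

```python
import random

def balance_by_oversampling(rows, label_key, seed=0):
    """Random minority-class oversampling: duplicate randomly chosen minority
    examples until every class matches the majority-class count. `rows` is a
    list of dicts; `label_key` names the class field."""
    rng = random.Random(seed)            # fixed seed for reproducibility
    by_class = {}
    for row in rows:
        by_class.setdefault(row[label_key], []).append(row)
    target = max(len(members) for members in by_class.values())
    balanced = []
    for members in by_class.values():
        balanced.extend(members)
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    return balanced
```

Balancing before training keeps a classifier from scoring well simply by predicting the majority class, which matters for attrition data where leavers are typically rare.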
(This article belongs to the Special Issue Feature Paper in Computers)

Article
A Novel Multi-Modality Image Simultaneous Denoising and Fusion Method Based on Sparse Representation
Computers 2021, 10(10), 129; https://doi.org/10.3390/computers10100129 - 13 Oct 2021
Cited by 1 | Viewed by 590
Abstract
Multi-modality image fusion, applied to improve image quality, has drawn great attention from researchers in recent years. However, noise is inevitably present in images captured by different types of imaging sensors, and it can seriously affect the performance of multi-modality image fusion. In the conventional approach to noisy image fusion, the source images are denoised first, and then the denoised images are fused. However, image denoising can decrease the sharpness of the source images and thus degrade fusion performance. Additionally, denoising and fusion are processed in separate stages, which increases the computation cost. To fuse noisy multi-modality image pairs accurately and efficiently, a multi-modality image simultaneous fusion and denoising method is proposed. In the proposed method, noisy source images are decomposed into cartoon and texture components. Cartoon-texture decomposition not only decomposes source images into detail and structure components for different image fusion schemes, but also isolates image noise in the texture components. A Gaussian scale mixture (GSM)-based sparse representation model is presented for the denoising and fusion of the texture components. A spatial-domain fusion rule is applied to the cartoon components. Comparative experimental results confirm that the proposed simultaneous image denoising and fusion method is superior to state-of-the-art methods in terms of visual and quantitative evaluations. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Classification of Contaminated Insulators Using k-Nearest Neighbors Based on Computer Vision
Computers 2021, 10(9), 112; https://doi.org/10.3390/computers10090112 - 09 Sep 2021
Cited by 3 | Viewed by 741
Abstract
Contamination on insulators may increase the surface conductivity of the insulator and, as a consequence, electrical discharges occur more frequently, which can lead to interruptions in the power supply. To maintain the reliability of an electrical power distribution system, components that have lost their insulating properties must be replaced. Identifying the components that need maintenance is a difficult task, as there are several levels of contamination that are hard to notice during inspections. To improve the quality of inspections, this paper proposes using k-nearest neighbors (k-NN) to classify the levels of insulator contamination based on images of insulators at various levels of contamination simulated in the laboratory. Computer vision features such as the mean, variance, asymmetry, kurtosis, energy, and entropy are used for training the k-NN. To assess the robustness of the proposed approach, a statistical analysis and a comparative assessment against well-established algorithms such as decision tree, ensemble subspace, and support vector machine models are presented. The k-NN achieved up to 85.17% accuracy using the k-fold cross-validation method, with an average accuracy higher than 82% for the multi-class classification of insulator contamination, outperforming the compared models. Full article
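The feature set named above (mean, variance, asymmetry/skewness, kurtosis, energy, entropy) and a k-NN vote can be sketched as follows. The exact formulas, histogram binning, and distance metric used by the authors are assumptions made here for illustration:

```python
import math
from collections import Counter

def image_features(pixels):
    """Statistical features of a grayscale image given as a flat list of
    0-255 intensities: mean, variance, skewness (asymmetry), kurtosis, and
    histogram energy and entropy."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var) or 1.0                      # guard flat images
    skew = sum(((p - mean) / std) ** 3 for p in pixels) / n
    kurt = sum(((p - mean) / std) ** 4 for p in pixels) / n
    probs = [c / n for c in Counter(pixels).values()]
    energy = sum(p * p for p in probs)
    entropy = -sum(p * math.log2(p) for p in probs)
    return [mean, var, skew, kurt, energy, entropy]

def knn_predict(train, query, k=3):
    """Plain Euclidean k-NN majority vote.
    train: list of (feature_vector, label) pairs."""
    neighbours = sorted((math.dist(f, query), label) for f, label in train)
    votes = Counter(label for _, label in neighbours[:k])
    return votes.most_common(1)[0][0]
```

In the paper's setting, `train` would hold feature vectors computed from laboratory images at known contamination levels, and `knn_predict` would assign a contamination class to a new inspection image.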
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Assessment of Gradient Descent Trained Rule-Fact Network Expert System Multi-Path Training Technique Performance
Computers 2021, 10(8), 103; https://doi.org/10.3390/computers10080103 - 20 Aug 2021
Cited by 2 | Viewed by 677
Abstract
The use of gradient descent training to optimize the performance of a rule-fact network expert system by updating the network’s rule weightings was previously demonstrated. Along with this, four training techniques were proposed: two used a single path for optimization and two used multiple paths. The performance of the single-path techniques was previously evaluated under a variety of experimental conditions. The multiple-path techniques, when compared, outperformed the single-path ones; however, these techniques were not evaluated with different network types, training velocities, or training levels. This paper considers the multi-path techniques under a similar variety of experimental conditions to the prior assessment of the single-path techniques and demonstrates their effectiveness under multiple operating conditions. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Article
A Comparative Analysis of Semi-Supervised Learning in Detecting Burst Header Packet Flooding Attack in Optical Burst Switching Network
Computers 2021, 10(8), 95; https://doi.org/10.3390/computers10080095 - 04 Aug 2021
Viewed by 848
Abstract
This paper presents a comparative analysis of four semi-supervised machine learning (SSML) algorithms for detecting malicious nodes in an optical burst switching (OBS) network. The SSML approaches include a modified version of K-means clustering, a Gaussian mixture model (GMM), a classical self-training (ST) model, and a modified self-training (MST) model. All four approaches work in a semi-supervised fashion, while the MST uses an ensemble of classifiers for the final decision making. SSML approaches are particularly useful when only a limited amount of labeled data is available for training and validation of the classification model. Manual labeling of a large dataset is complex and time consuming, and even more so for OBS network data. SSML can leverage unlabeled data to make a better prediction than using a small set of labeled data alone. We evaluated the performance of the four SSML approaches for two-class (Behaving, Not-behaving), three-class (Behaving, Not-behaving, and Potentially Not-behaving), and four-class (No-Block, Block, NB-Wait, and NB-No-Block) classifications using precision, recall, and F1 score. In the two-class classification, the K-means- and GMM-based approaches performed better than the others. In the three-class classification, the K-means and classical ST approaches performed better than the others. In the four-class classification, the MST showed the best performance. Finally, the SSML approaches were compared with two supervised learning (SL) based approaches. The comparison showed that the SSML-based approaches outperform the SL approaches when only a small labeled dataset is available to train the classification models. Full article
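The classical self-training (ST) loop mentioned above can be sketched as follows, with a simple one-dimensional nearest-neighbour classifier standing in for the paper's base models (an illustrative assumption; the actual OBS node features are multi-dimensional):

```python
from collections import Counter

def self_train(labeled, unlabeled, k=3, confidence=0.66, rounds=5):
    """Classical self-training: fit on the labeled set, pseudo-label the
    unlabeled points the model is confident about, fold them back into the
    labeled set, and repeat.
    labeled: list of (x, label); unlabeled: list of x (floats)."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        newly = []
        for x in pool:
            # k-NN vote among current labeled points
            neighbours = sorted((abs(x - xi), lab) for xi, lab in labeled)[:k]
            votes = Counter(lab for _, lab in neighbours)
            label, n = votes.most_common(1)[0]
            if n / k >= confidence:                  # confident pseudo-label
                newly.append((x, label))
        if not newly:
            break
        labeled.extend(newly)
        accepted = {x for x, _ in newly}
        pool = [x for x in pool if x not in accepted]
    return labeled
```

The confidence threshold is what keeps self-training from amplifying its own mistakes; the paper's MST variant additionally replaces the single base model with an ensemble vote.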
(This article belongs to the Special Issue Feature Paper in Computers)

Article
Towards Developing a Framework to Analyze the Qualities of the University Websites
Computers 2021, 10(5), 57; https://doi.org/10.3390/computers10050057 - 27 Apr 2021
Cited by 1 | Viewed by 1080
Abstract
The website of a university is considered a virtual gateway that provides primary resources to its stakeholders. It can play an indispensable role in disseminating information about a university to a variety of audiences at once. Thus, the quality of an academic website requires special attention to fulfil users’ needs. This paper presents a multi-method approach to assessing the quality of academic websites, in the context of universities in Bangladesh. We developed an automated web-based tool that can evaluate any academic website based on three criteria: content of information, loading time and overall performance. Content of information comprises many sub-criteria, such as university vision and mission, faculty information, notice board and so on. The tool can also perform comparative analysis among several academic websites and generate a ranked list of them. To the best of our knowledge, this is the first initiative to develop an automated tool for assessing academic website quality in the context of Bangladesh. Besides this, we conducted a questionnaire-based statistical evaluation among several universities to obtain the respective users’ feedback about their academic websites. A ranked list generated from the survey results is very similar to the ranked list obtained from university ranking systems, which validates the effectiveness of our tool in assessing academic websites.

Article
Improving Business Performance by Employing Virtualization Technology: A Case Study in the Financial Sector
Computers 2021, 10(4), 52; https://doi.org/10.3390/computers10040052 - 16 Apr 2021
Cited by 3 | Viewed by 1157
Abstract
The financial crisis of the last decade has left many financial institutions with limited personnel and equipment resources. Thus, the IT departments of these institutions are being asked to explore novel approaches to resolve these constraints in a cost-effective and efficient manner. The goal of this paper is to measure the impact of modern enabling technologies, such as virtualization, in the process of replacing legacy infrastructures. This paper proposes an IT services upgrade plan for an organization using modern technologies. For this purpose, research took place in an operating financial institution that required a significant upgrade of both its service level and its hardware infrastructure. A virtualization implementation and deployment assessment for the entire infrastructure was conducted, and the resulting consolidated data are presented and analysed. The paper concludes with a five-year financial evaluation of the proposed approach with respect to the projection of expenditures, the return on investment and profitability.
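The five-year financial evaluation mentioned above rests on standard payback and return-on-investment arithmetic, which can be sketched as follows. All figures here are hypothetical placeholders for illustration, not the values reported in the paper.

```python
def roi(total_benefit, total_cost):
    """Return on investment, expressed as a fraction of cost."""
    return (total_benefit - total_cost) / total_cost

def payback_year(initial_cost, yearly_savings):
    """First year in which cumulative savings cover the initial cost
    (None if never within the horizon)."""
    cumulative = 0.0
    for year, saving in enumerate(yearly_savings, start=1):
        cumulative += saving
        if cumulative >= initial_cost:
            return year
    return None

savings = [30_000] * 5   # assumed annual savings over a 5-year horizon
capex = 80_000           # assumed up-front virtualization cost
# Cumulative savings reach 90,000 in year 3, covering the 80,000 outlay.
```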

Article
Online Judging Platform Utilizing Dynamic Plagiarism Detection Facilities
Computers 2021, 10(4), 47; https://doi.org/10.3390/computers10040047 - 08 Apr 2021
Cited by 1 | Viewed by 1289
Abstract
A programming contest generally involves the host presenting a set of logical and mathematical problems to the contestants, who are required to write computer programs capable of solving them. An online judge system automates the judging of the programs submitted by the users: online judges are systems designed for the reliable evaluation of submitted source code. Traditional online judging platforms are not ideally suited for programming labs, as they do not support partial scoring or efficient detection of plagiarized code. Considering this, we present an online judging framework capable of automatically scoring code by efficiently detecting plagiarized content and the level of accuracy of the code. Our system detects plagiarism by computing fingerprints of programs and comparing those fingerprints instead of whole files. We used winnowing to select fingerprints among the k-gram hash values of a source file, generated by the Rabin–Karp algorithm. The proposed system is compared with existing online judging platforms to show its superiority in terms of time efficiency, correctness, and feature availability. In addition, we evaluated our system on large data sets and compared its run time with MOSS, a widely used plagiarism detection tool.
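The fingerprinting scheme described above, Rabin–Karp k-gram hashing followed by winnowing, can be sketched in a few lines. The parameter values (k, window size, hash base/modulus) and helper names are illustrative assumptions, not the paper's actual settings.

```python
def kgram_hashes(s, k, base=257, mod=1_000_000_007):
    """Rolling Rabin–Karp hashes of every k-gram in s."""
    if len(s) < k:
        return []
    h = 0
    for ch in s[:k]:
        h = (h * base + ord(ch)) % mod
    hashes = [h]
    high = pow(base, k - 1, mod)
    for i in range(k, len(s)):
        # Slide the window: drop the oldest character, append the new one.
        h = ((h - ord(s[i - k]) * high) * base + ord(s[i])) % mod
        hashes.append(h)
    return hashes

def winnow(hashes, w):
    """Keep the minimum hash of each window of w consecutive hashes
    (with its position), skipping repeats: the document's fingerprints."""
    fps = []
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        m = min(window)
        pos = i + window.index(m)
        if not fps or fps[-1][1] != pos:
            fps.append((m, pos))
    return fps

def jaccard(fp_a, fp_b):
    """Similarity of two fingerprint sets, used to score plagiarism."""
    a = {h for h, _ in fp_a}
    b = {h for h, _ in fp_b}
    return len(a & b) / len(a | b)
```

Because two near-identical submissions share most of their fingerprints, only these small fingerprint sets need to be compared, rather than the whole files.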

Article
Simulation and Analysis of Self-Replicating Robot Decision-Making Systems
Computers 2021, 10(1), 9; https://doi.org/10.3390/computers10010009 - 06 Jan 2021
Viewed by 1463
Abstract
Self-replicating robot systems (SRRSs) are a new prospective paradigm for robotic exploration. They can potentially lower mission costs and enhance mission capabilities by allowing some of the materials needed for robotic system construction to be collected in situ and used for robot fabrication. The use of a self-replicating robot system can also lower risk aversion, due to the ability to replenish lost or damaged robots, and may increase the likelihood of mission success. This paper proposes and compares system configurations of an SRRS. A simulation system was designed and is used to model how an SRRS performs based on its system configuration, attributes, and operating environment. Experiments were conducted using this simulation and the results are presented.

Article
EEG and Deep Learning Based Brain Cognitive Function Classification
Computers 2020, 9(4), 104; https://doi.org/10.3390/computers9040104 - 21 Dec 2020
Cited by 9 | Viewed by 1804
Abstract
Electroencephalogram signals are used to assess neurodegenerative diseases and develop sophisticated brain-machine interfaces for rehabilitation and gaming. Most applications use only motor imagery or evoked potentials. Here, a deep learning network based on a sensory motor paradigm (auditory, olfactory, movement, and motor-imagery) that employs a subject-agnostic Bidirectional Long Short-Term Memory (BLSTM) Network is developed to assess cognitive functions and identify their relationship with brain signal features, which are hypothesized to consistently indicate cognitive decline. Testing occurred with healthy subjects aged 20–40, 40–60, and >60, and mildly cognitively impaired (MCI) subjects. Auditory and olfactory stimuli were presented to the subjects, and the subjects imagined and conducted movement of each arm, during which Electroencephalogram (EEG)/Electromyogram (EMG) signals were recorded. A deep BLSTM Neural Network is trained with Principal Component features from evoked signals and assesses their corresponding pathways. Wavelet analysis is used to decompose evoked signals and calculate the band power of the component frequency bands. This deep learning system performs better than conventional deep neural networks in detecting MCI. Most features studied peaked in the 40–60 age range and were lower for the MCI group than for any other group tested. Detection accuracy of left-hand motor imagery signals best indicated cognitive aging (p = 0.0012); here, the mean classification accuracy per age group declined from 91.93% to 81.64%, and is 69.53% for MCI subjects. Motor-imagery-evoked band power, particularly in gamma bands, also strongly indicated cognitive aging (p = 0.007). The classification accuracy of the potentials most effectively distinguished cognitive aging from MCI (p < 0.05), followed by gamma-band power.

Article
FuseVis: Interpreting Neural Networks for Image Fusion Using Per-Pixel Saliency Visualization
Computers 2020, 9(4), 98; https://doi.org/10.3390/computers9040098 - 10 Dec 2020
Cited by 2 | Viewed by 2281
Abstract
Image fusion merges two or more images to construct a single, more informative fused image. Recently, unsupervised learning-based convolutional neural networks (CNNs) have been used for different types of image-fusion tasks, such as medical image fusion, infrared-visible image fusion for autonomous driving, as well as multi-focus and multi-exposure image fusion for satellite imagery. However, it is challenging to analyze the reliability of these CNNs for image-fusion tasks since no ground truth is available. This has led to a wide variety of model architectures and optimization functions yielding quite different fusion results. Additionally, due to the highly opaque nature of such neural networks, it is difficult to explain the internal mechanics behind their fusion results. To overcome these challenges, we present a novel real-time visualization tool, named FuseVis, with which the end user can compute per-pixel saliency maps that examine the influence of the input image pixels on each pixel of the fused image. We trained several image-fusion CNNs on medical image pairs and then used our FuseVis tool to perform case studies on a specific clinical application by interpreting the saliency maps from each of the fusion methods. We specifically visualized the relative influence of each input image on the predictions of the fused image and showed that some of the evaluated image-fusion methods are better suited for the specific clinical application. To the best of our knowledge, there is currently no other approach for the visual analysis of neural networks for image fusion. This work therefore opens a new research direction for improving the interpretability of deep fusion networks. The FuseVis tool can also be adapted to other deep neural network-based image processing applications to make them interpretable.

Article
kNN Prototyping Schemes for Embedded Human Activity Recognition with Online Learning
Computers 2020, 9(4), 96; https://doi.org/10.3390/computers9040096 - 03 Dec 2020
Cited by 4 | Viewed by 1093
Abstract
The kNN machine learning method is widely used as a classifier in Human Activity Recognition (HAR) systems. Although the kNN algorithm works similarly in both online and offline modes, the use of all training instances is much more critical online than offline due to time and memory restrictions in the online mode. Some methods propose decreasing the high computational costs of kNN by focusing, e.g., on approximate kNN solutions such as those relying on Locality-Sensitive Hashing (LSH). However, embedded kNN implementations also need to address the target device’s memory constraints, especially as online classification needs to cope with those constraints to be practical. This paper discusses online approaches to reduce the number of training instances stored in the kNN search space. To address practical implementations of HAR systems using kNN, this paper presents simple, energy- and computationally efficient, real-time-feasible schemes to maintain at runtime a maximum number of training instances stored by kNN. The proposed schemes include policies for substituting training instances, keeping the search space within a maximum size. Experiments on HAR datasets show the efficiency of our best schemes.
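A bounded kNN store with a substitution policy, as discussed above, can be sketched as follows. The class name, the random same-label replacement policy, and the toy data are illustrative assumptions; the paper evaluates several substitution policies, not necessarily this one.

```python
import random
from collections import Counter

class BoundedKNN:
    """Online kNN with a capped training set: once the capacity is
    reached, an incoming instance replaces a randomly chosen stored
    instance of the same label (falling back to any instance)."""

    def __init__(self, k=3, capacity=100):
        self.k, self.capacity = k, capacity
        self.X, self.y = [], []

    def add(self, x, label):
        if len(self.X) < self.capacity:
            self.X.append(x)
            self.y.append(label)
        else:
            same = [i for i, l in enumerate(self.y) if l == label]
            i = random.choice(same if same else list(range(len(self.y))))
            self.X[i], self.y[i] = x, label   # substitute in place

    def predict(self, x):
        # Majority vote among the k nearest stored instances.
        order = sorted(range(len(self.X)),
                       key=lambda i: sum((a - b) ** 2
                                         for a, b in zip(self.X[i], x)))
        votes = Counter(self.y[i] for i in order[:self.k])
        return votes.most_common(1)[0][0]
```

Because the store never grows beyond `capacity`, both the memory footprint and the per-query search cost stay bounded, which is the property an embedded HAR device needs.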

Article
Predicting Employee Attrition Using Machine Learning Techniques
Computers 2020, 9(4), 86; https://doi.org/10.3390/computers9040086 - 03 Nov 2020
Cited by 17 | Viewed by 5892
Abstract
There are several areas in which organisations can adopt technologies that will support decision-making: artificial intelligence is one of the most innovative technologies widely used to assist organisations in business strategies, organisational aspects and people management. In recent years, attention has increasingly been paid to human resources (HR), since worker quality and skills represent a growth factor and a real competitive advantage for companies. After having been introduced to sales and marketing departments, artificial intelligence is also starting to guide employee-related decisions within HR management. The purpose is to support decisions based not on subjective aspects but on objective data analysis. The goal of this work is to analyse how objective factors influence employee attrition, in order to identify the main causes that contribute to a worker’s decision to leave a company and to predict whether a particular employee will leave. The models for predicting employee attrition are trained and tested on a real dataset provided by IBM analytics, which includes 35 features and about 1500 samples. Results are expressed in terms of classical metrics, and the algorithm that produced the best results for the available dataset is the Gaussian Naïve Bayes classifier. It achieves the best recall (0.54), which measures a classifier’s ability to find all the positive instances, and an overall false negative rate equal to 4.5% of the total observations.
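The two headline metrics in the abstract, recall and the "overall" false negative rate (false negatives as a share of all observations), can be computed as below. This is a generic sketch of the standard definitions, with made-up toy labels, not the paper's data.

```python
def recall(y_true, y_pred, positive=1):
    """Fraction of actual positives that the classifier found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

def overall_fn_rate(y_true, y_pred, positive=1):
    """False negatives as a share of ALL observations, matching the
    abstract's 'overall false negative rate' phrasing."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return fn / len(y_true)

# Toy labels: 4 leavers (1) among 10 employees, 2 of them missed.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0, 0, 0]
```

Note the distinction: recall is normalized by the number of positives, while the overall false negative rate is normalized by the total sample size, which is why the abstract can report both 0.54 and 4.5%.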

Article
Fog Computing for Realizing Smart Neighborhoods in Smart Grids
Computers 2020, 9(3), 76; https://doi.org/10.3390/computers9030076 - 21 Sep 2020
Cited by 6 | Viewed by 1886
Abstract
Cloud Computing provides on-demand computing services like software, networking, storage, analytics, and intelligence over the Internet (“the cloud”). However, it faces challenges due to the explosion of Internet of Things (IoT) devices and the volume, variety, veracity and velocity of the data they generate; there is a need for ultra-low-latency, reliable service along with security and privacy. Fog Computing is a promising solution to overcome these challenges. The originality, scope and novelty of this paper lie in the definition and formulation of the problem of smart neighborhoods in the context of smart grids. This is achieved through an extensive literature study, first on Fog Computing and its foundational technologies and applications, including a review of Fog Computing research in various application domains. Thereafter, we introduce smart grid and community microgrid concepts and their challenges to give an in-depth background to the problem, and then formalize it. The smart grid, which ensures a reliable, secure, and cost-effective power supply to smart neighborhoods, effectively needs a Fog Computing architecture to achieve its purpose. This paper also identifies, without rigorous analysis, potential solutions to the problem of smart neighborhoods. The challenges in integrating Fog Computing and smart grids are also discussed.

Article
Unmanned Aerial Vehicle Control through Domain-Based Automatic Speech Recognition
Computers 2020, 9(3), 75; https://doi.org/10.3390/computers9030075 - 19 Sep 2020
Cited by 6 | Viewed by 1941
Abstract
Currently, unmanned aerial vehicles such as drones are becoming a part of our lives, extending into many areas of society, including the industrialized world. A common alternative for controlling the movements and actions of a drone is through wireless tactile interfaces, for which different remote control devices are used. However, control through such devices is not a natural, human-like communication interface, and it can be difficult for some users to master. In this research, we experimented with a domain-based speech recognition architecture to effectively control an unmanned aerial vehicle such as a drone, so that control is performed in a more natural, human-like way of communicating instructions. Moreover, we implemented an algorithm for command interpretation in both Spanish and English, as well as for controlling the movements of the drone in a simulated domestic environment. We conducted experiments involving participants giving voice commands to the drone in both languages in order to compare the effectiveness of each, considering the mother tongue of the participants. Additionally, different levels of distortion were applied to the voice commands to test the proposed approach on noisy input signals. The results show that the unmanned aerial vehicle was capable of interpreting user voice instructions, and that speech-to-action recognition improved for both languages with phoneme matching compared to only using the cloud-based algorithm without domain-based instructions. Using raw audio inputs, the cloud-based approach achieves 74.81% and 97.04% accuracy for English and Spanish instructions, respectively; with our phoneme matching approach, the results improve to 93.33% accuracy for English and 100.00% accuracy for Spanish.
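The domain-based matching idea, snapping a noisy transcription onto the closest command in a small, known vocabulary, can be sketched with edit distance. This is a character-level stand-in for illustration: the paper matches at the phoneme level, and the command list and function names here are assumptions.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical domain vocabulary for a drone controller.
COMMANDS = ["take off", "land", "move forward", "turn left", "turn right"]

def match_command(heard, commands=COMMANDS):
    """Map a (possibly noisy) transcription to the closest known command."""
    return min(commands, key=lambda c: levenshtein(heard.lower(), c))
```

Restricting recognition to a closed command set is what lets a distorted transcription like "lend" still resolve to a valid action.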

Article
Toward a Sustainable Cybersecurity Ecosystem
Computers 2020, 9(3), 74; https://doi.org/10.3390/computers9030074 - 17 Sep 2020
Cited by 10 | Viewed by 3789
Abstract
Cybersecurity issues constitute a key concern of today’s technology-based economies. Cybersecurity has become a core need for providing a sustainable and safe society to online users in cyberspace. Considering the rapid increase of technological implementations, adopting security countermeasures, whether direct or indirect, and protecting systems from cyberthreats has turned into a global necessity. Identifying, characterizing, and classifying such threats and their sources is required for a sustainable cyber-ecosystem. This paper focuses on the cybersecurity of smart grids and emerging trends such as the use of blockchain in the Internet of Things (IoT). The cybersecurity of emerging technologies such as smart cities is also discussed, as are associated solutions based on artificial intelligence and machine learning frameworks to prevent cyber-risks. Our review will serve as a reference for policy-makers from industry and government and for the cybersecurity research community.

Article
Privacy-Preserving Passive DNS
Computers 2020, 9(3), 64; https://doi.org/10.3390/computers9030064 - 12 Aug 2020
Cited by 8 | Viewed by 2842
Abstract
The Domain Name System (DNS) was created to resolve easily remembered names to the IP addresses of web servers. When it was initially created, security was not a major concern; nowadays, this lack of inherent security and trust has exposed the global DNS infrastructure to malicious actors. The passive DNS data collection process creates a database containing various DNS data elements, some of which are personal and need to be protected to preserve the privacy of end users. To this end, we propose the use of distributed ledger technology. We use Hyperledger Fabric to create a permissioned blockchain, which only authorized entities can access. The proposed solution supports queries for storing and retrieving data from the blockchain ledger, allowing the use of the passive DNS database for further analysis, e.g., for the identification of malicious domain names. Additionally, it effectively protects DNS personal data from unauthorized entities, including administrators who could act as malicious insiders, and allows only the data owners to perform queries over these data. We evaluated our proposed solution with a proof-of-concept experimental setup that passively collects DNS data from a network and then uses distributed ledger technology to store the data in an immutable ledger, thus providing a full historical overview of all the records.

Article
Possibilities of Electromagnetic Penetration of Displays of Multifunction Devices
Computers 2020, 9(3), 62; https://doi.org/10.3390/computers9030062 - 08 Aug 2020
Cited by 1 | Viewed by 2559
Abstract
Protection of information against electromagnetic penetration is most often considered in terms of the possibility of obtaining data contained in printed documents or displayed on screen monitors. However, many printing devices are equipped with screens based on LED technology or liquid crystal displays. The most frequently displayed information comprises options for selecting the parameters of the printed document and technical settings of the device (e.g., screen activity time). On larger displays, more detailed information appears, which may contain data that are not always irrelevant to third parties, such as the names of printed documents (or documents registered and available on internal media), service password access, user names or regular printer user activity. The printer display can therefore be treated as a source of revealing emissions, like a typical screen monitor, and the emissions correlated with the displayed data may allow the abovementioned information to be obtained. The article analyses various types of computer printer displays. The test results demonstrating the existing threat are presented in the form of reconstructed images that show the possibility of reading the text data contained in them.

Article
Increasing Innovative Working Behaviour of Information Technology Employees in Vietnam by Knowledge Management Approach
Computers 2020, 9(3), 61; https://doi.org/10.3390/computers9030061 - 01 Aug 2020
Cited by 6 | Viewed by 2758
Abstract
Today, Knowledge Management (KM) is becoming a popular approach for improving organizational innovation, but whether encouraging knowledge sharing leads to better innovative working behaviour of employees is still an open question. This study aims to identify the factors of KM affecting the innovative working behaviour of Information Technology (IT) employees in Vietnam. The research model involves three elements, attitude, subjective norm and perceived behavioural control, affecting knowledge sharing and, in turn, innovative working behaviour. A quantitative method was used: a survey was conducted with 202 respondents via a five-point-scale questionnaire. The analysis results show that knowledge sharing has a positive impact on the innovative working behaviour of IT employees in Vietnam. Besides, attitude and perceived behavioural control are confirmed to have a strong positive effect on knowledge sharing, while the subjective norm has no significant impact on knowledge sharing. Based on these results, recommendations are made to promote knowledge sharing and the innovative work behaviour of IT employees in Vietnam.

Article
Predicting LoRaWAN Behavior: How Machine Learning Can Help
Computers 2020, 9(3), 60; https://doi.org/10.3390/computers9030060 - 31 Jul 2020
Cited by 3 | Viewed by 1788
Abstract
Large-scale deployments of Internet of Things (IoT) networks are becoming reality. From a technology perspective, a lot of information related to device parameters, channel states, network and application data is stored in databases and can be used for extensive analysis to improve the functionality of IoT systems in terms of network performance and user services. LoRaWAN (Long Range Wide Area Network) is one of the emerging IoT technologies, with a simple protocol based on LoRa modulation. In this work, we discuss whether and how machine learning approaches can be used to improve network performance. To this aim, we describe a methodology to process LoRaWAN packets and apply a machine learning pipeline to (i) perform device profiling and (ii) predict the inter-arrival times of IoT packets. The latter analysis is closely related to channel and network usage and can be leveraged in the future for system performance enhancements. Our analysis mainly focuses on the use of k-means, Long Short-Term Memory Neural Networks and Decision Trees. We test these approaches on a real large-scale LoRaWAN network whose overall captured traffic is stored in a proprietary database. Our study shows how profiling techniques enable a machine learning prediction algorithm even when training is not possible because of the high error rates perceived by some devices. In this challenging case, the prediction of the inter-arrival time of packets has an error of about 3.5% for 77% of the real sequence cases.
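The inter-arrival prediction task described above can be illustrated with a deliberately naive moving-average baseline. This sketch only shows how inter-arrival sequences and percentage errors are derived from packet timestamps; the paper's actual predictors are LSTMs and decision trees, and the timestamps below are hypothetical.

```python
def inter_arrivals(timestamps):
    """Inter-arrival times from a sorted sequence of packet timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def predict_next(deltas, window=3):
    """Naive moving-average predictor for the next inter-arrival time."""
    recent = deltas[-window:]
    return sum(recent) / len(recent)

def pct_error(predicted, actual):
    """Prediction error as a percentage of the true inter-arrival time."""
    return abs(predicted - actual) / actual * 100

# Hypothetical device reporting roughly every 10 minutes (seconds).
ts = [0, 600, 1205, 1798, 2402]
deltas = inter_arrivals(ts)        # [600, 605, 593, 604]
pred = predict_next(deltas[:-1])   # predict the last gap from the first three
```

For a periodic-reporting device like this toy one, even the naive baseline lands within a few percent of the true gap, which is why device profiling (separating regular from erratic senders) matters before choosing a predictor.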

Article
An Adversarial Approach for Intrusion Detection Systems Using Jacobian Saliency Map Attacks (JSMA) Algorithm
Computers 2020, 9(3), 58; https://doi.org/10.3390/computers9030058 - 20 Jul 2020
Cited by 2 | Viewed by 2085
Abstract
In today’s digital world, information systems are revolutionizing the way we connect. As people adopt and integrate intelligent systems into their daily lives, the risks of cyberattacks on user-specific information have grown significantly. To ensure safe communication, Intrusion Detection Systems (IDSs) have been developed, often using machine learning (ML) algorithms, which have the ability to detect malware and network security violations. Recently, it was reported that IDSs are vulnerable to carefully crafted perturbations known as adversarial examples. To understand the impact of such attacks, in this paper we propose a novel random neural network-based adversarial intrusion detection system (RNN-ADV). The NSL-KDD dataset is used for training. For adversarial attack crafting, the Jacobian Saliency Map Attack (JSMA) algorithm is used, which identifies the feature that can cause the maximum change to the benign samples with the minimum added perturbation. To check the effectiveness of the proposed adversarial scheme, the results are compared with a deep neural network, which indicates that RNN-ADV performs better in terms of accuracy, precision, recall, F1 score and training epochs.
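The core of JSMA, ranking input features by how strongly a small perturbation changes the model's output, can be illustrated with finite differences on a toy scoring function. This is only a sketch of the saliency-ranking step under that simplification; the real JSMA uses the model's Jacobian and iterates over feature pairs, and the toy linear "model" here is an assumption.

```python
def saliency(f, x, eps=1e-4):
    """Finite-difference sensitivity of f to each input feature."""
    base = f(x)
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        grads.append((f(xp) - base) / eps)
    return grads

def most_salient_feature(f, x):
    """Index of the feature whose perturbation changes f(x) the most,
    i.e. the feature a JSMA-style attack would perturb first."""
    g = saliency(f, x)
    return max(range(len(x)), key=lambda i: abs(g[i]))

# Toy 'model': a linear score over three features; feature 1 dominates.
score = lambda x: 0.2 * x[0] + 3.0 * x[1] - 0.5 * x[2]
```

An attacker following this ranking achieves the largest output shift for the smallest total perturbation, which is exactly why the abstract describes JSMA as finding "maximum change with minimum added perturbation".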
(This article belongs to the Special Issue Feature Paper in Computers)
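As a rough sketch of the JSMA idea referenced above: the attack ranks input features by a saliency score built from the Jacobian of the model outputs with respect to the input, then perturbs the most salient feature. The toy below uses a linear two-class model, so the Jacobian is simply the weight matrix; the weights, feature count, and step size are illustrative assumptions, not the paper's RNN-ADV setup.

```python
# Toy 2-class linear model: logits = W @ x.  For a linear model the Jacobian
# of the logits w.r.t. the input is just W, which keeps the saliency map simple.
W = [[0.9, -0.4, 0.1],   # logit for class 0 ("benign")
     [-0.2, 0.8, 0.3]]   # logit for class 1 ("attack")

def saliency(target, other):
    """JSMA saliency per feature: high when increasing the feature raises the
    target logit AND lowers the other logit; zero otherwise."""
    scores = []
    for i in range(len(W[0])):
        dt = W[target][i]   # gradient of the target-class logit
        do = W[other][i]    # gradient of the non-target logit
        scores.append(dt * abs(do) if dt > 0 and do < 0 else 0.0)
    return scores

def jsma_step(x, target=1, eps=0.5):
    """One JSMA iteration: perturb the single most salient feature by eps."""
    other = 1 - target
    s = saliency(target, other)
    i = max(range(len(s)), key=lambda j: s[j])
    x = list(x)
    x[i] += eps
    return x, i

x_adv, idx = jsma_step([0.2, 0.1, 0.4], target=1)
```

In the full attack this step is iterated, under a budget on the total perturbation, until the model's prediction flips to the target class.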

Article
ERF: An Empirical Recommender Framework for Ascertaining Appropriate Learning Materials from Stack Overflow Discussions
Computers 2020, 9(3), 57; https://doi.org/10.3390/computers9030057 - 20 Jul 2020
Viewed by 1701
Abstract
Computer programmers require various kinds of instructive information during coding and development. Such information is dispersed across different sources such as language documentation, wikis, and forums. As an information exchange platform, programmers make broad use of Stack Overflow, a Web-based question answering site. In this paper, we propose a recommender system that uses a supervised machine learning approach to investigate Stack Overflow posts and present instructive information to programmers. This may help programmers solve the programming problems they confront in their daily work. We analyzed posts related to two of the most popular programming languages, Python and PHP. We performed several trials and found that the supervised approach could effectively surface valuable information from our corpus. We validated the performance of our system through human evaluation, which showed an accuracy of 71%. We also present an interactive interface that answers a user's query with the matching sentences containing the most instructive information. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
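The supervised classification step described above can be sketched with a tiny Naive Bayes sentence classifier that labels sentences as instructive or not. The paper's actual features, model, and annotated corpus are not specified here, so everything below (the labels, the training sentences, and the Laplace-smoothing choice) is an illustrative assumption.

```python
import math
from collections import Counter

# Tiny labelled corpus standing in for annotated Stack Overflow sentences
# (1 = instructive, 0 = not instructive).
train = [
    ("use list comprehension to filter items", 1),
    ("always close the file handle after writing", 1),
    ("prefer parameterised queries to avoid sql injection", 1),
    ("thanks that worked for me", 0),
    ("i have the same problem", 0),
    ("any update on this", 0),
]

def fit(data):
    """Count per-class word frequencies and class priors."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(label for _, label in data)
    for text, label in data:
        counts[label].update(text.split())
    return counts, priors, len(data)

def predict(model, text):
    """Return the class with the highest log posterior (Laplace smoothing)."""
    counts, priors, n = model
    vocab = set(counts[0]) | set(counts[1])
    best, best_lp = None, -math.inf
    for label in (0, 1):
        total = sum(counts[label].values())
        lp = math.log(priors[label] / n)
        for w in text.split():
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
label = predict(model, "close the connection after the query")
```

A real system would rank all matching sentences by such a score and show the top ones in the interactive interface.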

Review


Review
Cloud-Based Business Process Security Risk Management: A Systematic Review, Taxonomy, and Future Directions
Computers 2021, 10(12), 160; https://doi.org/10.3390/computers10120160 - 26 Nov 2021
Cited by 1 | Viewed by 1123
Abstract
Despite the attractive benefits of cloud-based business processes, security issues, cloud attacks, and privacy concerns are among the challenges that prevent many organizations from using this technology. This review seeks to determine the level of integration of the security risk management process at each phase of the Business Process Life Cycle (BPLC) for securing cloud-based business processes; the usage of existing risk analysis techniques as the basis of risk assessment models; the usage of security risk standards; and the classification of cloud security risks in cloud-based business processes. In light of these objectives, this study presents an exhaustive review of the current state-of-the-art methodology for managing cloud-based business process security risk. Eleven electronic databases (ACM, IEEE, Science Direct, Google Scholar, Springer, Wiley, Taylor and Francis, IEEE Cloud Computing Conference, ICSE Conference, COMPSAC Conference, ICCSA Conference, Computer Standards and Interfaces Journal) were used for the selected publications. A total of 1243 articles were found. After applying the selection criteria, 93 articles were selected, of which 17 were found eligible for in-depth evaluation. In the evaluation of the business process life cycle, 17% of the approaches integrated security risk management into one of the phases of the business process, while the others did not. To assess the domain of risk management, three key indicators (domain applicability, use of existing risk management techniques, and integration of risk standards) were used to substantiate our findings. The evaluation of domain applicability showed that 53% of the approaches had been tested in real settings, thereby making these works reusable. The evaluation of the usage of existing risk analysis showed that 52.9% of the authors implemented their work using existing risk analysis techniques, while 29.4% of the authors partially integrated security risk standards into their work. Based on these findings, security risk management, existing security risk management techniques, and security risk standards should be integrated into business process phases to protect against security issues in cloud services. Full article
(This article belongs to the Special Issue Feature Paper in Computers)

Review
A Brief Review of Some Interesting Mars Rover Image Enhancement Projects
Computers 2021, 10(9), 111; https://doi.org/10.3390/computers10090111 - 08 Sep 2021
Viewed by 757
Abstract
The Curiosity rover has been operating on Mars since its landing in 2012. One of the instruments onboard is a pair of multispectral cameras known as Mastcams, which act as the eyes of the rover. In this paper, we summarize our recent studies on several image processing projects for Mastcam images. In particular, we address perceptually lossless compression of Mastcam images, debayering and resolution enhancement of Mastcam images, high-resolution stereo and disparity map generation using fused Mastcam images, and improved performance of anomaly detection and pixel clustering using combined left and right Mastcam images. The main goal of this review is to raise public awareness of these Mastcam projects and to stimulate interest in the research community in developing new algorithms for these applications. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
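The disparity map generation mentioned in the review can be illustrated on a single scanline: for each pixel in the left image, slide a small window along the right image and keep the shift with the lowest sum of absolute differences (SAD). Real Mastcam stereo processing is far more involved; the window size, search range, and synthetic scanline below are all assumptions.

```python
def disparity(left, right, max_d=4, win=1):
    """Per-pixel disparity on one scanline by SAD block matching.
    Assumes left[i] matches right[i - d] for some shift d >= 0."""
    n = len(left)
    out = []
    for i in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_d, i) + 1):
            # Compare a (2*win+1)-pixel window, clamping indices at the borders.
            cost = sum(
                abs(left[min(max(i + k, 0), n - 1)] -
                    right[min(max(i + k - d, 0), n - 1)])
                for k in range(-win, win + 1)
            )
            if cost < best_cost:
                best_d, best_cost = d, cost
        out.append(best_d)
    return out

# Synthetic scanline: the right view is the left view shifted by 2 pixels,
# so the true disparity around the bright feature is 2.
left_row = [0, 0, 10, 80, 200, 80, 10, 0, 0, 0]
right_row = left_row[2:] + [0, 0]
d = disparity(left_row, right_row)
```

Textured regions recover the true shift, while flat regions are ambiguous (any shift matches equally well), which is why practical pipelines add regularization and fuse multiple cues.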

Review
Toward Management of Uncertainty in Self-Adaptive Software Systems: IoT Case Study
Computers 2021, 10(3), 27; https://doi.org/10.3390/computers10030027 - 27 Feb 2021
Cited by 3 | Viewed by 1230
Abstract
Adaptivity is the ability of a system to change its behavior whenever it does not achieve its requirements. Self-adaptive software systems (SASS) are considered a milestone in software development in many modern, complex scientific and engineering fields. Employing self-adaptation in a system can yield better functionality or performance; however, it may also lead to unexpected system behavior and, consequently, to uncertainty. The uncertainty that results from using SASS needs to be tackled from different perspectives. The Internet of Things (IoT), which utilizes the attributes of SASS, presents great development opportunities. Because IoT is a relatively new domain, it carries a high level of uncertainty. The goal of this work is to describe self-adaptivity in software systems in more detail, identify all possible sources of uncertainty, and illustrate their effect on the ability of the system to fulfill its objectives. We provide a survey of state-of-the-art approaches to coping with uncertainty in SASS and discuss their performance. We classify the different sources of uncertainty based on their location and nature in SASS. Moreover, we present IoT as a case study to define uncertainty at different layers of the IoT stack. We use this case study to identify the sources of uncertainty, categorize them according to IoT stack layers, demonstrate the effect of uncertainty on the ability of the system to fulfill its objectives, and discuss state-of-the-art approaches to mitigating these sources. We conclude with a set of challenges that provide a guide for future study. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
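A common way to make the self-adaptation loop concrete is the MAPE-K reference model (monitor, analyse, plan, execute, over shared knowledge), which many of the surveyed approaches build on. The sketch below is a minimal illustration under assumed names and thresholds, not an implementation from the surveyed literature; the noisy battery reading stands in for one source of sensing uncertainty.

```python
class AdaptiveSampler:
    """Toy IoT node that adapts its sampling interval to its battery level,
    following the MAPE-K loop: Monitor -> Analyse -> Plan -> Execute."""

    def __init__(self, interval=10, battery=100):
        self.interval = interval   # seconds between sensor readings
        self.battery = battery     # percent

    def monitor(self):
        # Monitor: collect runtime data into the knowledge base.
        return {"battery": self.battery}

    def analyse(self, knowledge):
        # Analyse: is an adaptation needed?  The battery reading may be
        # noisy, which is one source of uncertainty in the decision.
        return knowledge["battery"] < 30

    def plan(self):
        # Plan: sample half as often to conserve energy.
        return {"interval": self.interval * 2}

    def execute(self, actions):
        # Execute: apply the planned reconfiguration.
        self.interval = actions["interval"]

    def step(self):
        k = self.monitor()
        if self.analyse(k):
            self.execute(self.plan())

node = AdaptiveSampler()
node.battery = 25   # simulated low-battery observation
node.step()
```

After the step the interval doubles; with a healthy battery the loop leaves the configuration untouched.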

Review
A Review of Agent-Based Programming for Multi-Agent Systems
Computers 2021, 10(2), 16; https://doi.org/10.3390/computers10020016 - 27 Jan 2021
Cited by 13 | Viewed by 2706
Abstract
Intelligent and autonomous agents form a subarea of symbolic artificial intelligence in which agents decide, either reactively or proactively, upon a course of action by reasoning about the information available about the world (including the environment, the agent itself, and other agents). The area encompasses a multitude of techniques, such as negotiation protocols, agent simulation, multi-agent argumentation, multi-agent planning, and many others. In this paper, we focus on agent programming and provide a systematic review of the literature on agent-based programming for multi-agent systems. In particular, we discuss both veteran (still maintained) and novel agent programming languages, their extensions, work comparing some of these languages, and applications found in the literature that make use of agent programming. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
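Many of the agent programming languages the review covers are built around a belief-desire-intention (BDI) deliberation cycle: perceiving the world updates beliefs, belief changes trigger plans, and adopted plans become intentions that are executed step by step. The sketch below is a schematic illustration of that cycle only; the class, trigger names, and plan representation are assumptions, not any specific language's semantics.

```python
class Agent:
    """Minimal BDI-style agent: beliefs, a plan library keyed by triggering
    events, and a queue of adopted intentions."""

    def __init__(self):
        self.beliefs = set()
        self.plans = {}        # triggering belief -> plan body (callable)
        self.intentions = []   # adopted plan bodies awaiting execution

    def add_plan(self, trigger, body):
        self.plans[trigger] = body

    def perceive(self, belief):
        # Perception updates beliefs; a belief-addition event may trigger a plan.
        self.beliefs.add(belief)
        if belief in self.plans:
            self.intentions.append(self.plans[belief])

    def step(self):
        # One deliberation step: execute the oldest pending intention.
        if self.intentions:
            self.intentions.pop(0)(self)

agent = Agent()
log = []
agent.add_plan("door_open", lambda ag: log.append("enter_room"))
agent.perceive("door_open")
agent.step()
```

Full agent languages add much more (goals, plan contexts, failure handling, communication), but the perceive-trigger-execute skeleton above is the common core.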
