Computers, Volume 10, Issue 9 (September 2021) – 14 articles

Cover Story: In general, the teaching of software engineering is difficult for computer science students, since its concepts and content are closer to engineering than to computer science, and students therefore struggle with the subject. This article presents a tool that aims to help students improve their understanding. The development rests on two ideas. On the one hand, the application is mobile, so that it can be used anytime and anywhere. On the other hand, gamification is used to motivate students through competition with their peers, with learning paths adapted to the activity of each student.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Articles are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
15 pages, 1364 KiB  
Article
360°-Based Virtual Field Trips to Waterworks in Higher Education
by Mario Wolf, Florian Wehking, Michael Montag and Heinrich Söbke
Computers 2021, 10(9), 118; https://doi.org/10.3390/computers10090118 - 18 Sep 2021
Cited by 7 | Viewed by 2758
Abstract
360° models are a form of virtual reality (VR) that allow the viewer to view and explore a photorealistic object from multiple locations within the model. Hence, 360° models are an option to perform virtual field trips (VFTs) independent of time and location. Thanks to recent technical progress, 360° models can be created with little effort. Due to their visualization and explorability, 360° models appear to be excellent learning tools, especially when additional didactic features, such as annotations, are used. The subject of this explorative field study is a 360° model of a waterworks that has been annotated for learning purposes. Data are collected from a total of 55 learners in four cohorts from study programs in environmental engineering and urban studies, using a questionnaire that included standardized measurement instruments on motivation, emotion, and usability. Furthermore, the eight learners of cohort 1 are surveyed using semi-structured interviews on the learning, operation, and features of the 360° model. Overall, a very positive view of the learning suitability of 360° models in VFTs is revealed. In addition, further potential for development of the 360° model could be identified. The results indicate that VFTs based on 360° models might be valuable learning tools because of their applicability without great effort on the part of either the lecturers or the students. VFTs based on 360° models might serve as a supplement to conventional learning activities or in self-directed learning activities.
(This article belongs to the Special Issue Interactive Technology and Smart Education)

18 pages, 4622 KiB  
Article
Feature Focus: Towards Explainable and Transparent Deep Face Morphing Attack Detectors
by Clemens Seibold, Anna Hilsmann and Peter Eisert
Computers 2021, 10(9), 117; https://doi.org/10.3390/computers10090117 - 18 Sep 2021
Cited by 3 | Viewed by 2710
Abstract
Detecting morphed face images has become an important task for maintaining trust in automated verification systems based on facial images, e.g., at automated border control gates. Deep Neural Network (DNN)-based detectors have shown remarkable results, but without further investigation their decision-making process is not transparent. In contrast to approaches based on hand-crafted features, DNNs have to be analyzed in complex experiments to determine which characteristics or structures are generally used to distinguish between morphed and genuine face images, or which are considered for an individual morphed face image. In this paper, we present Feature Focus, a new transparent face morphing detector based on a modified VGG-A architecture and an additional feature shaping loss function, as well as Focused Layer-wise Relevance Propagation (FLRP), an extension of LRP. FLRP in combination with the Feature Focus detector forms a reliable and accurate explainability component. We study the advantages of the new detector compared to other DNN-based approaches and evaluate LRP and FLRP regarding their suitability for highlighting traces of image manipulation from face morphing. To this end, we use partial morphs, which contain morphing artifacts in predefined areas only, and analyze how much of the overall relevance each method assigns to these areas.
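The evaluation idea in this abstract — measuring how much of a method's relevance mass falls inside the predefined artifact areas of a partial morph — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the relevance map (e.g., from LRP) and the artifact mask are assumed inputs:

```python
import numpy as np

def relevance_in_region(relevance_map, region_mask):
    """Fraction of total (absolute) relevance assigned to a region.

    relevance_map: 2-D array of per-pixel relevance scores (e.g., from LRP).
    region_mask:   boolean array of the same shape marking artifact areas.
    """
    r = np.abs(np.asarray(relevance_map, dtype=float))
    return float(r[region_mask].sum() / r.sum())
```

A method that reliably localizes manipulation traces should score close to 1 on partial morphs whose artifacts lie entirely inside the masked areas.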
(This article belongs to the Special Issue Explainable Artificial Intelligence for Biometrics 2021)

22 pages, 7682 KiB  
Article
Smart Interconnected Infrastructures for Security and Protection: The DESMOS Project
by Michail Feidakis, Christos Chatzigeorgiou, Christina Karamperi, Lazaros Giannakos, Vasileios-Rafail Xefteris, Dimos Ntioudis, Athina Tsanousa, Dimitrios G. Kogias, Charalampos Patrikakis, Georgios Meditskos, Georgios Gorgogetas, Stefanos Vrochidis and Ioannis Kompatsiaris
Computers 2021, 10(9), 116; https://doi.org/10.3390/computers10090116 - 16 Sep 2021
Cited by 1 | Viewed by 2363
Abstract
This paper presents “DESMOS”, a novel ecosystem for the interconnection of smart infrastructures, mobile and wearable devices, and applications to provide a secure environment for visitors and tourists. The presented solution brings together state-of-the-art IoT technologies, crowdsourcing, localization through BLE, and semantic reasoning, following a privacy- and security-by-design approach to ensure data anonymization and protection. Despite the COVID-19 pandemic, the solution was tested, validated, and evaluated via two pilots in almost real settings (involving a lower density of people than planned) in Trikala, Thessaly, Greece. The results and findings support that the presented solution can provide successful emergency reporting, crowdsourcing, and localization via BLE. However, these results also point to needed improvements in the expressiveness of the user interface and the application’s effectiveness and accuracy, as well as evaluation in real, overcrowded conditions. The main contribution of this paper is to report on the progress made and to showcase how all these technological solutions can be integrated and applied in realistic and practical scenarios, for the safety and privacy of visitors and tourists.
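BLE-based localization of the kind mentioned above typically starts from a received-signal-strength (RSSI) distance estimate. A minimal sketch using the standard log-distance path-loss model; the reference TX power and path-loss exponent below are assumed illustrative values, not parameters from the paper:

```python
def ble_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Estimate the distance (in metres) to a BLE beacon from its RSSI.

    Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d),
    where tx_power is the calibrated RSSI at 1 m and n the path-loss exponent.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))
```

At the calibrated TX power the estimate is 1 m; with n = 2, every additional 20 dB of attenuation multiplies the estimated distance by 10. In practice, estimates from several beacons are combined (e.g., by trilateration) and smoothed against RSSI noise.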
(This article belongs to the Special Issue Integration of Cloud Computing and IoT)

26 pages, 1814 KiB  
Article
An Experimental Study on Centrality Measures Using Clustering
by Péter Marjai, Bence Szabari and Attila Kiss
Computers 2021, 10(9), 115; https://doi.org/10.3390/computers10090115 - 15 Sep 2021
Viewed by 2133
Abstract
Graphs can be found in almost every part of modern life: social networks, road networks, biology, and so on. Finding the most important node is a vital issue. To date, numerous centrality measures have been proposed to address this problem; however, each has its drawbacks, for example, not scaling well on large graphs. In this paper, we investigate the ranking efficiency and the execution time of a method that uses graph clustering to reduce the time needed to identify the vital nodes. With graph clustering, neighboring nodes forming communities are grouped together. These groups are then used to create subgraphs from the original graph, which are smaller and easier to measure. To assess the efficiency, we investigate different aspects of accuracy. First, we compare the top 10 nodes produced by the original closeness and betweenness methods with the nodes produced by this method. Then, we examine what percentage of the first n nodes is shared between the original and the clustered ranking. Centrality measures also assign a value to each node, so lastly we investigate the sum of the centrality values of the top n nodes. We also evaluate the runtime of the investigated method, and of the original measures in a plain implementation, with the use of a graph database. Based on our experiments, our method greatly reduces the time consumption of the investigated centrality measures, especially in the case of the Louvain algorithm. The first accuracy experiment showed that examining only the top 10 nodes is not sufficient to properly evaluate precision. The second experiment showed that the investigated method, used with the Paris algorithm, achieves around 45–60% accuracy in the case of betweenness centrality. On the other hand, the last experiment showed that the investigated method has high accuracy in the case of closeness centrality, especially with the Louvain clustering algorithm.
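The clustering-then-measuring idea can be sketched as follows. This is a minimal illustration using networkx (≥ 3.0) and its Louvain implementation; the paper's actual pipeline, datasets, and graph-database setup differ:

```python
import networkx as nx

def clustered_top_n(G, n=10, seed=42):
    """Approximate the top-n closeness ranking by measuring inside communities."""
    scores = {}
    # 1. Partition the graph into communities (Louvain).
    for community in nx.community.louvain_communities(G, seed=seed):
        # 2. Compute centrality only on the (much smaller) subgraph.
        scores.update(nx.closeness_centrality(G.subgraph(community)))
    return sorted(scores, key=scores.get, reverse=True)[:n]

def top_n_overlap(G, n=10):
    """Share of the exact top-n list recovered by the clustered ranking."""
    exact_scores = nx.closeness_centrality(G)
    exact = sorted(exact_scores, key=exact_scores.get, reverse=True)[:n]
    return len(set(exact) & set(clustered_top_n(G, n))) / n
```

The speed-up comes from the quadratic (or worse) cost of exact centrality: computing it on k subgraphs of size m/k is far cheaper than on the full graph of size m, at the price of the accuracy loss the paper quantifies.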

20 pages, 933 KiB  
Article
Privacy Preservation Instruments Influencing the Trustworthiness of e-Government Services
by Hilal AlAbdali, Mohammed AlBadawi, Mohamed Sarrab and Abdullah AlHamadani
Computers 2021, 10(9), 114; https://doi.org/10.3390/computers10090114 - 13 Sep 2021
Cited by 3 | Viewed by 2445
Abstract
Trust is one of the most critical factors that determine willingness to use e-government services. Despite its significance, most previous studies have investigated the factors that lead to trusting such services from a theoretical perspective, without examining technical solutions. Therefore, more effort is needed to preserve privacy in the current debate on trust within integrated e-government services. Specifically, this study aims to develop a model that examines instruments, extracted from privacy-by-design principles, that could protect personal information from misuse by e-government employees and thereby influence the trust placed in e-government services. This study was conducted with 420 respondents from Oman who were familiar with using e-government services. The results show that several factors influence service trust, including the need for privacy lifecycle protection, privacy controls, impact assessments, and personal information monitors. The findings reveal that the factors impeding trust are organizational barriers and lack of support. Finally, this study helps e-government initiatives and decision-makers increase the use of services by facilitating privacy preservation instruments in the design of e-government services.
(This article belongs to the Special Issue Sensors and Smart Cities 2023)

25 pages, 7059 KiB  
Article
Evaluating Impact of Race in Facial Recognition across Machine Learning and Deep Learning Algorithms
by James Coe and Mustafa Atay
Computers 2021, 10(9), 113; https://doi.org/10.3390/computers10090113 - 10 Sep 2021
Cited by 12 | Viewed by 5385
Abstract
The research aims to evaluate the impact of race in facial recognition across two types of algorithms. We give a general insight into facial recognition and discuss four problems related to it. We review our system design, development, and architectures, and give an in-depth evaluation plan for each type of algorithm and dataset, along with a look into the software and its architecture. We thoroughly explain the results and findings of our experimentation and provide analysis for both the machine learning and the deep learning algorithms. Concluding the investigation, we compare the results of the two kinds of algorithms, including their accuracy, metrics, miss rates, and performance, to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. We evaluate and compare the accuracy and miss rates of all tested algorithms and report that SVC is the superior machine learning algorithm and VGG16 the best deep learning algorithm in our experimental study. Our findings indicate that the algorithm that mitigates bias the most is VGG16, and that all our deep learning algorithms outperformed their machine learning counterparts.
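Comparing miss rates across demographic groups, as described above, reduces to a simple per-group aggregation. A minimal sketch (illustrative only; the labels, predictions, and group annotations are assumed inputs):

```python
import numpy as np

def per_group_miss_rate(y_true, y_pred, groups):
    """Miss rate (fraction of wrong predictions) per demographic group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}
```

A large spread between the groups' miss rates signals bias; an algorithm that mitigates bias well should bring the per-group rates close together.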
(This article belongs to the Special Issue Explainable Artificial Intelligence for Biometrics 2021)

19 pages, 1812 KiB  
Article
Classification of Contaminated Insulators Using k-Nearest Neighbors Based on Computer Vision
by Marcelo Picolotto Corso, Fabio Luis Perez, Stéfano Frizzo Stefenon, Kin-Choong Yow, Raúl García Ovejero and Valderi Reis Quietinho Leithardt
Computers 2021, 10(9), 112; https://doi.org/10.3390/computers10090112 - 09 Sep 2021
Cited by 40 | Viewed by 2382
Abstract
Contamination on insulators may increase their surface conductivity, and as a consequence electrical discharges occur more frequently, which can lead to interruptions in the power supply. To maintain the reliability of an electrical power distribution system, components that have lost their insulating properties must be replaced. Identifying the components that need maintenance is a difficult task, as there are several levels of contamination that are hard to notice during inspections. To improve the quality of inspections, this paper proposes using k-nearest neighbors (k-NN) to classify the levels of insulator contamination based on images of insulators at various levels of contamination simulated in the laboratory. Computer vision features such as mean, variance, asymmetry, kurtosis, energy, and entropy are used for training the k-NN. To assess the robustness of the proposed approach, a statistical analysis and a comparative assessment against well-established algorithms such as decision tree, ensemble subspace, and support vector machine models are presented. The k-NN reached up to 85.17% accuracy using k-fold cross-validation, with an average accuracy higher than 82% for the multi-class classification of insulator contamination, outperforming the compared models.
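The feature-then-classify pipeline can be sketched as follows. This is a minimal illustration on synthetic grayscale images; the histogram-based definitions of energy and entropy are common choices assumed here, not taken from the paper:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def image_features(img):
    """Mean, variance, asymmetry (skewness), kurtosis, energy, and entropy."""
    x = np.asarray(img, dtype=float).ravel()
    mu, var = x.mean(), x.var()
    std = np.sqrt(var) + 1e-12          # avoid division by zero on flat images
    skew = np.mean(((x - mu) / std) ** 3)
    kurt = np.mean(((x - mu) / std) ** 4)
    hist, _ = np.histogram(x, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()     # normalized intensity distribution
    energy = float(np.sum(p ** 2))
    entropy = float(-np.sum(p * np.log2(p)))
    return [mu, var, skew, kurt, energy, entropy]

# Synthetic stand-ins for "clean" (bright) and "contaminated" (dark) insulators.
rng = np.random.default_rng(0)
clean = [rng.integers(150, 256, (32, 32)) for _ in range(10)]
dirty = [rng.integers(0, 80, (32, 32)) for _ in range(10)]
X = [image_features(i) for i in clean + dirty]
y = [0] * 10 + [1] * 10
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```

In practice the features should be standardized before the distance computation (e.g., with scikit-learn's `StandardScaler`), since the raw mean and variance otherwise dominate the Euclidean distance used by k-NN.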
(This article belongs to the Special Issue Feature Paper in Computers)

18 pages, 7534 KiB  
Review
A Brief Review of Some Interesting Mars Rover Image Enhancement Projects
by Chiman Kwan
Computers 2021, 10(9), 111; https://doi.org/10.3390/computers10090111 - 08 Sep 2021
Cited by 2 | Viewed by 2872
Abstract
The Curiosity rover has been operating on Mars since its landing in 2012. One of the instruments onboard the rover is a pair of multispectral cameras known as Mastcams, which act as the eyes of the rover. In this paper, we summarize our recent studies on some interesting image processing projects for Mastcams. In particular, we address perceptually lossless compression of Mastcam images, debayering and resolution enhancement of Mastcam images, high-resolution stereo and disparity map generation using fused Mastcam images, and improved performance of anomaly detection and pixel clustering using combined left and right Mastcam images. The main goal of this review paper is to raise public awareness of these interesting Mastcam projects and to stimulate interest in the research community in further developing new algorithms for these applications.
(This article belongs to the Special Issue Feature Paper in Computers)

14 pages, 698 KiB  
Article
Perceptions about the Future of Integrating Emerging Technologies into Higher Education—The Case of Robotics with Artificial Intelligence
by Janika Leoste, Larissa Jõgi, Tiia Õun, Luis Pastor, José San Martín López and Indrek Grauberg
Computers 2021, 10(9), 110; https://doi.org/10.3390/computers10090110 - 08 Sep 2021
Cited by 15 | Viewed by 5935
Abstract
Emerging technologies (ETs) will most likely have a strong impact on education (starting with higher education), just as they have already had in many economic and social areas. This paper is based on the results of the project “My Future Colleague Robot”, an initiative that aimed to improve the competence of university teaching staff in introducing ETs into teaching practices at the university level. In this paper, we identified the strengths, weaknesses, opportunities, and threats related to the adoption in higher education of the combination of two ETs: robotics together with artificial intelligence (AI). Additionally, we analyzed the perceptions of university-level teaching staff about the potential of introducing ETs in education. The empirical data presented here were collected through written essays from 18 university teachers and students. Deductive and inductive approaches with thematic analysis were used for the data analysis. The findings support the idea that previous ET-related experience can foster positive attitudes and the implementation of ETs in university teaching; in this study, university teachers had optimistic expectations towards ETs, accepting them as part of teaching practice development, while discussion of the negative effects of ETs was negligible.
(This article belongs to the Special Issue Interactive Technology and Smart Education)

12 pages, 282 KiB  
Article
Approximated Mixed-Integer Convex Model for Phase Balancing in Three-Phase Electric Networks
by Oscar Danilo Montoya, Luis Fernando Grisales-Noreña and Edwin Rivas-Trujillo
Computers 2021, 10(9), 109; https://doi.org/10.3390/computers10090109 - 31 Aug 2021
Cited by 6 | Viewed by 1620
Abstract
With this study, we address the optimal phase balancing problem in three-phase networks with asymmetric loads using a mixed-integer quadratic convex (MIQC) model. The objective function considers the minimization of the sum of the squared currents through the distribution lines, multiplied by the average resistance value of each line. The constraints consider the redistribution of active and reactive power at all nodes through a 3×3 binary decision variable with six possible combinations, while the branch and nodal currents are related through an extended upper-triangular matrix. The solution offered by the proposed MIQC model is evaluated using the triangular-based three-phase power flow method in order to determine the final steady state of the network with respect to the level of power loss after applying the phase balancing approach. The numerical results in three radial test feeders composed of 8, 15, and 25 nodes demonstrate the effectiveness of the proposed MIQC model compared to metaheuristic optimizers such as the genetic algorithm, black hole optimizer, sine–cosine algorithm, and vortex search algorithm. All simulations were carried out in MATLAB 2020a using the CVX tool and the Gurobi solver.
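The 3×3 binary decision variable with six possible combinations corresponds to the six permutation matrices of the three phases, i.e., the six ways a node's loads can be reassigned to phases a, b, and c. A small sketch of this combinatorial structure (illustrative only; the paper solves for these variables inside a convex model rather than by enumeration, and the load values below are hypothetical):

```python
from itertools import permutations

import numpy as np

def phase_swap_matrices():
    """The six 3x3 binary matrices with exactly one 1 per row and column."""
    mats = []
    for perm in permutations(range(3)):
        M = np.zeros((3, 3), dtype=int)
        for row, col in enumerate(perm):
            M[row, col] = 1
        mats.append(M)
    return mats

# Applying a swap matrix reassigns a node's per-phase loads:
loads = np.array([120.0, 80.0, 45.0])   # hypothetical kW on phases a, b, c
reassignments = [M @ loads for M in phase_swap_matrices()]
```

The row/column-sum constraints are what make the variable "binary with six combinations": of the 512 binary 3×3 matrices, only the six permutation matrices reassign every load to exactly one phase.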
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2021)

20 pages, 863 KiB  
Article
More Plausible Models of Body Ownership Could Benefit Virtual Reality Applications
by Moritz Schubert and Dominik Endres
Computers 2021, 10(9), 108; https://doi.org/10.3390/computers10090108 - 26 Aug 2021
Cited by 2 | Viewed by 2420
Abstract
Embodiment of an avatar is important in many seated VR applications. We investigate a Bayesian Causal Inference model of body ownership. According to the model, when available sensory signals (e.g., tactile and visual signals) are attributed to a single object (e.g., a rubber hand), the object is incorporated into the body. The model uses normal distributions with astronomically large standard deviations as priors for the sensory input. We criticize the model for its choice of parameter values and hold that a model trying to describe human cognition should employ parameter values that are psychologically plausible, i.e., in line with human expectations. By systematically varying the values of all relevant parameters, we arrive at the conclusion that such quantitative modifications cannot overcome the model’s dependence on implausibly large standard deviations. We posit that the model needs a qualitative revision through the inclusion of additional sensory modalities.
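The behavior being criticized can be reproduced with the standard closed-form Bayesian causal inference model for two Gaussian cues (Körding et al., 2007): the posterior probability that the visual and tactile signals share a common cause. The parameter values below are illustrative, not those of the criticized model:

```python
import numpy as np

def p_common(x_v, x_t, sd_v, sd_t, prior_sd, mu=0.0, p_c1=0.5):
    """Posterior probability that two noisy cues x_v, x_t share one cause.

    sd_v, sd_t: sensory noise of the visual and tactile signals;
    prior_sd:   standard deviation of the Gaussian prior over positions.
    """
    # Likelihood under a common cause (latent source integrated out).
    var1 = sd_v**2 * sd_t**2 + sd_v**2 * prior_sd**2 + sd_t**2 * prior_sd**2
    num1 = ((x_v - x_t)**2 * prior_sd**2 + (x_v - mu)**2 * sd_t**2
            + (x_t - mu)**2 * sd_v**2)
    lik1 = np.exp(-0.5 * num1 / var1) / (2 * np.pi * np.sqrt(var1))
    # Likelihood under independent causes: product of two marginals.
    def marginal(x, sd):
        v = sd**2 + prior_sd**2
        return np.exp(-0.5 * (x - mu)**2 / v) / np.sqrt(2 * np.pi * v)
    lik2 = marginal(x_v, sd_v) * marginal(x_t, sd_t)
    return p_c1 * lik1 / (p_c1 * lik1 + (1 - p_c1) * lik2)
```

With a plausible prior (e.g., prior_sd = 2), widely separated cues are judged to come from separate causes; an astronomically large prior_sd drives the common-cause posterior toward 1 even for distant cues, which is the dependence the article objects to.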
(This article belongs to the Special Issue Advances in Seated Virtual Reality)

26 pages, 536 KiB  
Article
Dynamic Privacy-Preserving Recommendations on Academic Graph Data
by Erasmo Purificato, Sabine Wehnert and Ernesto William De Luca
Computers 2021, 10(9), 107; https://doi.org/10.3390/computers10090107 - 25 Aug 2021
Cited by 6 | Viewed by 2781
Abstract
In the age of digital information, where the internet and social networks, as well as personalised systems, have become an integral part of everyone’s life, it is often challenging to be aware of the amount of data produced daily and, unfortunately, of the potential risks caused by the indiscriminate sharing of personal data. Recently, attention to privacy has grown thanks to the introduction of specific regulations such as the European GDPR. In some fields, including recommender systems, this has inevitably led to a decrease in the amount of usable data and, occasionally, to significant degradation in performance, mainly due to information no longer being attributable to specific individuals. In this article, we present a dynamic privacy-preserving approach for recommendations in an academic context. We aim to implement a personalised system capable of protecting personal data while at the same time allowing sensible and meaningful use of the available data. The proposed approach introduces several pseudonymisation procedures based on the design goals described by the European Union Agency for Cybersecurity in its guidelines, in order to dynamically transform entities (e.g., persons) and attributes (e.g., authored papers and research interests) in such a way that any user processing the data is not able to identify individuals. We present a case study using data from researchers of the Georg Eckert Institute for International Textbook Research (Brunswick, Germany). Building a knowledge graph and exploiting a Neo4j database for data management, we first generate several pseudoN-graphs, i.e., graphs with different rates of pseudonymised persons. Then, we evaluate our approach by leveraging the graph embedding algorithm node2vec to produce recommendations through node relatedness. The recommendations provided by the graphs in the different privacy-preserving scenarios are compared with those provided by the fully non-pseudonymised graph, considered as the baseline of our evaluation. The experimental results show that, despite the structural modifications to the knowledge graph caused by the de-identification processes, the proposed approach preserves significant performance in terms of precision.
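One of the basic pseudonymisation techniques in the ENISA guidelines, keyed hashing, gives a flavour of the entity transformation described above: deterministic, so pseudonymised nodes stay linkable across the graph, but not reversible without the secret key. A minimal sketch; the key and the name prefix are hypothetical, and the paper's actual procedures are more elaborate:

```python
import hashlib
import hmac

SECRET_KEY = b"hypothetical-pseudonymisation-key"   # held by the data controller

def pseudonymise(identifier: str) -> str:
    """Replace a personal identifier with a stable pseudonym via keyed hashing."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return f"person_{digest[:16]}"
```

Because the mapping is deterministic, relationships such as authorship edges can be rewritten to point at the pseudonym while graph algorithms like node2vec still see a structurally consistent graph.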
(This article belongs to the Special Issue Artificial Intelligence for Digital Humanities (AI4DH))

34 pages, 11895 KiB  
Article
Development of an Educational Application for Software Engineering Learning
by Antonio Sarasa-Cabezuelo and Covadonga Rodrigo
Computers 2021, 10(9), 106; https://doi.org/10.3390/computers10090106 - 25 Aug 2021
Cited by 4 | Viewed by 2113
Abstract
Software engineering is a complicated subject for computer engineering students, since the knowledge taught and the competencies required are more related to engineering as a general knowledge area than to computer science. This article describes a software engineering learning application that aims to provide a solution to this problem. Two ideas underlie it. On the one hand, to facilitate its use, it has been implemented as an Android app, so that it can be used anywhere and at any time. On the other hand, a gamification system has been implemented with different learning paths that adapt to each student's learning style. In this way, the student is motivated by competing with classmates, while the application adapts to each student's way of learning.
(This article belongs to the Special Issue Present and Future of E-Learning Technologies)

20 pages, 1855 KiB  
Article
Fine-Grained Cross-Modal Retrieval for Cultural Items with Focal Attention and Hierarchical Encodings
by Shurong Sheng, Katrien Laenen, Luc Van Gool and Marie-Francine Moens
Computers 2021, 10(9), 105; https://doi.org/10.3390/computers10090105 - 25 Aug 2021
Cited by 1 | Viewed by 1978
Abstract
In this paper, we target the tasks of fine-grained image–text alignment and cross-modal retrieval in the cultural heritage domain as follows: (1) given an image fragment of an artwork, we retrieve the noun phrases that describe it; (2) given a noun phrase describing an artifact attribute, we retrieve the corresponding image fragment it specifies. To this end, we propose a weakly supervised alignment model in which the correspondence between the input training visual and textual fragments is not known, but fragments that refer to the same artwork are treated as a positive pair. The model exploits the latent alignment between fragments across modalities using attention mechanisms by first projecting them into a shared common semantic space; the model is then trained by increasing the image–text similarity of the positive pair in the common space. During this process, we encode the inputs of our model with hierarchical encodings and remove irrelevant fragments with different indicator functions. We also study techniques to augment the limited training data with synthetic relevant textual fragments and transformed image fragments. The model is later fine-tuned on a limited set of small-scale image–text fragment pairs. We rank the test image fragments and noun phrases by their intermodal similarity in the learned common space. Extensive experiments demonstrate that our proposed models outperform two state-of-the-art methods adapted to fine-grained cross-modal retrieval of cultural items on two benchmark datasets.
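Once both modalities are projected into the shared semantic space, retrieval reduces to ranking candidates by their similarity to the query. A minimal sketch of that final ranking step (illustrative; the embeddings are assumed to come from the trained encoders, and cosine similarity is one common choice of intermodal similarity):

```python
import numpy as np

def rank_candidates(query_emb, candidate_embs):
    """Indices of candidates sorted by cosine similarity to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    C = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    return np.argsort(-(C @ q))

# E.g., ranking noun-phrase embeddings for an image-fragment embedding:
query = np.array([1.0, 0.0])
candidates = np.array([[0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
order = rank_candidates(query, candidates)   # best match first
```

The same function serves both retrieval directions (image fragment to noun phrases and noun phrase to image fragments), since the shared space makes the two modalities directly comparable.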
(This article belongs to the Special Issue Artificial Intelligence for Digital Humanities (AI4DH))
