Selected Papers from the 23rd International Conference on Computational Science and Its Applications (ICCSA 2023)

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (15 November 2023) | Viewed by 11405

Special Issue Editors

Dr. Osvaldo Gervasi
Department of Mathematics and Computer Science, University of Perugia, 06123 Perugia, Italy
Interests: parallel and distributed systems; grid computing; cloud computing; virtual reality and scientific visualization; implementation of algorithms for molecular studies; multimedia and internet computing; e-learning
Dr. Damiano Perri
Department of Mathematics and Computer Science, University of Florence, 50134 Florence, Italy
Interests: high performance computing; virtual reality; augmented reality; machine learning; cloud computing; web programming

Special Issue Information

Dear Colleagues,

The 23rd International Conference on Computational Science and Its Applications (ICCSA 2023) was held on July 3–6, 2023, in Athens, Greece, in collaboration with the National Technical University of Athens and the University of the Aegean. Computational science is a main pillar of much of today's research, industrial, and commercial activity, and plays a unique role in exploiting innovative information and communication technologies. The ICCSA Conference offers a real opportunity to discuss new issues, tackle complex problems, and find advanced enabling solutions able to shape new trends in computational science. For more information, please visit the following link: http://www.iccsa.org/.

The authors of a number of selected high-quality full papers will be invited after the conference to submit revised and extended versions of their originally accepted conference papers to this Special Issue of Computers, published by MDPI in open access format. These papers will be selected based on their ratings in the conference review process, the quality of their presentation at the conference, and their expected impact on the research community. Each submission to this Special Issue should contain at least 50% new material (e.g., technical extensions, more in-depth evaluations, or additional use cases) as well as a revised title, abstract, and keywords. Extended submissions will undergo peer review according to the journal's rules. At least two technical committee members will act as reviewers for each extended article submitted to this Special Issue; if needed, additional external reviewers will be invited to guarantee a high-quality review process.

Dr. Osvaldo Gervasi
Dr. Damiano Perri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)

Research

23 pages, 1230 KiB  
Article
Interpretable Software Defect Prediction from Project Effort and Static Code Metrics
Computers 2024, 13(2), 52; https://doi.org/10.3390/computers13020052 - 16 Feb 2024
Viewed by 277
Abstract
Software defect prediction models enable test managers to predict defect-prone modules and assist with delivering quality products. A test manager needs to identify the attributes that influence defect prediction and must be able to trust the model outcomes. The objective of this research is to create software defect prediction models with a focus on interpretability. Additionally, it aims to investigate the impact of size, complexity, and other source code metrics on the prediction of software defects. This research also assesses the reliability of cross-project defect prediction. Well-known machine learning techniques, such as support vector machines, k-nearest neighbors, random forest classifiers, and artificial neural networks, were applied to publicly available PROMISE datasets. The interpretability of this approach was demonstrated with the SHapley Additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) techniques. The developed interpretable software defect prediction models proved reliable on independent and cross-project data. Finally, the results demonstrate that static code metrics can contribute to defect prediction models, and that the inclusion of explainability helps establish trust in the developed models.
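The interpretability workflow this abstract describes can be sketched in a few lines. Everything below is an illustrative stand-in: the metrics, data, and labels are synthetic rather than drawn from the PROMISE datasets, and permutation importance is used as a dependency-light, model-agnostic substitute for the SHAP/LIME analyses the paper actually performs.

```python
# Sketch: train a defect classifier on static code metrics, then rank the
# metrics by how much shuffling each one hurts held-out accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical static code metrics: lines of code, cyclomatic complexity, coupling.
loc = rng.integers(10, 1000, n)
cc = rng.integers(1, 40, n)
coupling = rng.integers(0, 15, n)
X = np.column_stack([loc, cc, coupling]).astype(float)
# Toy ground truth: modules that are both large and complex tend to be defective.
y = ((loc > 500) & (cc > 20)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Global explanation: accuracy drop when each metric is permuted.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["loc", "cyclomatic", "coupling"], imp.importances_mean):
    print(f"{name}: {score:.3f}")
```

The same pattern scales to the paper's setting: any fitted classifier can be passed to a post hoc explainer, which is what lets a test manager inspect which metrics drive a given prediction.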

21 pages, 3058 KiB  
Article
A User-Centered Privacy Policy Management System for Automatic Consent on Cookie Banners
Computers 2024, 13(2), 43; https://doi.org/10.3390/computers13020043 - 01 Feb 2024
Viewed by 592
Abstract
Despite growing concerns about privacy and an evolution in laws protecting users’ rights, there remains a gap between how industries manage data and how users can express their preferences. This imbalance often favors industries, forcing users to repeatedly define their privacy preferences each time they access a new website. This process contributes to the privacy paradox. We propose a user support tool named the User Privacy Preference Management System (UPPMS) that eliminates the need for users to handle intricate banners or deceptive patterns. We have set up a process to guide even a non-expert user in creating a standardized personal privacy policy, which is automatically applied to every visited website by interacting with cookie banners. The process of generating actions to apply the user’s policy leverages customized Large Language Models. Experiments demonstrate the feasibility of analyzing HTML code to understand and automatically interact with cookie banners, even implementing complex policies. Our proposal aims to address the privacy paradox related to cookie banners by reducing information overload and decision fatigue for users. It also simplifies user navigation by eliminating the need to repeatedly declare preferences in intricate cookie banners on every visited website, while protecting users from deceptive patterns.
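The banner-interaction idea can be illustrated with a minimal sketch: parse a cookie banner's HTML, then pick the button that matches the user's standing policy. The UPPMS drives this decision with customized Large Language Models; the keyword match, the banner markup, and the button ids below are all made up for illustration.

```python
# Sketch: find the cookie-banner button matching a standardized user policy.
from html.parser import HTMLParser

class BannerButtons(HTMLParser):
    """Collect (id, text) pairs for every <button> in a banner."""
    def __init__(self):
        super().__init__()
        self.buttons = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self._current = [dict(attrs).get("id", ""), ""]

    def handle_data(self, data):
        if self._current is not None:
            self._current[1] += data.strip()

    def handle_endtag(self, tag):
        if tag == "button" and self._current is not None:
            self.buttons.append(tuple(self._current))
            self._current = None

def choose_button(html, policy="reject_all"):
    """Return the id of the banner button that matches the user's policy."""
    parser = BannerButtons()
    parser.feed(html)
    wanted = {"reject_all": "reject", "accept_all": "accept"}[policy]
    for btn_id, text in parser.buttons:
        if wanted in text.lower():
            return btn_id
    return None

banner = """
<div class="cookie-banner">
  <button id="btn-accept">Accept all cookies</button>
  <button id="btn-reject">Reject all</button>
</div>
"""
print(choose_button(banner, "reject_all"))
```

A real implementation must cope with banners that avoid such obvious wording, which is precisely why the paper delegates the HTML analysis to an LLM rather than to fixed keyword rules.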

23 pages, 13090 KiB  
Article
Integrated Visual Software Analytics on the GitHub Platform
Computers 2024, 13(2), 33; https://doi.org/10.3390/computers13020033 - 25 Jan 2024
Viewed by 917
Abstract
Readily available software analysis and analytics tools are often operated within external services, where the measured software analysis data are kept internally and no external access to the data is available. We propose an approach to integrating visual software analysis on the GitHub platform by leveraging GitHub Actions and the GitHub API, covering both analysis and visualization. The process performs software analysis for each commit, e.g., computing static source code complexity metrics, and augments the commit with the resulting data, stored as git objects within the same repository. We show that this approach is feasible by integrating it into 64 open source TypeScript projects. Furthermore, we analyze the impact on Continuous Integration (CI) run time and repository storage. The stored software analysis data are externally accessible to visualization tools, such as software maps. The effort to integrate our approach is limited to enabling the analysis component within a project's CI on GitHub and embedding an HTML snippet into the project's website for visualization. This gives a large number of projects access to software analysis and provides a means to communicate the current status of a project.
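The per-commit analysis step can be sketched as a small metric function. The paper runs such analyses inside GitHub Actions and stores the results as git objects alongside each commit; the sketch below shows only the metric itself, and counting branch keywords is a rough cyclomatic-complexity proxy chosen for illustration, not the paper's exact metric.

```python
# Sketch: a crude cyclomatic-complexity proxy for one source file,
# in the spirit of McCabe: 1 + number of branching constructs.
import re

BRANCHES = re.compile(r"\b(?:if|for|while|case|catch)\b|&&|\|\|")

def complexity_proxy(source: str) -> int:
    return 1 + len(BRANCHES.findall(source))

snippet = """
function f(x) {
  if (x > 0 && x < 10) { return 1; }
  for (let i = 0; i < x; i++) { x--; }
  return 0;
}
"""
print(complexity_proxy(snippet))
```

In the paper's pipeline, a value like this would be computed for every file at every commit and committed back to the repository, so visualization tools can read it through the ordinary GitHub API.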

14 pages, 283 KiB  
Article
A Comparative Study of Commit Representations for JIT Vulnerability Prediction
Computers 2024, 13(1), 22; https://doi.org/10.3390/computers13010022 - 11 Jan 2024
Viewed by 723
Abstract
With the evolution of software systems, their size and complexity are rising rapidly. Identifying vulnerabilities as early as possible is crucial for ensuring high software quality and security. Just-in-time (JIT) vulnerability prediction, which aims to find vulnerabilities at the time of commit, has increasingly become a focus of attention. In our work, we present a comparative study to provide insights into the current state of JIT vulnerability prediction by examining three candidate models: CC2Vec, DeepJIT, and Code Change Tree. These unique approaches aptly represent the various techniques used in the field, allowing us to offer a thorough description of the current limitations and strengths of JIT vulnerability prediction. Our focus was on the predictive power of the models, their usability in terms of false positive (FP) rates, and the granularity of the source code analysis they are capable of handling. For training and evaluation, we used two recently published datasets containing vulnerability-inducing commits: ProjectKB and Defectors. Our results highlight the trade-offs between predictive accuracy and operational flexibility and also provide guidance on the use of ML-based automation for developers, especially considering false positive rates in commit-based vulnerability prediction. These findings can serve as crucial insights for future research and practical applications in software security.

25 pages, 28064 KiB  
Article
Comparative GIS Analysis of Public Transport Accessibility in Metropolitan Areas
Computers 2023, 12(12), 260; https://doi.org/10.3390/computers12120260 - 15 Dec 2023
Viewed by 1200
Abstract
With urban areas facing rapid population growth, public transport plays a key role in providing efficient and economical accessibility for residents. It reduces the use of personal vehicles, leading to less traffic congestion on roads and less pollution. To assess the performance of these transport systems, prior studies have considered blank spot areas, population density, and stop access density; however, very little research has compared accessibility between cities using a GIS-based approach. This paper compares the access and performance of public transport across Melbourne and Sydney, two cities of similar size, population, and economy. The methodology uses spatial PostGIS queries in an accessibility-based approach for each residential mesh block, aggregating the blank spots, the number of services offered by time of day, and the frequency of services at the local government area (LGA) level. The results of the study reveal an interesting trend: as an LGA's distance from the city centre increases, the blank spot percentage increases while the frequency of services and the number of stops offering weekend/night services decline. The results conclude that while Sydney exhibits a lower percentage of blank spots and better coverage, Melbourne's LGAs perform better in terms of accessibility by service time and frequency, even as distance from the city centre increases.
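The blank-spot test at the heart of this analysis is easy to illustrate: a residential mesh block counts as a blank spot when no public transport stop lies within walking distance. The study performs this with spatial PostGIS queries over real mesh blocks and stop data; the coordinates and the 400 m walking threshold below are illustrative assumptions only.

```python
# Sketch: classify a mesh-block centroid as a blank spot if every stop
# is farther away than a walking-distance threshold.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def is_blank_spot(block, stops, walk_m=400):
    return all(haversine_m(*block, *stop) > walk_m for stop in stops)

stops = [(-37.8136, 144.9631), (-37.8180, 144.9670)]   # made-up stop locations
near = (-37.8140, 144.9635)     # well within walking range of the first stop
far = (-37.9000, 145.1000)      # many kilometres from any listed stop
print(is_blank_spot(near, stops), is_blank_spot(far, stops))
```

Aggregating this boolean over all mesh blocks in an LGA yields the blank-spot percentage the paper compares across Melbourne and Sydney; in PostGIS the same test is typically a single ST_DWithin-style query.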

14 pages, 441 KiB  
Article
Optimizing Hardware Resource Utilization for Accelerating the NTRU-KEM Algorithm
Computers 2023, 12(12), 259; https://doi.org/10.3390/computers12120259 - 13 Dec 2023
Viewed by 1102
Abstract
This paper focuses on enhancing the performance of the Nth-degree truncated-polynomial ring units key encapsulation mechanism (NTRU-KEM) algorithm, which ensures post-quantum resistance in the field of key establishment cryptography. The NTRU-KEM, while robust, demands more storage and computation than classical cryptography, leading to significant memory and performance overheads. In resource-constrained environments these overheads are felt more acutely, prompting researchers to seek accelerations that are also area-efficient. To address this, our research carefully examines the individual functions of the NTRU-KEM algorithm and adopts a software/hardware co-design approach, which allows computation to be customized to the varying requirements of operation timings and iterations. The key contribution is a novel hardware acceleration technique focused on optimizing bus utilization, which enables parallel processing of multiple sub-functions and enhances the overall efficiency of the system. Furthermore, we introduce a unique integrated register array that significantly reduces the spatial footprint of the design by merging multiple registers within the accelerator. In our experiments, the design achieved a time-area efficiency that surpasses previous work by an average factor of 25.37, underscoring the effectiveness of our optimization in accelerating the NTRU-KEM algorithm.
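The computational hot spot such accelerators target is multiplication in the truncated polynomial ring Z_q[x]/(x^N − 1), where each output coefficient is a cyclic convolution. A direct O(N^2) reference version makes the structure clear; the parameters below are toy values for illustration (production NTRU parameter sets use much larger N and q), and this is in no way a secure implementation.

```python
# Sketch: schoolbook multiplication in Z_q[x]/(x^N - 1).
# Exponents wrap around modulo N, coefficients modulo q.
def ring_mul(a, b, n, q):
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

# Toy check with N=3, q=7: (1 + x) * (1 + x + x^2) mod (x^3 - 1)
print(ring_mul([1, 1, 0], [1, 1, 1], 3, 7))
```

It is exactly the independent inner products of this loop nest that a hardware design can compute in parallel, which is why bus utilization between the sub-functions becomes the bottleneck the paper optimizes.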

14 pages, 639 KiB  
Article
B-PSA: A Binary Pendulum Search Algorithm for the Feature Selection Problem
Computers 2023, 12(12), 249; https://doi.org/10.3390/computers12120249 - 29 Nov 2023
Viewed by 1082
Abstract
The digitization of information and technological advancements have enabled us to gather vast amounts of data from various domains, including but not limited to medicine, commerce, and mining. Machine learning techniques use this information to improve decision-making, but they have a significant weakness: they are very sensitive to variation in the data, which must therefore be cleaned of irrelevant and redundant information. This removal of information is known as the Feature Selection Problem. This work applies the Pendulum Search Algorithm to the Feature Selection Problem. As the Pendulum Search Algorithm is a metaheuristic designed for continuous optimization problems, a binarization step is performed using the Two-Step Technique. Preliminary results on well-known benchmarks indicate that our proposal is competitive with other metaheuristics from the literature.
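The Two-Step Technique mentioned in this abstract maps a continuous metaheuristic position to a {0, 1} feature mask: step one squashes each coordinate through a transfer function, step two compares the result with a uniform random draw. The sketch below follows those standard definitions with an S-shaped (sigmoid) transfer function; the particular function and input vector are illustrative choices, not necessarily the ones the paper evaluates.

```python
# Sketch: Two-Step binarization of a continuous solution vector.
import math
import random

def transfer_s(x):
    """S-shaped transfer function T(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng):
    """Step 1: transfer to [0, 1]. Step 2: threshold against a random draw."""
    return [1 if rng.random() < transfer_s(x) else 0 for x in position]

rng = random.Random(42)
# Strongly negative coordinates are almost surely dropped (0);
# strongly positive ones are almost surely selected (1).
print(binarize([-6.0, 0.0, 6.0], rng))
```

Each resulting bit simply says whether the corresponding feature is kept, so the continuous pendulum dynamics can explore the search space while the classifier is always trained on a discrete feature subset.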

16 pages, 652 KiB  
Article
Credit Risk Prediction Based on Psychometric Data
Computers 2023, 12(12), 248; https://doi.org/10.3390/computers12120248 - 28 Nov 2023
Viewed by 1665
Abstract
In today’s financial landscape, traditional banking institutions rely extensively on customers’ historical financial data to evaluate their eligibility for loan approvals. While these decision support systems offer predictive accuracy for established customers, they overlook a crucial demographic: individuals without a financial history. To address this gap, our study presents a methodology for a decision support system that is intended to assist in determining credit risk. Rather than solely focusing on past financial records, our methodology assesses customer credibility by generating credit risk scores derived from psychometric test results. Utilizing machine learning algorithms, we model customer credibility through multidimensional metrics such as character traits and attitudes toward money management. Preliminary results from our prototype testing indicate that this innovative approach holds promise for accurate risk assessment.

17 pages, 2738 KiB  
Article
Analyzing the Spread of Misinformation on Social Networks: A Process and Software Architecture for Detection and Analysis
Computers 2023, 12(11), 232; https://doi.org/10.3390/computers12110232 - 14 Nov 2023
Viewed by 1935
Abstract
The rapid dissemination of misinformation on social networks, particularly during public health crises like the COVID-19 pandemic, has become a significant concern. This study investigates the spread of misinformation on social network data using social network analysis (SNA) metrics and, more generally, well-known network science metrics. Moreover, we propose a process design that utilizes social network data from Twitter to analyze the involvement of non-trusted accounts in spreading misinformation, supported by a proof-of-concept prototype. The prototype includes modules for data collection, data preprocessing, network creation, centrality calculation, community detection, and misinformation spreading analysis. We conducted an experimental study on a COVID-19-related Twitter dataset using these modules. The results demonstrate the effectiveness of our approach and process steps, and provide valuable insight into applying network science metrics to social network data for analyzing various influence parameters in misinformation spreading.
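The centrality-calculation module described in this abstract can be sketched with plain network science: degree centrality ranks accounts by how many direct interaction links (e.g., retweets or mentions) they have, normalized by the number of other nodes. The edge list below is invented for illustration; the study worked on a real COVID-19 Twitter dataset.

```python
# Sketch: degree centrality over an undirected interaction network.
from collections import Counter

def degree_centrality(edges):
    """Fraction of the other nodes each node is directly connected to."""
    deg = Counter()
    nodes = set()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        nodes |= {u, v}
    n = len(nodes)
    return {node: deg[node] / (n - 1) for node in nodes}

edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]
cent = degree_centrality(edges)
print(max(cent, key=cent.get))   # the best-connected account
```

In the proposed process, centrality scores like these are cross-referenced with a list of non-trusted accounts to estimate how much influence those accounts exert on the spreading process.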

14 pages, 525 KiB  
Article
Moving towards a Mutant-Based Testing Tool for Verifying Behavior Maintenance in Test Code Refactorings
Computers 2023, 12(11), 230; https://doi.org/10.3390/computers12110230 - 13 Nov 2023
Viewed by 1113
Abstract
Evaluating mutation testing behavior can help decide whether a refactoring successfully maintains the expected initial test results. Moreover, manually performing this analytical work is both time-consuming and prone to errors. This paper extends an approach to assessing test code behavior and proposes a tool called MeteoR. The tool comprises an IDE plugin to detect issues that may arise during test code refactoring, reducing the effort required to perform evaluations. A preliminary assessment was conducted to validate the tool and ensure the proposed test code refactoring approach is adequate. By analyzing not only the mutation score but also the mutants generated before and after refactoring, the results show that the approach can check whether the behavior of the mutants remains unchanged throughout the refactoring process. This proposal represents one more step toward the practice of test code refactoring. It can improve overall software quality, allowing developers and testers to safely refactor test code in a scalable and automated way.
