Computational Science and Its Applications 2025 (ICCSA 2025)

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (30 April 2026)

Special Issue Editor


Dr. Osvaldo Gervasi
Guest Editor
Department of Mathematics and Computer Science, University of Perugia, 06123 Perugia, Italy
Interests: parallel and distributed systems; grid computing; cloud computing; virtual reality and scientific visualization; implementation of algorithms for molecular studies; multimedia and internet computing; e-learning

Special Issue Information

Dear Colleagues,

The 25th International Conference on Computational Science and Its Applications (ICCSA 2025) was held from 30 June to 3 July 2025 in collaboration with Galatasaray University, Istanbul, Türkiye. Computational science is a main pillar of much of today's industrial, commercial, and academic research, and it plays a unique role in exploiting innovative information and communication technologies. ICCSA offers a genuine opportunity to discuss new issues, tackle complex problems, and find advanced solutions able to shape new trends in computational science. For more information, please visit http://www.iccsa.org/.

The authors of selected high-quality full papers will be invited after the conference to submit revised and extended versions of their accepted conference papers to this Special Issue of Computers, published by MDPI in open access format. Papers will be selected on the basis of their ratings in the conference review process, the quality of their presentation at the conference, and their expected impact on the research community. Each submission to this Special Issue must contain at least 50% new material, e.g., technical extensions, more in-depth evaluations, or additional use cases, as well as a revised title, abstract, and keywords. These extended submissions will undergo peer review according to the journal's standard procedures.

We also encourage original research related to computational science. Topics of interest for this Special Issue include, but are not limited to, the following:

  • High-Performance Computing and Networks (Parallel and Distributed Computing):
    • Cluster Computing;
    • Supercomputing;
    • Cloud Computing;
    • Autonomic Computing;
    • P2P Computing;
    • Mobile Computing;
    • Grid and Semantic Grid Computing;
    • Workflow Design and Practice;
    • Computer and Network Architecture.
  • Geometric Modelling, Graphics, and Visualization:
    • Scientific Visualization;
    • Computer Graphics;
    • Geometric Modelling;
    • Pattern Recognition;
    • Image Processing;
    • CAD/CAM;
    • Web3D, Virtual and Augmented Reality.
  • Information Systems and Technologies:
    • Information Retrieval;
    • Scientific Databases;
    • Security Engineering;
    • Risk Analysis;
    • Reliability Engineering;
    • Software Engineering;
    • Data Mining;
    • Artificial Intelligence;
    • Machine Learning;
    • Learning Technologies;
    • Web-Based Computing;
    • Web 2.0;
    • Blockchain.

Dr. Osvaldo Gervasi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • high-performance computing and networks; parallel and distributed computing
  • geometric modelling, graphics and visualization
  • information systems and technologies

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (7 papers)


Research

52 pages, 640 KB  
Article
xjb: Fast Float to String Algorithm
by Junbo Xiang and Tiejun Wang
Computers 2026, 15(5), 280; https://doi.org/10.3390/computers15050280 - 27 Apr 2026
Abstract
Efficiently and accurately converting floating-point numbers to decimal strings remains a fundamental challenge in numerical computation, data serialization, and human–computer interaction. While modern algorithms such as Ryū, Dragonbox, and Schubfach rigorously satisfy the Steele–White criteria for correctness and minimal output length, their performance is frequently constrained by branch mispredictions, high-precision multiplication overhead, and suboptimal utilization of instruction-level parallelism. This paper introduces xjb, a novel floating-point–string conversion algorithm derived from Schubfach that systematically overcomes these bottlenecks. By restructuring the core computation to reduce instruction dependencies, adopting branchless decision logic, and exploiting SIMD instruction sets for decimal-to-ASCII formatting, xjb delivers state-of-the-art throughput across diverse hardware platforms. The algorithm requires only a single 64-by-128-bit multiplication for IEEE 754 binary64 conversions and a single 64-by-64-bit multiplication for binary32, drastically decreasing arithmetic complexity. Extensive benchmarking on AMD R7-7840H and Apple M1/M5 processors demonstrates that xjb consistently outperforms leading contemporary implementations. Notably, on the Apple M5, xjb achieves speedups of approximately 20% and 136% for binary64 and binary32 conversions, respectively, when compared to the highly optimized zmij library. The algorithm is fully compliant with the Steele–White principle; exhaustive validation over the entire binary32 space and extensive random testing across the binary64 range confirm both its theoretical soundness and practical robustness.
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
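The digit-formatting stage the abstract refers to is often accelerated by emitting two decimal digits per step from a lookup table, which halves the number of divisions and removes per-digit branching. The sketch below illustrates that general technique only; it is not the xjb algorithm, and the function name is our own:

```python
# Precomputed table of all two-digit ASCII pairs: "00", "01", ..., "99".
_PAIRS = [f"{i:02d}" for i in range(100)]

def digits_to_ascii(n: int) -> str:
    """Format a non-negative integer two digits at a time via the lookup table."""
    if n < 10:
        return str(n)
    out = []
    while n >= 100:
        n, rem = divmod(n, 100)   # peel off the two lowest digits
        out.append(_PAIRS[rem])
    out.append(_PAIRS[n] if n >= 10 else str(n))
    return "".join(reversed(out))
```

SIMD formatters such as those the paper builds on apply the same idea to several digit pairs in parallel rather than one pair per loop iteration.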

27 pages, 4141 KB  
Article
Case Studies on the Logical Structure of the Algorithms Tabu Search and Threshold Accepting for Generating Solutions in Searching and Solving the Bin-Packing Problem
by Vanesa Landero-Nájera, Joaquín Pérez-Ortega, Laura Cruz-Reyes, Claudia Guadalupe Gómez-Santillán, Nelva N. Almanza-Ortega, Carlos Rodríguez-Orta and Carlos Andrés Collazos-Morales
Computers 2026, 15(5), 274; https://doi.org/10.3390/computers15050274 - 24 Apr 2026
Abstract
The logical structure of approximation algorithms has been identified by the scientific community in four principal parts: tuning parameters, generating initial solutions, generating neighbor solutions, and stopping algorithm execution. A review of the literature specifically for the algorithms Threshold Accepting (TA) and Tabu Search (TS) indicates that, in most cases, choices are performed on one or several of these logical parts, often implicitly guided by expert knowledge for improving algorithm performance. However, these design choices, particularly in the selection of initialization and neighborhood strategies, are rarely analyzed in a systematic and reproducible manner. A formal experimental framework is presented to systematically analyze logical structure design choices, which are typically based on empirical expertise, by isolating and evaluating the combined effects of methodologies in the logical parts of initialization and neighborhood under controlled conditions of TA and TS algorithms in solving the one-dimensional Bin Packing Problem (BPP). A total of 324 benchmark instances were used to assess multiple algorithmic variants. Performance was evaluated in terms of solution quality and computational effort, supported by graphical analysis and statistical methods, including Wilcoxon signed-rank tests, effect size measures, bootstrap-based confidence intervals, and linear regression. The experimental results consistently show that the simpler internal logical structure of TA and TS algorithms, specifically with a probability-guided initialization combined with a single neighborhood operator, can achieve a better balance between solution quality and computational effort compared to more complex alternatives in general instances of BPP.
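The Threshold Accepting acceptance rule studied in the paper (accept any neighbor whose cost exceeds the current cost by no more than a decaying threshold) can be sketched for one-dimensional bin packing as follows. This is an illustrative toy with ad hoc parameters and a naive move operator, not the authors' experimental code:

```python
import random

def bin_count(assignment, sizes, capacity):
    """Number of bins used; over-capacity packings cost infinity."""
    loads = {}
    for item, b in enumerate(assignment):
        loads[b] = loads.get(b, 0) + sizes[item]
    if any(load > capacity for load in loads.values()):
        return float("inf")
    return len(loads)

def threshold_accepting_bpp(sizes, capacity, steps=2000, seed=1):
    rng = random.Random(seed)
    # Initial solution: first-fit in the given item order.
    assignment, loads = [], []
    for s in sizes:
        for b, load in enumerate(loads):
            if load + s <= capacity:
                loads[b] += s
                assignment.append(b)
                break
        else:
            assignment.append(len(loads))
            loads.append(s)
    cost = bin_count(assignment, sizes, capacity)
    best, best_cost = assignment[:], cost
    threshold = 2.0
    for _ in range(steps):
        # Neighbor: reassign one random item to a random existing bin.
        cand = assignment[:]
        cand[rng.randrange(len(sizes))] = rng.randrange(max(assignment) + 1)
        c = bin_count(cand, sizes, capacity)
        if c <= cost + threshold:      # the TA acceptance rule
            assignment, cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        threshold *= 0.999             # deterministic threshold decay
    return best_cost, best
```

The paper's point is precisely that the choice of initialization (here first-fit) and neighborhood operator (here a single-item move) deserves systematic evaluation rather than ad hoc selection.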

20 pages, 5154 KB  
Article
A DSS Methodology for Emergency Management: Preliminary Application to the Municipality of Amatrice (Italy)
by Cristina Montaldi, Annamaria Felli, Vanessa Tomei and Francesco Zullo
Computers 2026, 15(3), 153; https://doi.org/10.3390/computers15030153 - 2 Mar 2026
Abstract
The increasing exposure of dispersed rural settlements to natural and infrastructural risks highlights the need for structured and reproducible territorial information layers capable of supporting future decision-making processes. To this end, a rigorous characterization of settlement nodes and their structural attributes is essential. This article represents a first exploratory application of the proposed methodology and constitutes an initial phase of its implementation. The objective is not to provide a definitive or exhaustive model, but rather to test the underlying theoretical framework through a preliminary experimentation aimed at verifying its internal coherence, replicability, and operational potential. In this initial stage, the methodology is applied to demonstrate concretely what types of information can be systematically collected and how an urban center can be characterized in terms of accessibility and its role within the broader territorial system. The methodology is applied to the municipality of Amatrice as a case study representative of highly fragmented inner-area settlements. This first implementation highlights the potential of the approach, allows for the identification of possible methodological criticalities, and lays the groundwork for more advanced and structured future developments. The contribution therefore constitutes a foundational analytical layer aimed at organizing territorial information in a structured form and providing a coherent basis for future analyses and territorial and emergency management strategies.

24 pages, 401 KB  
Article
A Multimodal Transformer-Based Framework for Emotion Analysis in Multilingual Video Content
by Sehmus Yakut, Yusuf Taha Tuten, Eren Caglar and Mehmet S. Aktas
Computers 2026, 15(2), 77; https://doi.org/10.3390/computers15020077 - 1 Feb 2026
Abstract
This research addresses the challenge of inferring complex psychological states, including stress, fatigue, anxiety, cognitive load, and boredom, from facial expressions. We propose an interpretable, literature-informed emotion-weighting methodology that transforms the eight-emotion probability outputs of facial emotion recognition models into continuous estimates of these five psychological states using weights derived from the Valence–Arousal framework, providing a principled bridge between discrete emotion predictions and higher-level affective constructs. The proposed formulation is evaluated across six representative deep learning architectures: a baseline CNN (ResNet-50), a modern CNN (ConvNeXt), a hybrid attention-based model (DDAMFN), and three Transformer-based models (ViT, BEiT, and Swin). Our results demonstrate that strong performance on discrete FER tasks does not directly translate to consistent behavior in complex state inference; instead, architectures capable of preserving subtle and distributed affective cues yield more stable and interpretable state estimates, with DDAMFN and Vision Transformer models exhibiting the most consistent performance across the evaluated psychological states. These findings highlight the central role of the proposed emotion-weighting formulation and the importance of architecture selection beyond categorical accuracy in complex affective state analysis.
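The emotion-weighting idea, a linear map from discrete emotion probabilities to a continuous state score, can be sketched as follows. The emotion ordering and weight values here are placeholders chosen for illustration; the paper derives its own weights from the Valence–Arousal framework:

```python
# Eight discrete FER emotions in one common ordering (the paper's may differ).
EMOTIONS = ["neutral", "happy", "sad", "surprise",
            "fear", "disgust", "anger", "contempt"]

# Hypothetical weights for a single state ("stress"); purely illustrative.
STRESS_WEIGHTS = [-0.2, -0.5, 0.3, 0.1, 0.8, 0.4, 0.9, 0.2]

def state_estimate(probs, weights):
    """Map an emotion probability distribution to a state score in [0, 1].

    Because probs is a convex combination, the raw score lies between
    min(weights) and max(weights); min-max scaling maps it to [0, 1].
    """
    raw = sum(p * w for p, w in zip(probs, weights))
    lo, hi = min(weights), max(weights)
    return (raw - lo) / (hi - lo)
```

With one weight vector per psychological state, the same FER output vector yields five continuous state estimates.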

24 pages, 3292 KB  
Article
Comparing Emerging and Hybrid Quantum–Kolmogorov Architectures for Image Classification
by Lelio Campanile, Mariarosaria Castaldo, Stefano Marrone and Fabio Napoli
Computers 2026, 15(1), 65; https://doi.org/10.3390/computers15010065 - 16 Jan 2026
Abstract
The rapid evolution of Artificial Intelligence has enabled significant progress in image classification, with emerging approaches extending traditional deep learning paradigms. This article presents an extended version of a paper originally introduced at ICCSA 2025, providing a broader comparative analysis of classical, spline-based, and quantum machine learning architectures. The study evaluates Convolutional Neural Networks (CNNs), Kolmogorov–Arnold Networks (KANs), Convolutional KANs (CKANs), and Quantum Convolutional Neural Networks (QCNNs) on the Labeled Faces in the Wild dataset. In addition to these baselines, two novel architectures are introduced: a fully quantum Kolmogorov–Arnold model (F-QKAN) and a hybrid KAN–Quantum network (H-QKAN) that combines spline-based feature extraction with variational quantum classification. Rather than targeting state-of-the-art performance, the evaluation focuses on analyzing the behaviour of these architectures in terms of accuracy, computational efficiency, and interpretability under a unified experimental protocol. Results show that the fully quantum F-QKAN achieves a test accuracy above 80%. The hybrid H-QKAN obtains the best overall performance, exceeding 92% accuracy with rapid convergence and stable training dynamics. Classical CNN models remain state-of-the-art in terms of predictive performance, whereas CKANs offer a favorable balance between accuracy and efficiency. QCNNs show potential in ideal noise-free settings but are significantly affected by realistic noise conditions, motivating further investigation into hybrid quantum–classical designs.

19 pages, 2837 KB  
Article
An Open-Source System for Public Transport Route Data Curation Using OpenTripPlanner in Australia
by Kiki Adhinugraha, Yusuke Gotoh and David Taniar
Computers 2026, 15(1), 58; https://doi.org/10.3390/computers15010058 - 14 Jan 2026
Abstract
Access to large-scale public transport journey data is essential for analysing accessibility, equity, and urban mobility. Although digital platforms such as Google Maps provide detailed routing for individual users, their licensing and access restrictions prevent systematic data extraction for research purposes. Open-source routing engines such as OpenTripPlanner offer a transparent alternative, but are often limited to local or technical deployments that restrict broader use. This study evaluates the feasibility of deploying a publicly accessible, open-source routing platform based on OpenTripPlanner to support large-scale public transport route simulation across multiple cities. Using Australian metropolitan areas as a case study, the platform integrates GTFS and OpenStreetMap data to enable repeatable journey queries through a web interface, an API, and bulk processing tools. Across eight metropolitan regions, the system achieved itinerary coverage above 90 percent and sustained approximately 3000 routing requests per minute under concurrent access. These results demonstrate that open-source routing infrastructure can support reliable, large-scale route simulation using open data. Beyond performance, the platform enables public transport accessibility studies that are not feasible with proprietary routing services, supporting reproducible research, transparent decision-making, and evidence-based transport planning across diverse urban contexts.
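GTFS feeds such as those the platform ingests are plain CSV files; for example, stops.txt lists stop_id, stop_name, stop_lat, and stop_lon (fields defined by the GTFS static specification). A minimal parsing sketch, with invented sample rows:

```python
import csv
import io

# Invented sample rows in the stops.txt format (a real feed ships this file
# inside a zip archive alongside routes.txt, trips.txt, stop_times.txt, ...).
SAMPLE_STOPS = """stop_id,stop_name,stop_lat,stop_lon
1071,Flinders Street Station,-37.8183,144.9671
1181,Southern Cross Station,-37.8184,144.9525
"""

def load_stops(text):
    """Parse stops.txt content into {stop_id: (name, lat, lon)}."""
    reader = csv.DictReader(io.StringIO(text))
    return {row["stop_id"]: (row["stop_name"],
                             float(row["stop_lat"]),
                             float(row["stop_lon"]))
            for row in reader}
```

Because GTFS is an open, tabular format, the same feed can drive OpenTripPlanner's graph build and independent bulk analyses alike.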

13 pages, 1149 KB  
Article
Monitoring IoT and Robotics Data for Sustainable Agricultural Practices Using a New Edge–Fog–Cloud Architecture
by Mohamed El-Ouati, Sandro Bimonte and Nicolas Tricot
Computers 2026, 15(1), 32; https://doi.org/10.3390/computers15010032 - 7 Jan 2026
Abstract
Modern agricultural operations generate high-volume and diverse data (historical and stream) from various sources, including IoT devices, robots, and drones. This paper presents a novel smart farming architecture specifically designed to efficiently manage and process this complex data landscape. The proposed architecture comprises five distinct, interconnected layers: the Source Layer, the Ingestion Layer, the Batch Layer, the Speed Layer, and the Governance Layer. The Source Layer serves as the unified entry point, accommodating structured, spatial, and image data from sensors, drones, and ROS-equipped robots. The Ingestion Layer uses a hybrid fog/cloud architecture with Kafka for real-time streams and for batch processing of historical data. Data is then segregated for processing: the cloud-deployed Batch Layer employs a Hadoop cluster, Spark, Hive, and Drill for large-scale historical analysis, while the Speed Layer utilizes Geoflink and PostGIS for low-latency, real-time geovisualization. Finally, the Governance Layer guarantees data quality, lineage, and organization across all components using Open Metadata. This layered, hybrid approach provides a scalable and resilient framework capable of transforming raw agricultural data into timely, actionable insights, addressing the critical need for advanced data management in smart farming.
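The segregation of data between a low-latency speed layer and a large-scale batch layer described in the abstract follows the familiar Lambda-architecture pattern. A toy dispatcher (our own illustration, unrelated to the authors' Kafka/Spark stack) might look like this:

```python
from datetime import datetime, timedelta, timezone

def dispatch(records, speed_handler, batch_handler,
             freshness=timedelta(seconds=60)):
    """Send fresh records to the speed layer, older ones to the batch layer."""
    now = datetime.now(timezone.utc)
    for rec in records:
        if now - rec["ts"] <= freshness:
            speed_handler(rec)     # low-latency path (e.g., geovisualization)
        else:
            batch_handler(rec)     # historical path (e.g., Hadoop/Spark jobs)
```

In a production deployment this routing decision is typically made at the ingestion layer (e.g., by topic or by consumer group) rather than in application code.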
