Special Issue "Artificial Intelligence for the Cloud Continuum"
Deadline for manuscript submissions: 31 August 2021.
Interests: cloud computing; edge computing; multiclouds; cloud security; topology and orchestration management for cloud applications
Interests: decision support systems; social computing; knowledge management; e-governance; web science
Compared with on-premise information systems, systems that leverage the cloud continuum of computing, which spans public clouds, private clouds, and fog and edge computing resources, offer economic, operational, and functional advantages. Economically, cloud computing provides cost flexibility and can reduce overall cost: heavy up-front capital investment in IT resources can be avoided, and the cost of deploying and maintaining those resources can be reduced, especially with emerging intelligent methods and algorithms supporting DevOps tasks. From an operational perspective, the services provided by cloud computing offer higher scalability.
The opportunities and potential advantages of the cloud continuum come with new challenges. Orchestration and scheduling in the cloud continuum is a complex problem: it must be intelligent enough to guarantee timely responses despite the uncertainty of the runtime environment, and it requires efficient management of limited, highly segregated, and relatively unreliable processing and storage resources. Moreover, the edge side of the cloud computing continuum contains nodes that typically have resource and hardware limitations. Edge nodes are also generally heterogeneous, differing in characteristics such as computing power, memory size, storage capacity, and network bandwidth. The capacity-limited resources of the computing continuum must nevertheless ensure service availability regardless of the number of end-users’ client devices, as well as guarantee its quality of service (QoS).
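As a toy illustration of the placement problem described above, the following sketch picks, for each task, a heterogeneous edge node that still satisfies the task's resource demands. The node and task attributes and the best-fit scoring rule are illustrative assumptions, not the behaviour of any particular orchestrator:

```python
# Illustrative sketch: greedy best-fit placement of tasks onto heterogeneous
# edge nodes. Node/Task shapes and the slack-based score are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    cpu: float   # available CPU cores
    mem: float   # available memory (GB)

@dataclass
class Task:
    name: str
    cpu: float
    mem: float

def place(task: Task, nodes: List[Node]) -> Optional[Node]:
    """Pick the feasible node with the tightest fit (least remaining slack)."""
    feasible = [n for n in nodes if n.cpu >= task.cpu and n.mem >= task.mem]
    if not feasible:
        return None  # no node can host the task; a real scheduler would queue or offload
    # Prefer the tightest fit to keep larger nodes free for larger tasks.
    best = min(feasible, key=lambda n: (n.cpu - task.cpu) + (n.mem - task.mem))
    best.cpu -= task.cpu
    best.mem -= task.mem
    return best
```

A real continuum scheduler would also weigh network bandwidth, node reliability, and data locality, which is precisely where the AI methods solicited here come in.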
To take advantage of the opportunities of the cloud continuum, next-generation applications and services that employ a cloud execution model, in which deployment is made on interconnected, heterogeneous computing resources spanning the cloud continuum, are increasingly developed using lightweight virtualization approaches. With microservices, for example, an application may comprise many microservices, and only those deployed on a resource-constrained server need to be scaled out, providing resource and cost optimization benefits. The deployment and management of next-generation applications and services on the cloud continuum poses new challenges to DevOps teams which, at the same time, are opportunities for developing new AI methods, algorithms, and tools that can support DevOps operations and decision making.
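The selective scale-out idea above can be sketched as follows. The proportional replica rule mirrors the one commonly used by horizontal autoscalers; the service names, the 70% utilisation target, and the utilisation figures are assumptions for illustration only:

```python
# Illustrative sketch of selective scale-out: only the microservice whose
# observed utilisation exceeds the target gets more replicas; the rest are
# left untouched. Names, target, and utilisation values are assumptions.
import math

TARGET_UTILISATION = 0.7  # scale out when average utilisation exceeds 70%

def desired_replicas(current: int, utilisation: float) -> int:
    """Proportional rule: replicas grow with the ratio of observed to
    target utilisation; never scale in below the current count here."""
    return max(current, math.ceil(current * utilisation / TARGET_UTILISATION))

# (replicas, average utilisation) per microservice of one application
services = {"frontend": (2, 0.30), "catalog": (2, 0.95), "checkout": (1, 0.50)}
plan = {name: desired_replicas(r, u) for name, (r, u) in services.items()}
# Only "catalog" is over target, so only it is scaled out.
```

AI-based approaches would replace the fixed threshold with workload prediction so that replicas are added before, not after, the constrained microservice saturates.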
This Special Issue will bring researchers, academicians, and practitioners together to discuss new AI methods, algorithms, and tools, as well as new applications of AI for overcoming the challenges of deploying and managing next-generation applications and services on the cloud continuum. Topics of interest include (but are not limited to):
- AI for DevOps operations on the cloud continuum;
- AI for Function-as-a-Service (FaaS) provisioning, deployment, and management;
- AI in cloud serverless architectures;
- AI for secure communications management at the cloud continuum;
- AI for data portability, privacy, and security for preserving the confidentiality of data and protecting users’ privacy within the cloud computing continuum;
- Intelligent techniques to facilitate secure data portability and computation on sensitive data across heterogeneous resources;
- AI for adaptive deployment on heterogeneous computation resources;
- Intelligent methods for prediction, detection, and reaction on changes in the workloads;
- AI for application lifecycle support and optimization;
- Intelligent workflows, monitoring of execution platforms, application deployment, adaptation, and optimization of execution;
- Autonomic fault recovery of cloud continuum computing resources;
- AI for scalability and QoS at the cloud continuum;
- AI for optimizing energy efficiency of cloud continuum infrastructures.
Assoc. Prof. Dimitris Apostolou
Dr. Yiannis Verginadis
Prof. Dr. Gregoris Mentzas
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
The list below represents only planned manuscripts; some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.
Title: Microservices within self-adaptive fog computing frameworks
Authors: Salman Taherizadeh; Dimitris Apostolou; Yiannis Verginadis; Marko Grobelnik
Affiliation: Department of Informatics, University of Piraeus, Karaoli & Dimitriou 80, Piraeus 18534, Greece
Abstract: The rapid growth of new computing models such as fog computing frameworks has a great impact on the adoption of microservices, especially in dynamic environments where the amount of workload varies over time or when Internet of Things (IoT) devices dynamically change their geographic location. In order to exploit the true potential of fog computing applications, it is essential to use a comprehensive set of various intricate technologies together. This complex blend of technologies currently raises interoperability problems in such modern computing frameworks. Therefore, a comprehensive ontology is required to unambiguously specify notions of various concepts employed in self-adaptive fog computing applications. The goal of the present paper is therefore twofold: (i) offering a new ontology, which allows an easier understanding of microservices within adaptive fog computing frameworks, and (ii) presenting the latest open standards and tools which are now widely used to implement each class defined in our proposed ontology. The proposed ontology has been exploited in the PrEstoCloud project with the purpose of offering self-adaptive applications within fog computing frameworks.
Title: Optimization and prediction techniques for self-healing and self-learning applications in a trustworthy Cloud Continuum
Authors: Juncal Alonso; Leire Orue-Echevarria
Affiliation: TECNALIA, Basque Research and Technology Alliance (BRTA), Parque Científico y Tecnológico de Bizkaia, Astondo bidea, 700, E-48160 Derio, Spain
Abstract: The current IT market is more and more dominated by the “Cloud Continuum”, where increasing ubiquity and pervasiveness of compute capabilities and data availability have resulted in the proliferation of complex applications which effectively process data from heterogeneous digital sources in a timely manner and seamlessly combine real-time data with complex models and data analytics to monitor and manage systems of interest. Traditional approaches that rely on moving data to remote data centers for processing are no longer feasible. Instead, new approaches that effectively leverage distributed computational infrastructure and services are necessary. Specifically, these approaches must seamlessly combine resources and services at the edge (edge computing), in the core (cloud computing), and along the data path (fog computing) as needed, through a trusted Cloud continuum. This requires novel solutions for federating trustworthy infrastructure, programming applications and services, and composing dynamic workflows, which are capable of reacting in real-time to unpredictable data sizes, availability, locations, and rates. In this paper we analyse how AI-based techniques and tools can enhance the operation of complex applications to support the broad and multi-stage heterogeneity of the “Computing Continuum”. To this extent the presented work proposes a set of tools, methods and techniques for applications’ operators to seamlessly select, combine, configure and adapt computation resources all along the data path and support the complete service lifecycle covering: 1) optimized distributed application deployment over heterogeneous computing resources, 2) monitoring of execution platforms in real-time including continuous control and trust of the infrastructural services, 3) application deployment and adaptation while optimising the execution, and 4) application self-healing to avoid compromising situations that may lead to an unexpected failure.