Special Issue "Artificial Intelligence for the Cloud Continuum"
Deadline for manuscript submissions: 30 June 2023
Interests: decision support systems; artificial intelligence; information systems
Interests: cloud computing; edge computing; multiclouds; cloud security; topology and orchestration management for cloud applications
Compared to on-premise information systems, systems that leverage the cloud continuum of computing, which spans public clouds, private clouds, and fog and edge computing resources, offer economic, operational, and functional advantages. Economically, cloud computing provides cost flexibility that can reduce overall expenditure: heavy initial capital investments in IT resources can be avoided, and the cost of deploying and maintaining these resources can be lowered, especially with emerging intelligent methods and algorithms supporting DevOps tasks. From an operational perspective, the services provided by cloud computing offer higher scalability.
The opportunities and potential advantages of the cloud continuum come with new challenges. Orchestration and scheduling in the cloud continuum is a complex problem: solutions must be intelligent enough to respond reliably to the uncertainty of the runtime environment, which requires efficient management of limited, highly segregated, and relatively unreliable processing and storage resources. Moreover, the edge side of the cloud computing continuum contains nodes that typically have resource and hardware limitations. Edge nodes are also generally heterogeneous, differing in characteristics such as computing power, memory size, storage capacity, and network bandwidth. The capacity-limited resources of the computing continuum must ensure service availability regardless of the number of end-users' client devices, while also guaranteeing quality of service (QoS).
To take advantage of the opportunities of the cloud continuum, next-generation applications and services that employ a cloud execution model, deployed on interconnected, heterogeneous computing resources spanning the cloud continuum, are increasingly developed using lightweight virtualization approaches. With microservices, for example, an application may comprise many independently deployable services, and only those running on resource-constrained servers need to be scaled out, yielding resource and cost optimization benefits. Deploying and managing next-generation applications and services on the cloud continuum poses new challenges for DevOps teams which, at the same time, are opportunities for developing new AI methods, algorithms, and tools that can support DevOps operations and decision making.
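The selective scale-out property described above can be illustrated with a minimal sketch. The function name, threshold, and data layout below are illustrative assumptions, not part of any specific orchestration platform; the sketch simply shows that a scaling decision can target only the overloaded microservices of an application while leaving the others untouched.

```python
# Hypothetical sketch: scale out only the microservices whose resource
# utilization exceeds a threshold; all other services keep their replica count.

def scale_out_decisions(utilization, threshold=0.8, max_replicas=10):
    """Return the new replica count per microservice.

    utilization: dict mapping service name -> (current_replicas, cpu_fraction),
    where cpu_fraction is average CPU utilization in [0, 1].
    """
    decisions = {}
    for service, (replicas, cpu) in utilization.items():
        if cpu > threshold and replicas < max_replicas:
            decisions[service] = replicas + 1  # scale out this service only
        else:
            decisions[service] = replicas      # leave the service unchanged
    return decisions

# Example: only the overloaded 'checkout' service gains a replica.
state = {"frontend": (2, 0.35), "checkout": (3, 0.92), "catalog": (2, 0.50)}
print(scale_out_decisions(state))
```

In a production orchestrator the same per-service decision would be driven by monitored metrics rather than a static snapshot, but the cost benefit is the same: resources are added only where they are needed.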
This Special Issue will bring together researchers, academics, and practitioners to discuss new AI methods, algorithms, and tools, as well as new applications of AI, for overcoming the challenges of deploying and managing next-generation applications and services on the cloud continuum. Topics of interest include (but are not limited to):
- AI for DevOps operations on the cloud continuum;
- AI for Function-as-a-Service (FaaS) provisioning, deployment, and management;
- AI in cloud serverless architectures;
- AI for secure communications management across the cloud continuum;
- AI for data portability, privacy, and security for preserving the confidentiality of data and protecting users’ privacy within the cloud computing continuum;
- Intelligent techniques to facilitate secure data portability and computation on sensitive data across heterogeneous resources;
- AI for adaptive deployment on heterogeneous computation resources;
- Intelligent methods for prediction, detection, and reaction on changes in the workloads;
- AI for application lifecycle support and optimization;
- Intelligent workflows, monitoring of execution platforms, application deployment, adaptation, and optimization of execution;
- Autonomic fault recovery of cloud continuum computing resources;
- AI for scalability and QoS across the cloud continuum;
- AI for optimizing energy efficiency of cloud continuum infrastructures.
Assoc. Prof. Dimitris Apostolou
Dr. Yiannis Verginadis
Prof. Dr. Gregoris Mentzas
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.