1. Introduction
This Special Issue presents some of the most recent innovations in cloud-native software and system engineering practices, providing a broad and well-grounded picture of what the increasingly common term “cloud-native” currently stands for. The contributions also address ad hoc approaches that became necessary as temporary measures during the COVID-19 pandemic and that substantially changed remote work. The issue gathers research from different disciplines and methodological backgrounds to discuss new ideas, research questions, recent results, and future challenges in the emerging area of cloud-native applications. Accordingly, the papers cover diverse aspects such as cloud-based data collaboratives, the adoption of cloud computing for high-performance computing, the intelligent and autonomous management of cloud-native networks, and even cloud-native opportunities for volunteer computing.
Even small companies can generate enormous economic growth and business value by providing cloud-based services or applications. Instagram, Uber, Airbnb, Dropbox, WhatsApp, Netflix, Zoom, and many more all had astonishingly modest headcounts in their early days. Yet just a few years later, these “cloud-native” enterprises had achieved a remarkable economic and social impact. What is more, they changed how large-scale applications are built today. What these companies have in common is their cloud-first approach: they deliberately build on cloud resources, which allows them to scale their services globally as quickly as needed. During the worldwide COVID-19 shutdowns, these “cloud-native” companies emerged as an essential, if largely unnoticed, backbone that kept even large economies (at least partly) operating. Their services enabled, almost overnight, remote working for company staff who suddenly found themselves in home offices, as well as ad hoc remote teaching for teachers and students at schools and universities. During the pandemic, these “cloud-native” services were among the things that “kept our heads above water”, and they have substantially changed how we operate since. The contributions of this Special Issue show how the cloud-native design philosophy can profoundly influence system design and evolve it to a new level of elasticity and agility.
2. Contributions
The papers included in this Special Issue of the Future Internet journal highlight emerging issues of theoretical and practical importance associated with the cloud-native design philosophy.
The first paper [1] reports on XPRIZE and its multi-million-dollar global competitions designed to incentivize technological breakthroughs that accelerate humanity toward a better future. The paper is a case study of the requirements, design, and implementation of the XPRIZE Data Collaborative, a cloud-based infrastructure that enables XPRIZE to meet its COVID-19 mission and host future data-centric competitions. The authors examine how a cloud-native application can use an unexpected variety of cloud technologies, ranging from containers and serverless computing to older ones such as virtual machines. The authors also document the pandemic’s effects on application development in the cloud.
The second paper [2] deals with high-performance computing (HPC) as a key enabling technology for advancing scientific progress, industrial competitiveness, national and regional security, and the quality of human life. Recent advances in cloud computing and telecommunications have the potential to overcome the historical issues associated with HPC through increased flexibility and efficiency and reduced capital and operational expenditure. The study comprised a survey of HPC decision-makers worldwide; additionally, a modified Delphi method was conducted with 13 experts to identify and prioritize critical issues in adopting cloud computing for HPC. The results suggest that organizational, data privacy, security, and human factors significantly influence cloud computing adoption decisions for HPC.
The third paper [3] focuses on cloud-native network design, which transforms communication networks into a versatile platform for converged network-cloud/edge service provisioning. Intelligent and autonomous management is one of the most challenging issues in cloud-native future networks. The paper provides a big picture of recent developments in architectural frameworks for intelligent and autonomous management of future networks. It surveys the latest progress in the standardization of network management architectures and analyzes how cloud-native network design may facilitate architecture development for addressing management challenges. Open issues related to intelligent and autonomous management in cloud-native future networks are also discussed to identify possible directions for future research and development.
The last paper [4] reports on a fascinating and often overlooked effect of the COVID-19 pandemic. From its very beginning, the pandemic led to the creation of the largest distributed volunteer supercomputer on earth. Sadly, this supercomputer had significant idle times during the first phase of the pandemic. The paper reviews the current state of volunteer and cloud computing to analyze what both domains could learn from each other. It turns out that the resource-sharing shortcomings identified in volunteer computing could be addressed by technologies that have been invented, optimized, and adapted for entirely different purposes by cloud-native companies such as Uber, Airbnb, Google, and Facebook. Promising technologies include containers, serverless architectures, image registries, and distributed service registries; all have one thing in common: they already exist and are tried and tested in large web-scale deployments.