This section presents the configuration setup of the testbed. After that, we analyze and compare the performance of the three MANO frameworks, namely OSM, SONATA and Cloudify, discussing their advantages and disadvantages. The Key Performance Indicators (KPIs) used for the comparison are based on the analysis presented in ref. [27], where the KPIs are categorized into functional ones, describing non-run-time characteristics of a MANO framework, such as the resource footprint and the number of supported VIMs, and operational ones, dealing with run-time operations, such as the time to on-board an NS package or the time to scale up a VNF. Our work focuses mainly on comparing the operational KPIs of the three MANOs, although a table comparing the functional KPIs is also provided.
4.1. Testbed Setup
For the sake of fairness in the comparison among the MANO frameworks, we created a sandbox test environment consisting of three physical servers (Table 2), very similar to a real design of an NFV production environment. One server is dedicated to the installation of the MANO framework, while the other two are used to host the NFVI, which has been selected to be OpenStack, Ocata version.
Regarding the physical networking infrastructure, the MANO framework is connected to the VIM via an Aruba 2930F 1 Gbps ToR switch (Figure 3). At this point, it should be noted that the MANO frameworks could also be installed and run in VMs. However, we did not opt for this choice because of the potential impact of the hypervisor on the performance of the MANO frameworks. Therefore, all frameworks have been tested on the same server, following a three-phase scheme: (a) installation of the Operating System (Table 3), (b) installation of the MANO framework and (c) execution of the performance tests.
All the MANOs were tested under the same conditions, performing the same actions, using the same NFVI and the same physical network infrastructure. To further ensure identical test conditions throughout the execution of the tests, the selected NSs consisted of the same VNFs, each implementing a simple CirrOS operating system (CirrOS 0.4.0), in all cases. Furthermore, all the performance tests were implemented by automated execution scripts based on the provided interfaces (CLI, RESTful APIs). To also capture the statistical behavior of the MANO frameworks, each test was executed 50 times, and we collected the minimum, maximum and average execution times. To facilitate this process and to recreate the same stress conditions for all MANOs, a Jenkins server was used. For our analysis we used SONATA release 5.0, OSM release 6.0 and Cloudify version 5.0.
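The repeated, timed execution described above can be sketched as follows; the wrapped CLI command and the `time_operation` helper are illustrative placeholders, not the exact scripts driven by our Jenkins jobs:

```python
import statistics
import subprocess
import time

def time_operation(command, runs=50):
    """Run `command` (a shell string, e.g. a MANO CLI call) `runs` times
    and return the (min, max, average) wall-clock duration in seconds.

    Illustrative sketch only: the real test scripts invoke the
    framework-specific CLI/REST interfaces described in the text."""
    durations = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(command, shell=True, check=True)
        durations.append(time.monotonic() - start)
    return min(durations), max(durations), statistics.mean(durations)
```

A Jenkins job can then simply call such a helper once per scenario and archive the three statistics per run.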
4.2. Comparison of Functional KPIs
In this section, we discuss some of the main functional characteristics of the MANO frameworks that have arisen during the installation and testing phases.
With respect to the resource footprint, all three platforms require the same low amount of resources (Table 4), which makes them lightweight and deployable even on a single server. It is also worth mentioning that the computational and storage resources of the sandbox environment hosting the MANO installations are more than enough to fulfill their requirements. Regarding the installation times, we observe that they are approximately the same for all frameworks (Table 4). In general, the installation time highly depends on the computing power and the Internet bandwidth of the host server. Since in our case we used the same physical server for all frameworks, a direct comparison between them is possible. Even though scripts for automated installation are provided in all cases, the SONATA and OSM scripts are fully automated, in contrast to Cloudify, where some manual steps are needed. On the other hand, there are some remarkable differences in the supported VIM types, such as AWS in OSM and Kubernetes in SONATA. Also, Cloudify, with the usage of the appropriate plugin, can support the definition of Kubernetes resources in the blueprints [28]. It is worth mentioning that OSM and SONATA follow the micro-services concept, implementing all their components as stateless services running as containers. This approach provides very fast startup times and lays the foundation for supporting High Availability (HA) MANO deployments in the future. These features are very important in production environments, because NFV vendors need to be sure that, in case of any malfunction, the MANO will react automatically and quickly fix the problem without any effect on the QoS. Finally, a common feature of all frameworks is the CLI management tool, which provides easy management access and programmability.
4.3. Testing Scenarios for Operational KPIs
In this section, we compare the operational characteristics of the MANO frameworks under test. In particular, we base our comparison on the following four scenarios. First, we measure the package on-boarding time, which is the time needed for an NS package, consisting of the VNF descriptors, the NS descriptor and the VNF Forwarding Graph (VNFFG), to be on-boarded on the platform. Second, we measure the NS instantiation time, which is the time required for the same NS package to be deployed in the OpenStack environment. Third, we measure the time needed for the scale-out operation and, finally, the time required for the scale-in operation. It is highlighted that all the above tests have been automatically executed 50 times, using NSs of different sizes, consisting of one, two and three VNF instances. The gathered results (minimum/maximum/average values) are presented in the following figures.
With respect to the on-boarding time, the three frameworks show similar performance. The required average time for all the scenarios was close to 2 s for OSM and Cloudify, while for SONATA it was around 1 s. Each VNF consists of one Virtual Deployment Unit (VDU), and the VNF images have been pre-loaded in the OpenStack Glance, so the measurements represent the time needed for the management actions of each MANO (e.g., user authentication, NSD/VNFD versioning, etc.). This behavior was expected, because all the MANOs under test use similar technologies (e.g., RESTful APIs) for the management of the NS packages.
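Since on-boarding reduces to a single authenticated REST call in all three frameworks, it can be timed along the following lines; the endpoint path, header layout and `onboard_package` helper are illustrative assumptions, not the actual northbound API of any of the three MANOs:

```python
import time
import urllib.request

def onboard_package(base_url, token, package_bytes):
    """POST an NS package archive to a (hypothetical) MANO northbound
    endpoint and return the elapsed on-boarding time in seconds.

    The `/packages` path and bearer-token header are placeholders; each
    MANO exposes its own package-management endpoint and auth scheme."""
    request = urllib.request.Request(
        f"{base_url}/packages",
        data=package_bytes,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/gzip"},
        method="POST",
    )
    start = time.monotonic()
    with urllib.request.urlopen(request) as response:
        response.read()  # wait for the MANO to acknowledge the package
    return time.monotonic() - start
```

With pre-loaded Glance images, such a call measures only the MANO-side management work (authentication, descriptor validation, versioning), as noted above.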
The results concerning the NS instantiation time revealed interesting aspects of the frameworks. As instantiation time, we define the period from the instantiation request until the time that the MANO changes the status of the new service to “READY”. In all MANO frameworks, a new NS is considered finished when all VDUs are running (this information comes from the VIM) and the network configuration is also completed. Of course, the fact that the VDUs are in the running state does not necessarily mean that the services they host are also up and running. SONATA provides an SDK tool that can be used for building and validating new packages, so that the developer can be sure that the package is going to be instantiated correctly. As shown in Figure 4, OSM performed better than the other two in all three deployment scenarios, while SONATA performs better than Cloudify only in the case of an NS with one VNF. Taking a closer look at the minimum and maximum values, it is evident that the maximum values of a certain MANO (e.g., SONATA) may exceed the average values of others (e.g., OSM). Another interesting observation is that in all cases the variation between the minimum and the maximum values is quite large; for SONATA, in some cases the maximum values are three or more times larger than the average value. This means that a small number of NS instantiations took considerably longer to complete, which is something that must be further investigated in the future.
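Measuring the instantiation time as defined above amounts to polling the MANO until the service status becomes “READY”; a minimal sketch, in which `get_status` is a hypothetical accessor standing in for the framework-specific status API:

```python
import time

def wait_until_ready(get_status, timeout=600.0, poll_interval=2.0):
    """Poll the NS status via `get_status` (a callable returning the
    current status string, standing in for each MANO's own API) until
    it reports "READY"; return the elapsed time in seconds.

    Illustrative only: timeout and poll interval are assumptions."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if get_status() == "READY":
            return time.monotonic() - start
        time.sleep(poll_interval)
    raise TimeoutError("NS did not reach READY within the timeout")
```

The measured value therefore includes both the VIM-side VDU boot time and the MANO's network configuration work.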
Regarding the scaling actions, many MANOs claimed in the past that they were capable of performing such actions as part of the LCM operations [27], without much success. In their latest versions, all the MANOs under test performed the requested scaling actions without any problems, giving the impression that the frameworks are mature enough to take their place in production environments. At this point, SONATA has a clear advantage by implementing a fully automated LCM mechanism that includes the definition of SLA contracts and policy enforcement through the FSM, based on monitoring metrics and rules. For each NS, the developer can define several policies, their triggering rules and the corresponding actions in the FSM, so that the user of the service can choose which policy best fits his/her needs and request it during the instantiation phase.
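To make the metric/rule/action relationship concrete, the shape of such a policy can be sketched as follows; the field names, metric name and thresholds are hypothetical illustrations, not SONATA's actual descriptor schema or FSM interface:

```python
# Illustrative metric-driven scaling policy in the spirit of an
# FSM-based LCM; all names and thresholds are hypothetical.
cpu_policy = {
    "name": "cpu-elastic",
    "metric": "vm_cpu_perc",
    "rules": [
        {"comparator": ">", "threshold": 80, "action": "scale_out"},
        {"comparator": "<", "threshold": 20, "action": "scale_in"},
    ],
}

def triggered_actions(policy, sample):
    """Return the actions whose rule matches the current metric sample
    (duration windows and hysteresis omitted for brevity)."""
    matched = []
    for rule in policy["rules"]:
        if rule["comparator"] == ">" and sample > rule["threshold"]:
            matched.append(rule["action"])
        elif rule["comparator"] == "<" and sample < rule["threshold"]:
            matched.append(rule["action"])
    return matched
```

The user-facing choice described above then reduces to selecting one such policy at instantiation time.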
In the scale-out scenario (Figure 5), the performance of Cloudify and SONATA is almost identical, while OSM, although performing better for the case of one VNF, linearly increases its required time for the cases of two and three VNFs, ending up with almost double the time, compared to the other two platforms, for the case of three VNFs. This OSM behavior is noted as a point for future work: if the scaling time is proportional to the number of VNF instances, it will have a negative impact on the LCM actions in case the NS consists of many VNFs.
Finally, with respect to the scale-in scenarios, the differences are small; however, OSM also performs slightly better in these scenarios (Figure 6). A general observation is that the scale-in actions are performed much faster than the scale-out ones in all cases. This is because, during the scale-in process, the MANO first sets the new network configuration, using the WIM controller, and then deletes the VDUs. In contrast, during the scale-out, the MANO must first instantiate the new VDUs, then create the new network graph, attach the VDUs to the new network and finally deliver the NS for usage.
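The asymmetry between the two operations can be summarized as two step sequences (illustrative, not tied to any one framework's internal workflow):

```python
# Why scale-in completes faster: scale-out blocks on VM boot time at the
# VIM before any networking can happen, whereas scale-in only applies a
# network reconfiguration and then issues comparatively fast deletions.
SCALE_OUT_STEPS = [
    "instantiate new VDUs",                 # dominant cost: VM boot on the VIM
    "create the new network graph",
    "attach the VDUs to the new network",
    "deliver the NS for usage",
]
SCALE_IN_STEPS = [
    "set the new network configuration via the WIM controller",
    "delete the surplus VDUs",
]
```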
As a general conclusion, all MANOs seem to be functional and to perform correctly all the requested tasks within a reasonable time period. No particular problems were encountered in terms of adoption, usage and stability of the provided mechanisms in any of the cases. The software releases were stable and facilitated the realization of the experiments and the examination of the targeted performance characteristics. Validation of the proper instantiation and execution of the NSs was realized by taking advantage of the available logging and reporting mechanisms. However, some limitations seem to exist in terms of security, since there are no strong guarantees in place for tackling targeted cyberattacks. The existence of a wide community supporting both OSM and Cloudify, along with the continuous expansion of the interested parties adopting SONATA, can act as a guarantee for the continuous evolution of the supported orchestration mechanisms and for the quality assurance processes in the software releases.
Furthermore, based on the operational and functional KPI measurements, OSM seems to perform better in most cases, but its scale-out time increases linearly, reaching double the value of Cloudify and SONATA for three VNFs. Therefore, future research work is needed on (a) performing scaling tests based on real NSs consisting of many VNFs (more than 10) over NFVIs with more resources, (b) using different types of virtualization technologies (Linux containers, Kubernetes, etc.) and (c) comparing the SONATA MANO with other solutions offered by key market vendors.