
ONAP-as-a-Service: beautiful challenge or crazy idea?

Last updated: Jan 25, 2020


INTRODUCTION

ONAP, now part of the Linux Foundation, has its origins in ECOMP (AT&T) combined with OPEN-O. Since its first official release, Amsterdam, ONAP has generated considerable interest and extremely high expectations in the market, on multiple accounts. Subsequent releases, from Amsterdam to Casablanca so far, have built a general consensus and attracted major market interest.




Indeed, market fragmentation across proprietary services, the danger of falling into lock-in strategies, and the rapid emergence and instability of Open Source innovations all created strong demand for a structural solution. This new approach should be as stable as any proprietary solution while remaining Open Source based. In the recent past, the ETSI MANO concept was considered from the start as an incomplete piece by the majority of professionals in our industry, the main factor being that an SDN Controller was absent from its first iteration.


ETSI MANO approach


In recent years, the market has seen a plethora of Service Orchestration solutions of multiple natures, each bringing different savoir-faire and positioning:

Proprietary solutions: Juniper's Contrail; Cisco NSO; Huawei SDO & AOS; Ciena Blue Planet; ADVA Ensemble software suite; Amdocs NCSO; ZTE NFVO Cloud Studio; and more recently Mavenir CloudRange, F5 SSL Orchestrator, etc.

Open Source MANO players and fully Open Source options: solutions that bridge a proprietary base with Open Source innovations, such as Cloudify or RIFT.io, and fully Open Source options such as OpenStack Tacker, OpenBaton, Kubernetes, Docker, Mesosphere and others.


The multiplication of orchestrator solutions has made the monetization of virtualized services (VNFs/CNFs) an extreme challenge, given the inherently multi-vendor scheme that all players in the industry (CSPs and enterprises) have been leveraging for a long time. I will leave aside the question of whitebox and NFVi solutions, which I covered in my past articles. ONAP therefore represents the best path of convergence on multiple levels: Multi-Cloud/VIM; multi-vendor VNFs & CNFs; perspectives towards Intent-based Orchestration & Automation; proactive monitoring with AI, DL and ML; etc.


An announcement from Accenture caught my attention recently: they officially stated that they will be launching an ONAP-as-a-Service platform for service providers and enterprises. How successful can this be, and which service perspectives can we expect from it?


Let’s explore it, shall we…

Please also visit the great article about ONAP and Open Source enabling the path to 5G written by the unique Alla Goldner here.


ONAP, IS IT REALLY CHANGING THE MARKET?

The market is in constant evolution, with openness expectations colliding with the stability promises of legacy/proprietary solutions. How does ONAP respond to challenges such as multi-vendor VNF solutions, and do so quickly and efficiently enough to address the need for market differentiation?


Multi-VNF/CNF vendor requirements

Currently, to monetize virtually-delivered services, new generations of service infrastructure leverage many components for SDN/NFV-related functions, among them the Service Orchestration function, aka NFVO (Network Functions Virtualization Orchestrator). The market has seen the creation and release of countless orchestrator (NFVO) solutions, notably:


Proprietary vendor NFVO solutions: support the vendor's own set of VNFs and those of key partners. They are generally seen as monolithic solutions, often suspected of lock-in strategies, and they de facto limit future options for introducing Open Source VNFs/CNFs or innovative VNFs/CNFs from lesser-known vendors.


OSM-based NFVO solutions mixing a proprietary base with Open Source innovations: support as many VNF solutions as possible, but require significant time and investment to do so. They guarantee significant openness and the possibility to introduce new service concepts rapidly. While driving significant interest and market traction, they can often be trapped by a mismatch between the efforts required and the revenues immediately generated.


VNF onboarding has been a difficult market subject, and the design-time environment of ONAP addresses just that. It provides the tools required to onboard VNFs/CNFs and create service blueprints, but also to create and validate new policies. These elements should significantly accelerate the onboarding of new virtualized or containerized services with limited effort. We cannot forget that VNF onboarding is one of the major obstacles that VNF vendors use to justify pushing proprietary solutions offering accelerated access to orchestration/automation.


To simplify access to orchestration, TOSCA (Topology and Orchestration Specification for Cloud Applications) provides a standard methodology for model-based orchestration. TOSCA was introduced when the merged ECOMP (AT&T) and OPEN-O architecture was proposed, as a means to simplify the onboarding of simple and even complex VNFs. This is accomplished by using TOSCA models to describe the VNF topology, interfaces, infrastructure requirements, telemetry and lifecycle events, with everything defined in a TOSCA template that is not limited to a given VIM or cloud infrastructure.
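
To make the model-based idea more concrete, here is a minimal sketch of a TOSCA-style template built programmatically in Python. The node name, description and property values are my own illustrative assumptions, not an actual ONAP SDC or ETSI SOL001 descriptor; only tosca.nodes.Compute and its host capability come from the TOSCA Simple Profile.

# Minimal sketch of a TOSCA-style template for a single VNF, built as a
# Python dict and dumped to YAML. Names and values are illustrative
# assumptions, not an actual ONAP SDC / ETSI SOL001 descriptor.
import yaml  # PyYAML

vnf_template = {
    "tosca_definitions_version": "tosca_simple_yaml_1_2",
    "description": "Hypothetical vRouter VNF descriptor",
    "topology_template": {
        "node_templates": {
            "vrouter": {
                "type": "tosca.nodes.Compute",  # normative TOSCA base type
                "capabilities": {
                    "host": {
                        "properties": {"num_cpus": 4, "mem_size": "8 GB"}
                    }
                },
            }
        }
    },
}

print(yaml.safe_dump(vnf_template, sort_keys=False))

The same template structure could carry interfaces, telemetry hooks and lifecycle events, which is precisely what keeps it independent of any given VIM or cloud infrastructure.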



TOSCA is an open standard, meant to ensure that operators or vendors are not locked in with a given VNFM or Cloud/VIM product. The TOSCA standard is in fact flexible enough to let administrators of an ONAP platform customize deployment variants to address new models, different performance profiles per use case, and targeted applications such as NaaS or even disaggregated mobile infrastructure. Last but not least, the flexibility of TOSCA makes it easier to modify an existing service chain or to create a new one from an existing model. This combination of elements should allow faster integration/onboarding of any application considered relevant, whatever its origin.


Multi-Cloud or Multi-VIM requirements

New models of deployment and operation, such as disaggregated networks and Edge-based solutions, push for multi-Cloud/VIM solutions to address harsh, volatile requirements and the need for customization. How does ONAP respond to this question?

Edge solutions have been multiplying lately, both in interest and in sophistication, due to enhanced capabilities at the NFVi and VIM levels but also due to the variability of the requirements the market is facing. Notably, requirements at the DC/CO are addressed in certain ways but considered differently at Edge locations (PoPs or aggregation sites), due to different constraints on power, space availability and service density.


The acceleration of mobile infrastructure transformation dictates the virtualization/containerization of the RAN and, further down the road, will allow the segregation of BBU and RRU functions across the network without impairing bandwidth, packet transit delay or latency requirements for mobile backhaul, thereby ensuring a future for Cloud-RAN. In a similar fashion, cable operators need to evolve naturally from D-CCAP (Distributed CCAP) to vCCAP.

In the same bracket sit the requirements for SD-WAN, which could either be a complete overlay service infrastructure supported at the customer premises, or a distributed overlay spanning several places in the service infrastructure (customer premises, Edge locations) to facilitate stitching with legacy services such as MPLS and Carrier Ethernet/EVPN, using underlay BGP peering or a type 10A NNI with back-to-back VRFs over VLAN.


As we can witness, convergence towards virtualization and containerization is a contagious requirement. However, end customers, CSPs and MNOs have different service requirements, mainly driven by the nature of the applications or services to be supported or leveraged. These differ greatly and induce a very volatile set of requirements: real-time-sensitive applications, Artificial Intelligence / Machine Learning / Deep Learning, but also more typical connectivity-based functions such as virtual border routers, virtual MPLS, vSBC and others.

The market has developed multiple ways to address service virtualization/containerization; depending on their intentions, enterprises, service providers and startups can now leverage NFVi (Network Functions Virtualization infrastructure) / VIM (Virtualized Infrastructure Manager) offerings, aka Cloud Service Software, which come in different flavors.


DataCenter / Central Office oriented NFVi & VIM:

Red Hat (OpenStack) / Wind River Titanium Cloud Core (OpenStack) / Mirantis (OpenStack) / Canonical (OpenStack) / ENEA (OpenStack) / ADVA Ensemble Conductor (OpenStack) / Sardina (OpenStack) / VIO, VMware Integrated OpenStack (OpenStack) / Lenovo ThinkCloud (OpenStack) / ZTE TECS (OpenStack) / Huawei FusionSphere (OpenStack) / VMware ESXi (VMware)


Edge & Distributed Networks oriented NFVi & VIM :

Red Hat OpenShift (Kubernetes & Docker) / Wind River Titanium Cloud Edge & Edge SX (low-footprint OpenStack & containers in VMs) / ENEA NFV Access (lightweight NFVi for VNFs & CNFs) / ADVA Ensemble Connector (low-footprint OpenStack).


Customer Premises / Services domain demarcation NFVi & VIM:

ENEA NFV Access & uCPE Manager (Lightweight NFVi & VIM for VNFs & CNFs) / Wind River Titanium Cloud Edge SX (Lightweight NFVi) / ADVA Ensemble Connector (Lightweight NFVi and VIM).


Therefore, to accommodate the need for multi-cloud perspectives and to embrace Edge service infrastructures, ONAP provides the Multi-VIM component. Indeed, facing several service requirements means that no single NFVi solution can capture all requirements at once. Supporting various versions of Cloud Service Software (NFVi & VIM) brings several challenges, especially in disaggregated networks, notably:


* Hundreds of small-scale data centers accommodating the distribution of applications/services at PoPs (Points of Presence).


* Dynamic changes during the NFVi/VIM lifecycle, with each evolving differently from the others.


* Automating the discovery/representation of infrastructure resources, especially for Edge and disaggregated network concepts.


* Automating both the onboarding of cloud infrastructures and related service use cases leveraging TOSCA capabilities.


* Aggregating FCAPS (Fault, Configuration, Accounting, Performance & Security) data in near real time to maintain an accurate view and appreciation of the complete service infrastructure composition (PNFs, VNFs, CNFs, NFVis & VIMs, and the complete physical infrastructure that supports all these elements).


To respond to these five important elements, the Multi-VIM component in ONAP uses the run-time environment to collaborate and exchange information with other important components. To support multiple Cloud Software options, the Multi-VIM component provides several plugins and southbound APIs meant to interface with any NFVi/VIM solution, to facilitate the orchestration of VNFs/CNFs, and to allow the stitching between PNFs and virtual services using Service Function Chaining.

MULTI-VIM plugins and APIs.
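
As an illustration of how such a plugin layer can be structured, here is a minimal Python sketch of the pattern: one adapter per NFVi/VIM flavor behind a common interface. The class and method names are my own assumptions, not the actual ONAP Multi-Cloud code base.

# Minimal sketch of a southbound plugin pattern, in the spirit of
# ONAP Multi-VIM/Multi-Cloud. Names are illustrative assumptions.
from abc import ABC, abstractmethod

class VimPlugin(ABC):
    """One southbound adapter per NFVi/VIM flavor (OpenStack, VMware, Kubernetes...)."""

    @abstractmethod
    def instantiate(self, vnf_descriptor: dict) -> str:
        """Create the workload on the target VIM and return an instance id."""

    @abstractmethod
    def health(self) -> dict:
        """Return FCAPS-style status for the managed infrastructure."""

class OpenStackPlugin(VimPlugin):
    def instantiate(self, vnf_descriptor: dict) -> str:
        # A real plugin would call the OpenStack Heat/Nova APIs here.
        return "openstack-instance-001"

    def health(self) -> dict:
        return {"vim": "openstack", "status": "up"}

# The orchestrator selects a plugin per cloud region at run time.
PLUGINS: dict[str, VimPlugin] = {"openstack-region-1": OpenStackPlugin()}

print(PLUGINS["openstack-region-1"].instantiate({"name": "vRouter"}))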


More importantly, collaboration between Multi-VIM and other important components of the run-time environment will bring multiple benefits to users. Collaboration with A&AI will be key to reporting the availability of NFVis and VIMs, as well as keeping audit capabilities and historical data of the physical and application inventory available. At the same time, collaboration with the DCAE component will be critical to collecting data about the health (performance, usage and configuration) of the managed infrastructure. Naturally, all this key information will be crucial to keep the Service Orchestration component informed, so that smart decisions can be made about how and where to orchestrate new services and applications on the managed infrastructure below.


Obviously, the run-time environment is completed by northbound APIs to exchange information with other key elements: the OSS/BSS and the customer portal. This would justify re-using the MEF LSO (MEF 55) set of APIs, notably the Allegro API between Service Orchestration and the customer portal, or the Legato API towards the OSS/BSS function. For southbound operations, MEF LSO also brings the Presto API, while east-west operations are supported by the Interlude API.
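
To give a feel for what such a northbound exchange could look like, here is a hedged sketch of a Legato-style service order sent by an OSS/BSS towards the orchestrator. The host, path and payload fields are hypothetical placeholders, not an actual MEF 55 endpoint definition.

# Sketch of a northbound, Legato-style service order. URL, path and
# payload are hypothetical placeholders, not a real MEF 55 endpoint.
import requests

ORCHESTRATOR = "https://onap.example.net"  # assumed address

order = {
    "serviceName": "sd-wan-site",
    "action": "create",
    "parameters": {"siteId": "paris-01", "bandwidthMbps": 100},
}

resp = requests.post(
    f"{ORCHESTRATOR}/legato/serviceOrder",  # assumed Legato-style path
    json=order,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. an order id the OSS/BSS can poll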




Legacy services support in ONAP


Since the Beijing release, ONAP and its run-time environment have introduced support for Physical Network Functions (PNFs). Basically, the main targeted actions are to reboot remote physical equipment, migrate traffic using automated legacy service tunnels (IPv4/IPv6/MPLS/PWE3, etc.), allow the creation, change and removal of service tunnels, provide cloud-agnostic capabilities when whiteboxes are used to support VNFs or CNFs, and naturally take advantage of YANG/NETCONF models through SDN and ODL-related capabilities.
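
The YANG/NETCONF angle can be illustrated with a short sketch using the ncclient Python library, which speaks NETCONF directly; in ONAP this dialogue would normally be mediated by SDN-C/ODL rather than scripted by hand. Host and credentials below are placeholders.

# Sketch of a direct NETCONF exchange with a PNF using ncclient.
from ncclient import manager

with manager.connect(
    host="pnf.example.net",  # placeholder PNF management address
    port=830,
    username="admin",        # placeholder credentials
    password="admin",
    hostkey_verify=False,
) as m:
    # Retrieve the PNF's running configuration as XML over NETCONF.
    print(m.get_config(source="running"))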

The precious hardware/software separation that fueled the inception of SDN/NFV has brought another aspect of growing importance to light: the NOS (Network Operating System). As an ex-CCIE engineer, I belong to a generation that was nurtured, in our young years in the industry, by technical and routing environments such as Cisco IOS, Juniper Junos, Huawei VRP, and Alcatel-Lucent (Nokia Nuage Networks since 2012).

Recent activities at TIP (Telecom Infra Project), such as the Disaggregated Cell Site Gateway project, have highlighted a natural interest in identifying open NOS solutions that could support both legacy services (inherent IPv4 & IPv6 functions, PWE3 technologies and related MPLS-TE/TP capabilities) and more recent innovations (EVPN, MEF-based Carrier Ethernet services, mobile backhaul capabilities with Sync-E and 1588v2, etc.).

But an open NOS would also be required to support next-generation capabilities, notably YANG & NETCONF models, making it possible for SDN controllers to leverage all derived services. Various NOS options exist in the market, the most obvious coming from Cisco, Juniper and Huawei, which have occupied our industry for a long time; others like Mellanox, Metaswitch, OPX aka OpenSwitch (an Open Source option) and IP Infusion seem like great options, but still have some distance to make up against the notable players cited previously.


Innovative players like ADVA Optical Networking, with its Ensemble software suite, have decided to couple NOS capabilities directly with their own flavor of OVS (Open vSwitch), thereby integrating Carrier Ethernet features directly at the core of the NFVi and VIM (Neutron OVS plugin) and benefiting from improved performance with DPDK virtual acceleration. This solution has been implemented within their own version of OpenStack, called ADVA Ensemble Connector.


More recently, ADVA has demonstrated even greater innovative flair in addressing the TIP requirements for Disaggregated Cell Site Gateways. Indeed, ADVA has released its own disaggregated NOS, called Ensemble Activator, which will provide countless possibilities for bringing packet-based technologies to ODTN, copper, or any relevant place where legacy and new generations of services need to cohabit. ONAP provides all the components to leverage these options within its run-time environment:


The Multi-VIM component provides capabilities to support any environment likely to virtualize/host the NOS software (PNFs, virtual switches, etc.).

The SDN-C and APP-C components both leverage OpenDaylight as the standard mechanism for SDN capabilities.


It should be noted that in the Beijing release, the SDN-C and APP-C components worked in a pseudo-hierarchical mode, both based on ODL but separated by the view that SDN-C would control layers 0-3 and APP-C would control layers 4-7.



Now, with the Casablanca release, this notion of a pseudo-hierarchy of operations between the two has been revoked. SDN-C controls the networking infrastructure, i.e. the NOS and PNF resources, while APP-C controls the lifecycle of VNFs and CNFs and perhaps elasticity capabilities. I have even heard that APP-C would have a role in controlling PNFs for reboots, and some level of control over modular chassis to manage individual components remotely, thereby leveraging Rack Scale Design (RSD) and built-in Redfish management APIs.

Naturally, management capabilities will be very important: the data collection framework (DCAE) would handle PNFs and support SNMP and bulk performance-management data files, with pulling capabilities such as Prometheus, Barometer and PNDA, but also analytics capabilities with the Elastic and Salt stacks.
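
As a small illustration of the pulling capabilities mentioned above, here is a sketch of an instant query against the Prometheus HTTP API, the kind of collection step a DCAE pipeline could sit on top of. The API path is the real Prometheus one; the server address and metric name are assumptions.

# Sketch of a Prometheus-style metrics pull, standing in for the kind
# of collection DCAE performs. Server address and metric are assumed.
import requests

PROMETHEUS = "http://prometheus.example.net:9090"  # assumed address

resp = requests.get(
    f"{PROMETHEUS}/api/v1/query",                # real Prometheus endpoint
    params={"query": "node_cpu_seconds_total"},  # assumed metric name
    timeout=10,
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])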

DOES ONAP NEGATE OPEN SOURCE MANO?

A recurring question about ONAP concerns the other MANO deployments that have been taking place all over the world, with Cisco NSO and ACI, Juniper Contrail, ADVA Ensemble software, Ericsson Orchestrator, Huawei AOS or NetMatrix (or whatever they call it these days), etc. Are these existing deployments simply going to lose relevance to the benefit of ONAP-based deployments?


On the contrary: this is exactly what the VFC (Virtual Function Controller) component is there for. People often get confused about the difference between the APP-C component (Casablanca release) and the VFC component. Basically, APP-C manages the lifecycle of VNFs/CNFs and PNFs that are deployed, managed and orchestrated/automated natively within the ONAP framework. Alternatively, VFC provides an intermediate layer of correspondence for service infrastructures that are deployed, managed and orchestrated under a viable ETSI MANO-compliant stack but attached to the ONAP NFVO. In such a case, ONAP acts dually as a MANO-compliant NFVO and as an E2E (End-to-End, or Multi-Service Domain) Orchestrator. Additionally, the VFC component can either interface with one or multiple VNFMs, or eventually provide a generic VNFM implementation to sustain the lifecycle of already-deployed VNFs/CNFs.

As a result, when the VFC component is used, ONAP simultaneously provides a converging answer for VNFs/CNFs residing in an ETSI MANO-defined infrastructure, with inward ties to other ONAP components (notably Multi-VIM, SDN-C & APP-C, and DCAE), while still maintaining the relevance of ONAP as an E2E orchestrator through the Service Orchestrator component.


WHAT ARE THE FUTURE POSSIBILITIES WITH ONAP?

The future possibilities for ONAP are so numerous that it would take several pages to list them all. However, in the perspective of monetizing ONAP-as-a-Service, I believe it is important to offer some short perspectives on the things that would make ONAP a viable multi-tenant platform with an acceptable level of consistency and stability. Naturally, these subjects can be debated and questioned.


About Automatic Multi-Cloud Orchestration

ONAP is meant to be a managed orchestration platform that fully monetizes the value brought by virtual services in any of their forms, VNF- or CNF-based. However, with MEC and disaggregated networks, where services and applications are distributed across the operator/enterprise service infrastructure, more flexible Cloud/VIM implementations will be required to automate the duplication of a given VNF's instantiation. Typically, the ONAP Service Orchestrator could push the instantiation of a vRouter: the request would only be triggered towards the DC/CO Cloud-VIM and then automatically replicated to any other PoP-based Cloud-VIM implementations, down to even customer locations. Wind River Titanium Cloud and Red Hat (OpenStack) implementations already provide such options and possibilities.
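
A toy sketch of that fan-out pattern follows: one instantiation request hits the central DC/CO VIM and is then replicated towards the edge VIMs. Region names and the instantiate() helper are purely illustrative; a real implementation would call each region's VIM API.

# Toy sketch of central-to-edge replication of a VNF instantiation.
from concurrent.futures import ThreadPoolExecutor

CENTRAL_VIM = "dc-co-region"                      # assumed region names
EDGE_VIMS = ["pop-east", "pop-west", "customer-site-42"]

def instantiate(region: str, vnf: str) -> str:
    # A real implementation would call this region's VIM API.
    return f"{vnf} instantiated in {region}"

def fan_out(vnf: str) -> list[str]:
    results = [instantiate(CENTRAL_VIM, vnf)]     # trigger the DC/CO first
    with ThreadPoolExecutor() as pool:            # then replicate to the edges
        results += list(pool.map(lambda region: instantiate(region, vnf), EDGE_VIMS))
    return results

for line in fan_out("vRouter"):
    print(line)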


About Closed-Loop Orchestration/Automation

As depicted just above, new ways to implement and sustain network slicing will largely leverage what network disaggregation has best to offer, notably the requirement to distribute not only the applications but also the service intelligence (orchestration/SDN capabilities) alongside them. This will require constant, real-time interaction between the central cloud and the edge cloud, which in turn requires the central and edge orchestration capabilities to operate as an orchestrator federation. This federation will obviously monitor not only itself but also the deployed applications, using FCAPS capabilities to ensure proactive monitoring and service assurance. Most tools to support these FCAPS capabilities exist today, such as standard telemetry with EMS/NMS, PNDA & Barometer through OPNFV, the Elastic Stack and the Salt Stack, and most of these HA options and capabilities have been reused in various SDO programs such as StarlingX and Akraino from the Linux Foundation.
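
A closed loop can be reduced to a very small skeleton: collect, compare against a policy threshold, act. The sketch below is my own simplification of the DCAE/Policy/controller roles, with random data standing in for real FCAPS telemetry.

# Minimal closed-loop sketch: poll a health metric, compare against a
# policy threshold, trigger a healing/scaling action. All placeholders.
import random
import time

CPU_THRESHOLD = 0.85  # assumed policy threshold

def collect_cpu_load() -> float:
    # Stand-in for a DCAE-style collector; returns a load ratio in [0, 1].
    return random.random()

def heal(load: float) -> None:
    # Stand-in for a controller action (scale out, restart, migrate...).
    print(f"load {load:.2f} above threshold -> triggering scale-out")

for _ in range(5):  # a few iterations of the control loop
    load = collect_cpu_load()
    if load > CPU_THRESHOLD:
        heal(load)
    else:
        print(f"load {load:.2f} ok")
    time.sleep(0.1)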


About Artificial Intelligence

To me, it looks fairly obvious that ONAP is leading us towards intent-based networking and orchestration. Tools like Prometheus, PNDA, Barometer, TensorFlow, the Elastic Stack and the Salt Stack, taking advantage of Rack Scale Design and Redfish management APIs and supported by proper YANG and NETCONF models, will provide the right means to proactively monitor and zero-touch provision at all layers, in hardware and software. In hardware, this would include everything from BMCs, IPMI and switches to CPU, memory and storage. In software, any application or function becomes a candidate for monitoring and zero-touch provisioning: SD-WAN, security, service chaining, vMPLS, NOS capabilities, traffic engineering, and even PNFs.


Now comes the challenge of supporting and digesting this massive, fluctuating influx of information, especially when hardware differs so much and software behavior varies massively across the versions of Cloud-VIM implementations that support the applications. Both the A&AI and DCAE components would largely benefit from Machine Learning & Deep Learning, and Service Orchestration would leverage artificial intelligence to ensure smart decisions as a result of the intents expressed through the ONAP multi-tenant portal.
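
As a hint of what such pre-digestion could look like, here is a toy anomaly detector flagging telemetry samples that sit far from the mean, using a z-score. This is my own illustrative assumption, not an algorithm from A&AI or DCAE.

# Toy z-score anomaly detection over fake load samples; the outlier
# at 0.91 gets flagged. Data and threshold are assumptions.
import numpy as np

telemetry = np.array([0.31, 0.29, 0.33, 0.30, 0.32, 0.91, 0.28])

z_scores = (telemetry - telemetry.mean()) / telemetry.std()

for sample, z in zip(telemetry, z_scores):
    flag = "ANOMALY" if abs(z) > 2.0 else "ok"
    print(f"load={sample:.2f}  z={z:+.2f}  {flag}")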


IS MAKING ONAP AN AS-A-SERVICE THE BEST OPTION?


I totally believe in this ONAP initiative; I actually see all the elements needed to make ONAP a successful multi-tenant and true Multi-Service Domain Orchestration platform. However, what a challenge!


We cannot neglect that ONAP is an Open Source innovation, resulting from a proprietary solution (ECOMP) merged with OPEN-O. In 2018 alone, ONAP saw substantial change across the Amsterdam, Beijing and, lately, Casablanca releases. Yes, the market has largely embraced the potential and the perspectives brought by ONAP. However, the complete Open Source construct is in constant evolution, and ONAP is as directly concerned as the rest of its surrounding environment.


Let's just remind ourselves what the main elements of the ONAP run-time are:

* Multi-VIM, which supports a set of plugins for OpenStack, VMware, Kubernetes & Docker, all evolving at their own pace and not in alignment.


* SDN-C & APP-C, which are based on Opendaylight (ODL), currently Fluorine release and evolving...


* DCAE & AA&I are also depending on lower layers for management and analytics purpose, therefore, APIs are subject of evolution to match improved lower layer capabilities...

Etc…


* VFC, which will be the intermediate layer directly linking ETSI MANO deployments to ONAP's benefits, maintaining either a transitional layer or a hierarchical layer for disaggregated networks. It shall follow the ETSI NFV ISG.


Without listing everything in ONAP, it looks rather intimidating to consider that a moving environment of VNFs, CNFs, ESXi hosts, OpenStack releases, Kubernetes releases, ODL releases and the relevant API structures needs to be maintained in a rapid, business-grade, industrial fashion. This would require a structure stable enough to maintain behavioral consistency and adaptability, to ensure constant value for money and recurrent monetization of virtual services. But most important of all is to remain relevant to all the very different customers of such a SaaS platform.


CAN ONAP BE SUSTAINED WITHOUT A PROPRIETARY TOUCH?


From a personal viewpoint, leveraging Open Source innovation is one thing, but maintaining consistency and stability on top of constantly evolving source code is extremely challenging, especially when we consider the Service Orchestration component, which resides at the crossroads of all ONAP components and is certainly the main element of the service intelligence.


It is not my place (or skill set) to compare source code (lol), but players like ADVA Optical Networking's Ensemble Orchestrator, Cloudify Manager, Juniper Contrail, Ericsson Orchestrator or Amdocs Orchestrator would provide a very solid code base for an ONAP Service Orchestration component. Such a code base could be maintained more easily alongside the other surrounding components of interest for a stable ONAP-as-a-Service.

Without a proprietary touch, an industrial, multi-tenant oriented ONAP-as-a-Service would, in my view, have little chance of being maintained in a viable operational state and providing the satisfactory levels of consistency and stability that will be expected.


In such a case, it would simply mean that the Service Orchestration component would be built on a proprietary base, while all surrounding ONAP components would be directly derived from Open Source, sustaining an interesting level of openness, while keeping in mind that stability and consistency are the most important elements of an as-a-service solution.


CONCLUSION

From my perspective, Accenture has recently dropped a mini-bomb by stating their intent to fully leverage an Open Source initiative of the magnitude of ONAP as a Software-as-a-Service platform. The potential is enormous for enterprises, startups driving AI/ML/DL applications, education, small service providers, MVNOs, virtual service providers, etc. I personally salute the enthusiasm, but also the innovative path they have embarked on.


Accenture joined the Linux Foundation in 2017, and two years later they announced this SaaS based upon ONAP. This speaks to the mid- and long-term analysis that must have been produced before reaching such a conclusion. That decision-making process was surely influenced largely, but not only, by the massive market consensus that ONAP has generated: as I mentioned in the paragraphs above, ONAP has evolved tremendously between its initial release (Amsterdam) and its current release (Casablanca).


Such an engagement looks to me like the most interesting challenge I have seen in a long time, so congrats to the Accenture team. Having said that, I have covered over these nine pages the challenges that ONAP brings to the table. It is also important to appreciate that the Linux Foundation and ONAP are not an isolated SDO-led project; they are part of a much larger community.


The Linux Foundation has already started liaising and converging with other very influential SDOs such as the MEF and ETSI. Nonetheless, as I mentioned in a previous article, TIP (Telecom Infra Project) is another steadily growing SDO force that will drive significant innovations in network slicing, network disaggregation, Open RAN, etc.

To drive towards a successful outcome and respond to market requirements, Accenture should make its utmost effort to ensure stability and consistency, and to leverage important ongoing community initiatives such as those from ETSI, MEF 55 (aka Lifecycle Service Orchestration) with all the APIs already defined or currently in definition, and TIP (Telecom Infra Project), without forgetting the other LF projects.


Written by Luc-Yves Pagal Vinette
