ETSI NFV: A decade of transformation

by Ranny Haiby

Over 10 years ago, the telecommunications industry was beginning an introspective, transformational analysis of its systems and business models. At the same time, cloud computing was coming into its own and challenging the dominant, CAPEX-based virtualization model by introducing demand-based, OPEX-priced consumption, which popularized Infrastructure-as-a-Service (IaaS). A handful of telecommunications companies, working with ETSI, published the ETSI NFV white paper, kicking off a decade of transformation.

How Time Flies When You’re Having Fun

The ETSI white paper led telecommunications companies to rethink how their infrastructure and consumer services could be delivered. Published in 2012 under the auspices of ETSI by several major global telecommunications companies, “Network Functions Virtualisation: An Introduction, Benefits, Enablers, Challenges & Call for Action” outlined how virtualization could benefit their collective businesses by ultimately shifting away from single-vendor solutions – often termed monolithic or black box – and toward a leaner, more transparent infrastructure stack based on commercial off-the-shelf and open source solutions.

This inspired and accelerated the adoption of network functions virtualization (NFV) as a path forward. NFV was embraced by the cloud and virtualization community as well as by almost all vendors serving the telecommunications industry. Another technology trend that started gaining momentum around the same time was the disaggregation of networking devices, separating packet switching/forwarding from the network control plane and enabling what came to be known as Software Defined Networking (SDN). One of the earliest efforts to help enable NFV/SDN was the OpenDaylight project (ODL), a project under The Linux Foundation Networking umbrella. ODL was a trailblazer and ended up tightly intertwined with NFV in its early days. Today, ODL still serves as an SDN controller for many vendors, who include it as part of their commercial offerings or support it as a standalone SDN solution.

Today, almost all global telecommunications companies make use of NFV and SDN in some form and are actively planning and/or deploying the next iteration of change with next generation technologies such as Cloud Native Network Functions and machine learning/artificial intelligence.

Telecommunications transformation and innovation

When working on innovations in open source software for networks, it is easy to lose track of time. There are always huge tasks at hand, challenges from inside our communities and the outside world, and exciting new technologies we want to adopt. That’s why, when the ETSI NFV workgroup published their “Evolving NFV towards the next decade” white paper, it made me stop and think about what we have done in the last decade and where we are going in the next.

First, let’s address the elephant in the room. The term “NFV” may have lost its appeal with some audiences, and I get that the word “virtualization” triggers references to hardware abstraction technologies that were superseded by more popular ones. But ignoring that for a moment, let’s use the term NFV in its broader meaning: the transition from monolithic hardware/software appliances in networks, to a disaggregated model where software may run on general purpose hardware, with some hardware acceleration where needed.

Open source foundations like The Linux Foundation Networking have been working side by side with Standards Definition Organizations (SDOs) over the last decade to provide software implementations of the standards that are widely available and can be used for further research or become part of commercial product offerings. This work started more than ten years ago with open source projects like ODL for the network control plane, FD.io for the data plane, ONAP for network management, and OPNFV (later merged into Anuket) for the telco infrastructure. In parallel, other open source communities were creating projects such as Kubernetes that provide the necessary platform for creating network functions that are not only decoupled from the hardware, but also follow the cloud native tenets of scalability and resilience. We recently saw how synergies can be created using these technologies, with the Linux Foundation’s Nephio project as one of the more recent examples.

The new ETSI white paper discusses several challenges for the network function transformation that had to be addressed by the NFV workgroup. I believe open source software played a key role in addressing these challenges, and will continue to do so in the next decade. Let’s see how we managed to address these challenges:

Declarative intent-driven network operations: The growing complexity of disaggregated network deployments and fine-grained customization in network development requires generic management solutions. Declarative, intent-driven operations, which shift the responsibility for fulfilling the desired state to the intent-based API producer (the management function), can simplify NFV network operations and improve efficiency and agility. Many of the management and orchestration open source projects under the Linux Foundation recognized this and provide interfaces that separate the declarative expression of intent from the imperative implementation. ONAP has an entire use case dedicated to implementing intent-based network orchestration that uses natural language processing (NLP) to interpret the network operator’s intent and translate it into resource orchestration actions. EMCO is another example of a project that takes placement intent requirements as input and delivers the optimal edge cloud placement as output.
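
To make the pattern concrete, here is a minimal, hypothetical sketch (in Python, not the actual ONAP or EMCO API): the operator expresses only the desired end state, and a reconciler owned by the management function derives the imperative actions needed to reach it. All field names are invented for illustration.

```python
# Minimal illustration of declarative, intent-driven operation (hypothetical,
# not the ONAP or EMCO API): the operator states only the desired end state,
# and a reconciler derives the imperative actions needed to reach it.

desired_state = {
    "network_function": "upf-edge",
    "replicas": 3,
    "placement": {"region": "eu-west", "latency_budget_ms": 10},
}

observed_state = {
    "network_function": "upf-edge",
    "replicas": 1,
    "placement": {"region": "eu-west", "latency_budget_ms": 10},
}


def reconcile(desired: dict, observed: dict) -> list[str]:
    """Translate the gap between desired and observed state into actions."""
    actions = []
    if desired["replicas"] != observed["replicas"]:
        actions.append(
            f"scale {desired['network_function']} "
            f"from {observed['replicas']} to {desired['replicas']} replicas"
        )
    if desired["placement"] != observed["placement"]:
        actions.append(f"re-place {desired['network_function']} per placement intent")
    return actions


print(reconcile(desired_state, observed_state))
# ['scale upf-edge from 1 to 3 replicas']
```

In practice, the intent and the observed state would come from the orchestrator’s inventory and telemetry rather than hard-coded dictionaries, but the division of labor is the same: the operator declares, the management function reconciles.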

The rise of containerization and heterogeneous infrastructure: OS virtualization technologies such as containers offer benefits like deployment speed and improved resource utilization. Efforts are being made across the industry to support both container and VM virtualization technologies, as well as newer technologies like unikernels. The Linux Foundation Networking projects were early to identify the need to support these hardware abstraction technologies and are now ready to address the needs of network functions regardless of which abstraction technology they use. Projects like FD.io are deeply integrated with the Kubernetes ecosystem, and ONAP has completed its evolution to fully support mixed network services that consist of physical, virtualized, and containerized network functions.
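
As a rough illustration of what supporting mixed abstraction technologies means for an orchestrator, the sketch below models a network service whose functions span physical, virtualized, and containerized deployments and dispatches each one accordingly. The descriptor fields are invented and do not reflect ONAP’s actual information model.

```python
from dataclasses import dataclass

# Illustrative-only model of a mixed network service: the orchestrator treats
# physical (PNF), virtualized (VNF), and containerized (CNF) functions
# uniformly, dispatching to the appropriate infrastructure per function type.
# Field names are hypothetical and do not reflect ONAP's information model.

@dataclass
class NetworkFunction:
    name: str
    kind: str    # "PNF", "VNF", or "CNF"
    target: str  # e.g. a device address, an OpenStack tenant, or a K8s cluster


service = [
    NetworkFunction("cell-site-router", "PNF", "device:10.0.0.1"),
    NetworkFunction("vFirewall", "VNF", "openstack:tenant-a"),
    NetworkFunction("upf", "CNF", "k8s:edge-cluster-1"),
]


def deploy(nf: NetworkFunction) -> str:
    """Pick a deployment path based on the abstraction technology in use."""
    handlers = {
        "PNF": f"configure existing device at {nf.target}",
        "VNF": f"instantiate VM image on {nf.target}",
        "CNF": f"apply container manifests to {nf.target}",
    }
    return f"{nf.name}: {handlers[nf.kind]}"


for nf in service:
    print(deploy(nf))
```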

Autonomous networking, automation, and unified/sole data source: Autonomous network technology aims to enhance automatic and intelligent network operations, administration, and maintenance (OAM). The industry is adopting automation, AI, machine learning, and other technologies to improve system efficiency and handle fault alarms. Data management plays a crucial role in automation, and there is a trend towards data meshing to ensure accurate, real-time, and reliable data for decision-making. The Linux Foundation ONAP project has always focused on control loop automation for the network and on managing data generated by network functions. In fact, the ONAP VES (VNF Event Streaming) format was adopted by 3GPP SA WG5 as the standard method for performance and fault monitoring. ONAP has had a robust infrastructure for moving and storing data since its inception, and it keeps evolving to provide richer functionality that forms the basis for AI-driven network management.
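
The snippet below builds a simplified fault event loosely modeled on the VES common event format, just to show the idea of structured, streamable telemetry feeding control loops; field names and required fields vary across VES versions, so treat it as illustrative rather than spec-exact.

```python
import json
import time

# A simplified fault event loosely modeled on the ONAP VES common event format.
# Field names and required fields vary between VES versions; this illustrates
# the idea (structured, streamable events), not a spec-exact payload.

now_us = int(time.time() * 1_000_000)

event = {
    "event": {
        "commonEventHeader": {
            "domain": "fault",
            "eventName": "Fault_vFirewall_LinkDown",
            "eventId": "fault0001",
            "sequence": 1,
            "priority": "High",
            "reportingEntityName": "vfw-monitor",
            "sourceName": "vFirewall-01",
            "startEpochMicrosec": now_us,
            "lastEpochMicrosec": now_us,
        },
        "faultFields": {
            "alarmCondition": "linkDown",
            "eventSeverity": "CRITICAL",
            "specificProblem": "eth0 link lost",
            "vfStatus": "Active",
        },
    }
}

# In a real deployment this JSON would be posted to a VES collector, from which
# analytics and control-loop applications consume it.
print(json.dumps(event, indent=2))
```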

Fragmentation of telco cloud implementations: Telco cloud implementations face challenges due to the fragmentation of technologies and standards. The IT industry and telecom network operators have different approaches to interoperability and ecosystem development. The Linux Foundation Anuket project has been working to address this particular issue and provides two types of deliverables that can be used by anyone designing or building telco infrastructure. The first is a set of specifications for how to build the infrastructure. The second is a set of tools and test plans that can verify that the telco infrastructure can support the needs of the network functions. In recent years we have witnessed this project evolve from the virtual machine-based architecture that was until recently prevalent with network operators, to one based on Kubernetes that addresses modern cloud native network functions. This is a good indication that the project is ready to tackle the ever-changing landscape of telco infrastructure technologies.
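
As a toy example of the second kind of deliverable, the sketch below checks an invented infrastructure profile against requirements declared by a class of network functions; the fields and thresholds are made up for illustration and are not taken from Anuket’s actual reference conformance suite.

```python
# Toy conformance check in the spirit of Anuket's verification deliverables:
# compare an infrastructure profile against requirements declared by a class
# of network functions. Fields and thresholds are invented for illustration.

infra_profile = {
    "kubernetes_version": (1, 27),
    "hugepages_enabled": True,
    "sriov_available": False,
    "cpu_pinning": True,
}

requirements = {
    "min_kubernetes_version": (1, 25),
    "hugepages_enabled": True,
    "sriov_available": True,   # e.g. needed by a user-plane function
    "cpu_pinning": True,
}


def verify(profile: dict, reqs: dict) -> list[str]:
    """Return a list of human-readable failures; an empty list means conformant."""
    failures = []
    if profile["kubernetes_version"] < reqs["min_kubernetes_version"]:
        failures.append("Kubernetes version too old")
    for key in ("hugepages_enabled", "sriov_available", "cpu_pinning"):
        if reqs[key] and not profile[key]:
            failures.append(f"missing capability: {key}")
    return failures


print(verify(infra_profile, requirements) or "conformant")
# ['missing capability: sriov_available']
```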

Business sustainability versus rapid release evolution of open source: While standards focus on architecture design and interoperability, open source projects prioritize code development. This allows open source projects to quickly deliver proofs of concept (PoCs) that implement use cases specified by the standards. On the other hand, it may prove challenging to integrate fast-changing open source software into production networks. The Linux Foundation Networking has had a good track record over the last decade of working alongside SDOs, making sure the open source projects move fast, but not so fast that they become incompatible with the standards. In some cases the open source projects adopt and implement mature standard specifications. In other cases, an open source project comes up with an early implementation that serves as inspiration to SDOs, like ETSI, and is later replaced with an implementation that follows the resulting standard.

Hyper-distributed & fully-interconnected edge deployments: As service providers’ network deployments extend their coverage, providing more connectivity and services at the edge has become increasingly important, and network hardware/software decoupling is rapidly extending beyond core network use cases to the edge and access networks. The Linux Foundation Networking and The Linux Foundation Edge communities have been working closely together to create the necessary synergies between generic edge computing technologies and the unique needs of network functions (such as low latency, high throughput, and reliability). Blueprints like the Public Cloud Edge Interface (PCEI) under the LF Edge Akraino project address this unique intersection of network and edge computing. The blueprint won a recent ETSI NFV hackathon, demonstrating once again the strong collaboration between open source communities and SDOs.

What’s next?

Looking towards the next decade of the evolution of network functions, I am optimistic about the strengthened collaboration between open source communities and SDOs. Today, there seems to be little doubt about the role that open source software plays in the advancement of communications standards. The next decade will probably continue to see more examples of open source projects developed alongside standards, providing reference implementations and serving as a feedback loop for improving the standard specifications.

There are numerous opportunities to contribute to and collaborate on these projects and standards. One way is to join The Linux Foundation Networking and select a project that suits your interests. And, of course, be sure to follow The Linux Foundation Networking on Twitter and LinkedIn, and bookmark our YouTube channel to watch all recorded sessions from LF Networking events.
