PALO ALTO, Calif. -- Carrier Network Virtualization -- As communications service providers move more virtualized functions to the cloud and become more focused on applications, containers are expected to play a leading role.
The impact of containers and the challenges for using them were discussed during a panel at last week's Carrier Network Virtualization conference in Palo Alto, Calif.
Service providers, such as AT&T Inc. (NYSE: T) and Verizon Communications Inc. (NYSE: VZ), are looking at containers to speed up their virtualization efforts by deploying them instead of virtual machines. Currently, virtual functions are implemented on a single server with each function supported by a virtual machine (VM). The server has its own operating system (OS), and each VM comes with a separate OS. Layered between the VM and server is another software layer, which is the hypervisor.
While this sort of virtualization achieves the task of changing hardware-based functions into software-based functions, it adds complexity by creating additional layers. All of those software layers take up a large chunk of the processing power, and diminish the benefits of virtualization.
Containers, which are mainly Docker at this point, are dedicated software packages that can be deployed where a service provider, or end-user customer, needs them. Container software includes the runtime, system tools, and libraries, but not an operating system. Containers, when used in conjunction with a DevOps approach, can speed up the testing and deployment times of applications.
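A minimal Dockerfile sketch makes that packaging model concrete. This is purely illustrative: the base image, file name, and dependency are chosen for the example, not taken from any vendor mentioned in the article.

```dockerfile
# Hypothetical example: the image bundles the application, its runtime,
# system tools, and libraries -- but no operating system kernel.
# At run time the container shares the host's kernel.
FROM python:3.11-slim                  # base layer: language runtime and libraries
COPY app.py /srv/app.py                # the application itself
RUN pip install --no-cache-dir flask  # dependencies baked into the image
CMD ["python", "/srv/app.py"]          # no OS boot; the process starts directly
```

Because the image carries everything above the kernel, the same container can be moved between hosts running different Linux distributions, which is the portability point made below.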
Microservices enable operators to connect resources as they are needed by deploying multiple lightweight containers, which provides a more scalable, flexible service platform.
"They're (containers) smaller than virtual machines but that’s not why they were invented," said SDxCentral Managing Editor Craig Masumoto, who was the panel's moderator. "The point of containers is portability. Because you don't have that operating system, you can take a container and move it to a different machine that might be running a different version of Linux or Unix and it should still run. That's a big deal."
Matsumoto asked the panelists how containers changed things, or whether they changed anything at all, for NFV.
"I think this very much ties into this idea that you can push functionality into different places in the network," said Shai Tsur, software ecosystem program manager, ARM Holdings plc (Nasdaq: ARMHY; London: ARM). "I think it goes hand in hand with this idea that as you can devolve the VNFs into microservices, it also make sense to containerize those microservices for ultimate portability. We're involved in some work with OPNFV around this idea of how you containerize things at the edge, containerized functionality at the edge, because that's going to be critical in 5G."
When asked how tough it was to adapt to a containerized world, Tsur said that was more of a question for carriers, but containers add another layer of intricacy when it comes to customers adopting VNFs.
"You have to develop more of a DevOps type culture in order to fully take advantage of the potential technologies," he said. Matsumoto said he had the impression that the technology itself wasn't very complicated, but it did require a different mindset.
Walter Haeffner, a distinguished engineer at Vodafone Germany, was the lone service provider representative on the panel. Haeffner said Vodafone was looking at a cloud-native environment, which was "pretty much container based."
"One of the requirements of cloud native is that the microserivces are stateless, but networks have to keep state," he said. "We have to think about how to manage this and how to implement this in the software. If it is not solved we cannot apply it."
Matsumoto said there were efforts under way to support stateless containers.
Microservices can scale faster when they are stateless, but a service that relies on state would need a separate, dedicated container with different attributes.
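Haeffner's point can be made concrete with a small sketch. Python is used here purely for illustration, and the handler and store names are invented: a stateless handler keeps no data inside the container itself, so any replica behind a load balancer can serve any request.

```python
# Hypothetical illustration: a stateless handler externalizes all state,
# so it can run in any container replica behind a load balancer.

def stateless_handler(request, state_store):
    # All state lives in an external store (a shared dict stands in for
    # a database or cache here), never inside the container itself.
    count = state_store.get(request["session_id"], 0) + 1
    state_store[request["session_id"]] = count
    return {"session_id": request["session_id"], "count": count}

# Two calls that could land on two different replicas still agree,
# because both consult the same shared store.
shared_store = {}
r1 = stateless_handler({"session_id": "abc"}, shared_store)
r2 = stateless_handler({"session_id": "abc"}, shared_store)
print(r2["count"])  # prints 2
```

A handler that instead kept the count in a module-level variable would be stateful: the count would only be correct if every request for that session hit the same container, which is exactly the scaling constraint Haeffner describes for network functions.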
Douglas Ranalli, founder and chief strategy officer for NetNumber Inc., said one of the challenges he sees with containers is certification of his company's software in each container environment. NetNumber delivers an operating system along with its application, so it has control over the testing and security in the package it delivers.
"We have total control over what's in that operating system," he said. "In a container, we're expected to run on top of whatever operating system package that the particular customer selected. The good news is in general that works. But in specific, everyone wants to know exactly what performance they are going to get and the security compliance. We can't verify that until we know what you're actually going to run on that so it creates a verification issue."
Aman Sehgal, global vice president of sales and general manager at Virtual Gateway Labs, said his company added containers alongside VMs in its customer-premises device in the second half of this year.
"The main reason was the performance," he said. "Stripping away the OS definitely helped but at the same time you run into an issue where what type of applications are you now putting on top of the third-party OS?"
Ranalli said assurance for a container-based solution was a lot different than assurance for a VM-based one. He said NetNumber relied on operating systems to provide performance metrics and data collection. When there's one operating system per instance, it creates a clean association between the virtual resource workload and the physical resource workload, he said.
"When you move into the container world you now have this shared component of the operating system, and being able to understand what each container's influence is on that shared resource is a lot more challenging," Ranalli said.
— Mike Robuck, Editor, Telco Transformation