What if your brain and your heart were in two different bodies? A gruesome, Shelleyesque proposition, to be sure, but Doug Nassaur uses this visual to point out that greater efficiency can be achieved by way of virtually isolating and moving functions to parts of a network where they are in greater demand, and that virtual containers, aided by microservices, are an ideal way of accomplishing this.
Indeed, AT&T CTO Andre Fuetsch famously announced in August 2015, "We have an ambitious goal of virtualizing 75 percent of our network by 2020. Containers and microservices are key to reaching that goal."
Nassaur, a lead systems architect for AT&T, might not make for an ideal surgeon, but Telco Transformation reached out to him to get more insights into the telco's approach to containerization.
In Part 1 of this Q&A series, Nassaur talked extensively about AT&T's open container format endeavors and philosophies. (See AT&T's Nassaur: Keep an Open Mind on Containers.)
Now, in Part 2, he gets into meatier topics: the actual import of containers to both AT&T itself and the network operations industry as a whole -- while offering some tips along the way for network operators interested in deploying containers.
Telco Transformation: From the telco and operator perspective, and assuming virtualization is a given, why containers?
Doug Nassaur: It's a great question, and, unfortunately, it's one that isn't asked, or it's misunderstood. There are a lot of folks that focus on containers as something brand new, and obviously those of us who have been around for a while know otherwise. So the question, "Why are we talking about containers?" really needs to be asked.
Why are we talking about containers now? Since a lot of the folks involved in the conversation come from the virtualization space, they view containers as the next incarnation of the virtual machine, so they look at it as a resource-allocation question.
The reality is that while that stuff is important, the real reason that we're talking about containerization as a verb is that we really need to take business and technical functionality that we implement in software and be able to deploy it in an atomic manner, and be able to scale it independently of other functions and of infrastructure that may happen to be powering it.
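The idea of deploying a piece of functionality "in an atomic manner" can be pictured with a minimal container image definition. This is only a sketch -- the service name, base image, and entry point are hypothetical illustrations, not anything AT&T has described:

```dockerfile
# Hypothetical Dockerfile packaging one piece of business
# functionality as a single, atomically deployable unit.
FROM python:3.11-slim

# Only this function's code and dependencies go into the image,
# so it can be shipped and scaled independently of everything else.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY billing_service.py .

# One process, one function, one container.
CMD ["python", "billing_service.py"]
```

Because the image carries everything the function needs, it can be scheduled wherever demand is, without dragging along the rest of the application.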
TT: Last time, in Part 1, you mentioned that AT&T "knows what questions to be asking" about containers. (See AT&T's Nassaur: Keep an Open Mind on Containers.) So if this last question was an important one to ask, what other questions should we be asking -- and is AT&T asking -- about containerization?
DN: If you go back to this simple rule-of-three outline:
- "How do we define software?"
- "How do we distribute it for execution?" and
- "How do we execute it?"
Then the questions you have to ask yourself are:
- "Where do I want to distribute it?"
- Do I want to be solving this case for complex multi-cloud, cloud-to-things continuums?
- Do I want to be able to define an interaction between a group of stakeholders -- producers/providers/consumers -- that execute pieces of business and technical functionality?
- When I define that, do I want the software distribution to be smart enough to know [the use case and compatibility requirements for] particular enablers?
- Do you want to solve this problem at the transactional level, looking from the business or technical functionality down view, or are you going to do what is predominantly done today -- look at it from an infrastructure up view?
And the other one that comes to mind is: "Do you want to find Mr. Right, or do you want to find Mr. Right Now?" If the perfect Mr. Right is a 10, and Kubernetes delivers a 4, is it good? Absolutely. Is it Mr. Right? Absolutely not, because there are six other things that Mr. Right needs to do that Mr. Right Now doesn't do, and you have to go out and piece together Mr. Right from other projects or products. You want to say, "Hey, you know what? We're gonna throw in behind a set of standards that are not product-specific or project-specific; they're role- and function-specific."
TT: To what extent are containers important to operator environments today?
DN: The way we like to think about it, and the way I like to lay it out in an analogy, is that traditionally we've looked at a deployment model where we've put the brain, the heart, the lungs and the other vital organs always in the same body. So all the organs performed as a set, they behaved as a set, they were deployed as a set, they were scaled as a set, and they failed over as a set.
The incarnation of the next generation of containerization and microservices will allow us to take the brain and deploy that independently of the rest of the vital organs and not have to have it reside within the same body structure. So the way to do that -- the way to elementally insulate and isolate the software that implements those pieces of functionality -- is to containerize them. In order to do that, you also have to have a container support structure and you have to pivot the way you look at software, the way you define it, the way you distribute it, and the way you execute it.
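One way to picture the "container support structure" Nassaur describes is an orchestrator manifest that lets one organ scale on its own. The sketch below uses Kubernetes purely as an illustration (Nassaur does not name a specific orchestrator here); names like `brain` simply carry his analogy into configuration, and the image path is a placeholder:

```yaml
# Hypothetical Kubernetes Deployment: the "brain" function
# deployed and scaled independently of the other "organs".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: brain
spec:
  replicas: 3          # scale this function alone, not the whole body
  selector:
    matchLabels:
      app: brain
  template:
    metadata:
      labels:
        app: brain
    spec:
      containers:
      - name: brain
        image: registry.example.com/brain:1.0   # placeholder image
```

Scaling the brain to meet demand (`kubectl scale deployment brain --replicas=10`) touches nothing else; the heart and the lungs keep their own deployment lifecycles.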
TT: What is the role of containers in current and upcoming AT&T projects?
DN: Open container formatting is the first pathway. In that scenario, the architecture strategy remains the same. The packaging/distribution/execution strategy differs, and there's benefit there, but it's not the home run. There's still meat left on the bone.
So the second pathway is the microservice pathway. In that pathway, you not only decompose the application into its functional elements, but you also refactor the application logic so that each application component becomes part of a bigger ecosystem. I can now deploy the brain in a different place than the heart and the lungs, but I can still make it look like it's all in the same body, and I get the benefit of relocating fundamental elements closer to demand -- closer to operational elements that are important. That microservice orientation is very important and is probably the biggest enabler in being able to scale fundamental business and technical pieces independently from infrastructure.
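The property of making relocated pieces "look like it's still all in the same body" typically comes from a service-discovery layer: callers address a stable name, while the actual instances can run anywhere. Again a hedged sketch in Kubernetes terms, with invented names and ports, rather than anything AT&T has published:

```yaml
# Hypothetical Service giving the relocated "brain" a stable
# in-cluster name; callers don't know or care where it runs.
apiVersion: v1
kind: Service
metadata:
  name: brain
spec:
  selector:
    app: brain        # routes to whichever brain pods exist
  ports:
  - port: 80          # stable port callers use
    targetPort: 8080  # port the brain process listens on
```

The heart simply calls `http://brain/`; whether the brain's pods have been moved closer to demand is invisible to it.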
So those two pathways are what we're working on now, and we've got projects that are implementing them. This isn't about putting Lego together; it's about playing Tetris. There's an element of time because these building blocks are continuing to evolve, and what we think is very important from an architecture and strategy perspective is to know where to place our investment, to know whether to bank on a particular product or project or to get behind an effort to standardize an interface agreement in a layer of abstraction that will allow innovation to happen below that abstraction point. That will allow us to make intelligent investment where the return will be tremendous, and it doesn't lock us into a particular project, product, stack, or technology.
— Joe Stanganelli, Contributing Writer, Telco Transformation