AT&T Seeks to Lead with ECOMP

Last month, AT&T introduced Enhanced Control, Orchestration, Management & Policy (ECOMP). The complex cloud management platform is a means of rapidly and efficiently "onboarding" -- integrating into the system and rolling out -- new services from AT&T Inc. (NYSE: T) and outsiders in a manner that is consistent, automated and seamless. (See AT&T Shares ECOMP Vision, Might Share Software.)

Telco Transformation recently spoke about the project with Chris Rice, AT&T Labs' vice president of advanced technologies and architecture. Rice oversees research and advanced development in networking and IP network management, network virtualization, big data and other areas.

Telco Transformation: What is ECOMP?

Chris Rice: What we are really trying to do with ECOMP is handle an area fairly high in the stack for the network cloud we are building. At the base layer is the OS of the systems being used. On top of that there is typically a hypervisor layer. On top of that is the cloud layer, which uses OpenStack or something similar, and on top of all that is the VNF -- virtual network function -- automation layer. That's where ECOMP lives.

TT: Please describe it a bit more.

CR: Right now there are eight major components within ECOMP. There is orchestration of the virtual machine for compute, networking, storage, management and measurement. We have controllers for the network and for applications that configure the network plan as well as monitor applications. We have a data collection and analytics engine to monitor and compute KPIs, and policy engines that allow us to embed the intelligence we've created over the years. All the data is in the cloud. The VNFs are in a geo-redundant database that allows us to track inventory of what is active and available to use. There is a service design and creation environment that allows us to build and store infrastructure and service resource elements for building things going forward.
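The stack and the ECOMP functions Rice describes can be sketched as plain data. This is an illustrative model only: the labels are paraphrased from the interview and are not actual ECOMP module or API names.

```python
# Illustrative only: layer and component names are paraphrased from the
# interview, not real ECOMP identifiers.
NETWORK_CLOUD_STACK = [
    "operating system",              # base layer
    "hypervisor",
    "cloud layer (e.g. OpenStack)",
    "VNF automation layer (ECOMP)",  # where ECOMP lives
]

# Functions Rice names among ECOMP's eight major components:
ECOMP_FUNCTIONS = [
    "orchestration of VMs (compute, networking, storage)",
    "network controller",
    "application controller",
    "data collection and analytics engine (KPIs)",
    "policy engine",
    "active/available inventory (geo-redundant database)",
    "service design and creation environment",
]

def automation_layer(stack):
    """ECOMP sits at the top of the network cloud stack."""
    return stack[-1]
```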
TT: Where is ECOMP now?

CR: Right now it is an AT&T internal effort. It has about 8.5 million lines of code, and we do regular releases. We have 5.7% of our internal network on NFV/SDN, and we are using ECOMP to do that. Our goal is to get to 30% by the end of this year.

For us to do this more broadly -- to open it up to others -- we want to understand that there is enough interest to create a community that can be the rallying effort to bring homogeneity and standardization to the industry. That's one of the reasons we are looking for feedback. To do it right you have to build a community. You probably need a third party to be the integrator for the industry; that's not a role AT&T would have. We are in the early stages.

TT: Why did you architect ECOMP in this way?

CR: There is a reason there are eight elements and not three or 27. We tried to come up with what I call a necessary and sufficient number of elements, so that each has a very specific role and function that we felt we -- and, quite frankly, other service providers and cloud providers -- would need to build a robust set of services. Whether you are a Tier 1, 2 or 3, we believe you need to do service design and creation, keep track of inventory, handle policy, do data collection and computation of KPIs, control applications in a network, and be able to orchestrate things. That's how we came up with the eight.

TT: Where does this fit with the evolution of the cloud, software-defined networking (SDN) and network functions virtualization (NFV)?

CR: Some things are the same and some are different. What I would say is that the cloud has grown up. If you look at the cloud and the types of workloads it could handle early on, there obviously has been growth. In network clouds you still have some of the capabilities of standard server workloads, [but] workloads in network clouds are very strongly I/O-focused.
One of the differences in running a network cloud versus a standard cloud is that you are doing very heavy I/O-like functions, because you are providing networking capabilities. You are doing something different and somewhat unique, but you can still apply cloud principles.

TT: What other differences are there?

CR: You need lower latency, higher throughput and more real-time capabilities than are typical in clouds today. Take the evolution of the CPUs in standard servers as an example. Over the last ten years -- a little less -- the packet processing performance of standard server-class CPUs has improved by about 100x. One reason that happened was a recognition by the folks who build those CPUs that they would have to improve dramatically to take on these different types of workloads: network workloads, routing workloads, firewall workloads. These are high-throughput, high-packet-processing workloads. They would have to improve the way they handle packets. And they have, and you will see more evolution.

There is also no shortage of what I call "The Opens" -- OpenStack, OpenDaylight, open network operating systems, the Open Compute Project. There are different efforts out there on the software side of this.

TT: Why is AT&T doing this itself?

CR: We wanted to quickly onboard, set up and run virtual network functions, and to do it at scale. We didn't want to do something unique for each VNF or each VNF vendor. As we looked at other options, they were mostly partial solutions. We needed a more holistic solution that handled the orchestration, control, management and policy. There are strong open source efforts, but they weren't holistic at the VNF automation layer. It was important for us to lead and make sure we were defining it and making it happen, as opposed to waiting for it to happen. The traffic on our mobile network has increased by something like 150,000% since 2007.
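The 100x-in-roughly-ten-years figure Rice cites implies a steep compound annual improvement. A quick back-of-the-envelope check (an illustration of the arithmetic, not AT&T's methodology):

```python
# A 100x gain over ~10 years implies a compound annual growth rate of
# 100^(1/10) - 1, i.e. roughly 58% per year.
years = 10
total_gain = 100.0
annual_rate = total_gain ** (1 / years) - 1
print(f"implied improvement: {annual_rate:.1%} per year")  # ~58.5% per year
```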
Things like that have driven us to ask, "How do we keep up with that demand and do so flexibly, quickly and reliably?" That's why it was important to jump in and drive the industry to make it happen. We operate at that kind of scale, and having people understand scale and be able to build to scale, especially in new areas, is sometimes difficult. We thought it was important to do some of these things ourselves.

TT: How does this affect AT&T's resiliency?

CR: If you look at AT&T's track record, we have always been really resilient. It is a matter of how you get the resiliency. Before, resiliency was built at the atomic level ... we'd put a specific system in specific racks. They might have been redundant. The plant is powered in a redundant way and we have redundant sites. That approach is service-specific or solution-specific and very atomic.

With the cloud you can get the same kind of reliability and resiliency, but in a more distributed fashion. The type of cloud and the elements you put in place are common and can be reused across different solutions. You can put in reliability at different layers: some in the plant layer, some in the automation software layer or in the VNF layer, or in a different combination. You can do active-active, active-standby or truly distributed systems to get whatever reliability you need. It gives us more flexible options to meet our already demanding standards in that area.

TT: What will AT&T bring to the industry with the project?

CR: We are looking at an industry that is nascent, and nascent industries have certain properties. One of those properties is a need to bring order, if you will. I hate to use the word "standards" -- but I will. What I hope to be able to do is do it as a community, rather than as one-offs by individual service providers.

TT: Is the industry showing great interest?

CR: There is a lot of interest in this area. I fairly regularly meet with new hires. For us, a new hire is usually a fresh Ph.D.
from a top school. I talk to them about the work they did, when they started and where they are now. I see more and more software skills, more and more interest in software-defined networks, in what software can do and how it can change networking. The reality is that we are in an era, if you will. Mobility was an era; the conversion from circuit-switched to packet was an era. This is another era that you know you are a part of.
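The redundancy options Rice mentions (active-active, active-standby, distributed) can be compared with the textbook availability formula for independent replicas. This is a generic sketch with made-up availability numbers, not AT&T data:

```python
def redundant_availability(a: float, n: int) -> float:
    """Availability of a service that stays up as long as any one of
    n independent replicas (each with availability a) is up:
    1 - (probability that all n are down simultaneously)."""
    return 1 - (1 - a) ** n

# Two hypothetical 99%-available instances in active-active:
print(f"{redundant_availability(0.99, 2):.4%}")  # 99.9900%
```

Layering redundancy at the plant, automation-software and VNF levels compounds in the same way, which is why distributed cloud designs can match the resiliency of the older rack-level approach.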
— Carl Weinschenk, Contributing Writer
Copyright © 2024 Light Reading, part of Informa Tech, a division of Informa PLC. All rights reserved.