Artificial intelligence and analytics are expected to predict equipment breakdowns, prevent failures, meet service assurance goals, reduce network operations costs, improve customer experience and more.
In Part 2 of a two-part Q&A, Telco Transformation spoke to Andrew Dugan, chief technology officer for Level 3 Communications Inc. (NYSE: LVLT) about the challenges and opportunities of utilizing artificial intelligence (AI), machine learning and analytics in streamlining the network. (See A Deep Dive Into Intent-Based Network Management With Level 3's CTO for Part 1 of this interview.)
Telco Transformation: How much of an impact has analytics made in service assurance, reducing costs and improving customer experience?
Andrew Dugan: I think it's important to differentiate AI and machine learning -- two very broad terms. In our definition, AI emulates the brain in software, while machine learning is more a subset of AI based on curated or guided algorithms that are focused on a particular task, such as making video recommendations in a streaming service, like Netflix, or a network operator using machine learning to predict hardware failures.
The industry has evolved significantly in making these technologies more available. What was once mainly the work of computer science researchers is now readily accessible outside of research facilities; we can download machine-learning libraries at home. When you combine the availability of big data technologies and the accessibility of machine learning, it's becoming quite a powerful capability.
TT: What methods does Level 3 use for data analytics?
AD: Level 3 leverages Apache data analytics solutions to stream and process data from our network, understand our current environment and identify threats on the public Internet. Much of our analysis and machine learning is done "out of band" so we can validate the models before updating the logic in production. We build provisioning models for each type of service scenario that we deliver. Each service configuration model must be validated against all possible variable inputs to those models before we release it into a production environment. Failure to do so could result in misconfiguration of our network, which could impact a single service or multiple services.
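The validate-before-release step described above can be sketched as exhaustively exercising a provisioning model over the cross-product of its variable inputs and promoting it only if no combination fails. This is a minimal illustration; the model, its inputs and the VLAN/bandwidth parameters are assumptions, not Level 3's actual tooling.

```python
from itertools import product

def render_vlan_config(bandwidth_mbps, vlan_id, redundancy):
    """Toy provisioning model: render a service config from its inputs."""
    if not (1 <= vlan_id <= 4094):
        raise ValueError(f"invalid VLAN id: {vlan_id}")
    if bandwidth_mbps <= 0:
        raise ValueError(f"invalid bandwidth: {bandwidth_mbps}")
    return {
        "vlan": vlan_id,
        "rate_limit_mbps": bandwidth_mbps,
        "protection": "1+1" if redundancy else "unprotected",
    }

def validate_model(model, input_domains):
    """Run the model over every combination of input values.

    Returns a list of (inputs, error) pairs; an empty list means
    the model is safe to release into production.
    """
    failures = []
    names = list(input_domains)
    for values in product(*input_domains.values()):
        kwargs = dict(zip(names, values))
        try:
            model(**kwargs)
        except Exception as err:
            failures.append((kwargs, err))
    return failures

domains = {
    "bandwidth_mbps": [100, 1000, 10000],
    "vlan_id": [1, 100, 4094],
    "redundancy": [True, False],
}
print(len(validate_model(render_vlan_config, domains)))  # 0 -> safe to release
```

In practice the input domains would come from the service catalog, and a non-empty failure list would block the model from reaching the production environment.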
TT: What about threat detection?
AD: For threat detection, our security operations center develops models for malicious traffic on the Internet and those are validated by re-playing our data collection streams through a test environment to verify that they are not creating false positives before being released into production. Given the potential to impact customers with our service assurance use case, we maintain tight control over this logic. Our architecture lends itself to rapid updates, allowing us to quickly introduce new logic once it is validated.
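The replay validation described here can be approximated as running a candidate detection rule over a recorded, labeled traffic stream and promoting it only if it raises no false positives on known-good flows. The rule, flow fields and labels below are illustrative assumptions, not the actual security operations tooling.

```python
def candidate_rule(flow):
    """Toy detection rule: flag flows with many SYNs and no ACKs."""
    return flow["syn"] > 100 and flow["ack"] == 0

def replay_validate(rule, recorded_flows, max_false_positive_rate=0.0):
    """Replay a labeled capture through a candidate rule.

    Returns (ok_to_release, observed_false_positive_rate).
    """
    false_positives = sum(
        1 for flow in recorded_flows
        if rule(flow) and not flow["malicious"]
    )
    benign = sum(1 for flow in recorded_flows if not flow["malicious"])
    rate = false_positives / benign if benign else 0.0
    return rate <= max_false_positive_rate, rate

recorded = [
    {"syn": 5,   "ack": 5, "malicious": False},  # normal handshake
    {"syn": 500, "ack": 0, "malicious": True},   # SYN flood
    {"syn": 8,   "ack": 8, "malicious": False},
]
ok, fp_rate = replay_validate(candidate_rule, recorded)
print(ok, fp_rate)  # True 0.0 -> safe to release
```

A failed validation would send the rule back for refinement rather than into production, matching the tight control over service-impacting logic described in the answer.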
TT: What are the applications of network management that are already possible with the current state of machine learning and what will it take to get to complete automation?
AD: There's a lot to be excited about in terms of what's already possible, and I can point to some examples from Level 3, such as our CDN, our SDN-controlled Ethernet services and our security solutions. First, our CDN server repair automation is leveraging data analytics for self-healing. The evolution of analytics is moving beyond the base-level if/then algorithms to those that are more advanced, learned and predictive.
In 2016, Level 3 achieved a milestone of 80% automatic restoral of CDN server impairments thanks to logic that is refined by several means -- including machine learning. Machine learning allows us to advance our repair algorithms to identify failing servers and refine our detection and repair methods. A second use case is how data analytics can be leveraged over Level 3's SDN-based Ethernet platform. Using our portals, customers can configure their services to increase subscribed bandwidth based on network data collection and associated analytics to trigger usage-based thresholds.
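The usage-based threshold trigger in the second use case can be sketched as follows: if measured utilization stays above a configured fraction of the subscribed rate for several consecutive readings, an upgrade to the next bandwidth tier is requested. The thresholds, sample window and tier list are illustrative assumptions, not Level 3's portal logic.

```python
def should_upgrade(samples_mbps, subscribed_mbps,
                   threshold=0.8, sustained_samples=3):
    """True if the last `sustained_samples` utilization readings all
    exceed `threshold` of the subscribed rate."""
    recent = samples_mbps[-sustained_samples:]
    return (len(recent) == sustained_samples and
            all(s > threshold * subscribed_mbps for s in recent))

def next_tier(subscribed_mbps, tiers=(100, 200, 500, 1000)):
    """Smallest offered tier above the current subscription, if any."""
    higher = [t for t in tiers if t > subscribed_mbps]
    return higher[0] if higher else None

samples = [60, 85, 88, 92]  # periodic utilization readings, Mbps
if should_upgrade(samples, subscribed_mbps=100):
    print("upgrade to", next_tier(100))  # upgrade to 200
```

Requiring sustained readings rather than a single spike is the design choice that keeps the trigger from flapping on bursty traffic.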
Over the last few years, we have seen a transition in how customers manage their bandwidth. Where they previously did so by manually adjusting subscription levels, we're now seeing more than 80% of dynamic bandwidth changes resulting from analytics-based triggers.
Finally, our threat analytics platform relies heavily on machine learning to continually adjust its algorithms. With the ever-changing landscape of network threats, it is critically important to have a highly dynamic analytics platform to adjust and to learn the patterns that allow for the detection of those threats. We employ a combination of automated pattern detection with human analysis of those new patterns to teach our algorithms the difference between good and bad traffic on the Internet.
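The human-in-the-loop workflow in this last answer, where analyst labels teach the algorithms to separate good traffic from bad, can be sketched as a classifier that learns incrementally from labeled patterns. The feature names and simple count-based scoring are assumptions chosen for illustration, not the platform's actual algorithm.

```python
from collections import Counter

class TrafficClassifier:
    """Minimal incremental classifier trained on analyst-labeled patterns."""

    def __init__(self):
        self.counts = {"good": Counter(), "bad": Counter()}

    def learn(self, features, label):
        """Incorporate an analyst-labeled pattern (label: 'good' or 'bad')."""
        self.counts[label].update(features)

    def classify(self, features):
        """Score each class by how many learned features match."""
        scores = {
            label: sum(self.counts[label][f] for f in features)
            for label in ("good", "bad")
        }
        return max(scores, key=scores.get)

clf = TrafficClassifier()
# Automated detection surfaces new patterns; an analyst labels them:
clf.learn(["high_syn_rate", "spoofed_src"], "bad")
clf.learn(["steady_rate", "established_flow"], "good")
print(clf.classify(["high_syn_rate"]))  # bad
```

Each new analyst judgment feeds straight back into the model, which is what keeps the platform dynamic as the threat landscape changes.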
— Kishore Jethanandani, Contributing Writer, Telco Transformation