Securing the virtualized system isn't very different from securing the black-box legacy system, says Phil Ritter, director of network technology planning at Verizon -- insofar as both require the three D's: diligence, decentralization and double-checking your work with third parties.
In the first installment of this Q&A, Ritter spoke about the pros and cons of SDN and NFV-based security. (See Verizon's Ritter on Virtualization as a Security Tool: 'Cautious Optimism'.) Now, in part two of this Q&A -- edited for clarity and length -- Ritter speaks at length about the diligence and vigilance required in building security into virtualized systems.
Telco Transformation: Previously, you spoke extensively about isolation and segregation in the network -- something I've also discussed with your Verizon colleague, vice president of business networks and security solutions Shawn Hakl. (See Verizon's Hakl: SD-WAN Delivers a Multitude of Benefits and Verizon's Hakl: SDN Creates Virtualized World.) Hakl has speculated on the SDN security notion of potentially having a sort of demilitarized zone using specialized packet-optimization algorithms for content distribution. (See Security Takes On Malicious DNA (Files).) Is this something you've thought about doing in your own internal systems? Where do you see the future of using virtualization this way -- letting data move more freely, with less rigorous scanning, because of this virtualized network segmentation?
Phil Ritter: I think Shawn was speaking of customer-facing services that we might offer, and you're asking me to answer it in the context of things we might do for our own purposes -- and it remains true that the same things we might do for our customers are interesting for ourselves. In fact, in a very real sense, the most interesting things we might offer our customers would be things that we've already done for ourselves, because we're quite good at them and have proven the value.
There are large parts of Verizon's systems that face these very same challenges. Some of our video distribution faces this same challenge, where we want to bring the video content as close to the delivery point as possible so that we don't have to retrieve it from the source every time it's played, for example. The virtualization functions that we're putting in place, or the software-defined networking systems that support virtualization, actually do make it possible for us to implement that kind of content distribution for our own purposes and then potentially expose it in a way that our customers can use the same thing.
Again, it's a case of needing to be very, very cautious, because if this is for a private Internet service that we offer to some of our enterprise customers, doing content distribution on their private network, we just have to be very careful that we put the perimeter controls in place around that to ensure it doesn't leak out and become exposed to the public Internet.
In the non-virtualized world, the way that we would do that would be by deploying their product on a dedicated appliance -- something that was only used by them. The problem with that is that dedicating an appliance for every customer group becomes very expensive very quickly, and the promise of virtualization is the promise of shared assets that we can use to deliver that service more cost-effectively. But again, as I said before, every tool can become a weapon -- and once we have that sharing, we have to ensure that we have the policy rules in place so that one customer's traffic can't leak either to another customer or to the public Internet.
You certainly wouldn't want a situation where you accidentally interconnect two systems or two customer groups that are on the same platform… The orchestration and management tools have to be in place, the audit tools have to be in place and the third-party validation tools have to be in place to ensure that our rules work before we jump too aggressively into providing those services on shared assets. So we're in this place right now where we have a great deal of promise, but like I said, that same promise creates a much bigger attack footprint, as some people would call it, or opens a set of threat vectors that didn't exist in the appliance world.
TT: What are some common particularized threats that you've seen or heard of in this virtualized appliance world? Or are there any war stories that you can share?
PR: I will say that the biggest threat vector we face probably hasn't changed from the prior environment; the biggest threat vector we've faced is really just sloppiness, or lack of diligence, in deploying systems. When there have been problems, what's common about all of them is that they weren't sophisticated -- they were things we would have known how to address if we were just diligent. Take something that might once have been a very localized threat -- for example, someone not being diligent about enforcing changes to default passwords. Where that might have caused a very localized threat to a small deployment, if that same thing happens on a virtualized system serving multiple services for potentially multiple customers, it becomes a very big deal. And that is why we've taken the approach of having a third-party audit and a third-party penetration test really, really enforced for almost everything we've put in place to date.
TT: In your career, what have you seen network operators, network architects and others get wrong at the beginnings of their organizations' digital transformations to virtualization -- mistakes that can then, or will then, lead to a security disaster?
PR: That's actually a very interesting question. I think in places where there may have been security issues reported publicly -- and I'm not speaking directly of an experience at Verizon, but where you have seen things reported on publicly -- in many cases, there's just not a strong awareness that the security domain has to be treated as seriously as I've just described. In places that have gotten into trouble, with some very big news stories in the recent past, you can trace almost all of it to basically just lack of diligence. Perhaps not getting published security updates integrated into your system quickly enough, or perhaps leaving responsibility for those security updates in the hands of a single party with no secondary or third-party audit to ensure that they actually got done -- I think those factors have played out in recent news events you may have heard about. What drives a secure outcome is a systemic approach to security that is ingrained in everyone's thinking from day one. Not allowing a single party to have total control or responsibility without cross-checks, functional checks and third-party review is probably the thing that most have gotten wrong. So it's not architectural as much as it is the business process around security.
TT: So if I'm hearing you right, it sounds like -- virtualization or no; hybrid, legacy or fully digitally transformed -- the basics are the basics.
PR: You know, the basics are the same. I think what changes when you come into virtualization is that the footprint of what is exposed grows, and that raises the importance of those basics.
— Joe Stanganelli, Contributing Writer, Telco Transformation