NFV is no more secure -- and may be even less secure -- than legacy systems if it's not implemented well, argues BT compliance executive and ETSI NFV Security Group Vice Chair Alex Leadbeater.
As telcos struggle to reconcile virtualization hype with virtualized-network security, the European Telecommunications Standards Institute (ETSI) is working to help its members and collaborators by bringing security basics back into digital transformation -- something that Leadbeater suggests is too easily forgotten, as he explains in part one of this Q&A on NFV security considerations.
In part two, look for Leadbeater to comment on how virtualized-network security is impacted by network automation, microservices and varying degrees of separation.
Telco Transformation: Well, let me start off with the most loaded question I can think of: Is virtualization inherently more secure than the legacy way of doing things, or less secure -- or is the truth somewhere in between?
Alex Leadbeater: I will give you the answer, and then we can debate how we arrive at it -- and the answer is "somewhere in between." A well implemented virtualized network is probably more secure than a legacy one.
The question is then, "Well, okay, how do you make a well implemented NFV network that is secure?"
There is nothing stopping you from doing it. I think the only difference is that it's much easier to take a virtualized system and make a bad job of securing it, whereas legacy largely got away with security through obscurity. So it's a matter of comparing like with like. And legacy, for years, got away with being relatively secure by only allowing certain people into the club. NFV, virtualization, the Internet cloud model -- they all let everybody into the club. So, actually, you have to put in place more security than you had on the legacy side. And the answer is therefore "somewhere in between."
TT: It sounds like, regardless of the level of virtualization, security fundamentally comes down to how well implemented the solution is from the get-go. Am I stating that correctly? Or do you have a little bit more leeway for mistakes in virtualization implementation?
AL: So I think there are two factors here. One is that the attack surface is increasing -- the number of people who are potentially interested in attacking telecom infrastructure or cloud infrastructure increases year on year, as do the number of viruses and the number of overall threats.
The other difference is that with open source and other software on which most of the virtualized technologies are based, hackers have a starting point with it. So if you don't design security into the heart of your virtualized technology, then you are going to have a problem.
Legacy, as I say, got away with it because it was predominantly proprietary and it was a closed user group. Is the older technology more secure once you strip away the proprietary knowledge required to understand how it worked -- and to get anywhere near it in the first place? No, it isn't. So it's a matter of the threat landscape changing over time, and each is vulnerable to different things.
TT: What are the mistakes that you commonly see over and over again in virtualized network implementations, orchestrations and architecture?
AL: Some companies learn the hard way and some last a while longer, but it's a matter of: When you start a virtualized design, is security what you take with you from day one? Is it one of the underlying principles, or is it something that you throw in at version 2.0? So the most common mistake is not doing security until version 2.0. That is the start point.
The second thing I think people make a mistake on is that they're sure that vulnerabilities will not occur to them -- and, therefore, there's a certain mindset that says I will design security to prevent attack, and so on. "I've implemented A, B, and C, and A, B, and C means my product is secure; it means that people will not get in." That in and of itself is the biggest mistake with virtualized or cloud technologies -- or, in fact, even a legacy model. The same applies. What you really have to think about is that vulnerabilities do occur. Software, no matter how well written, will very often have either underlying or even accidental vulnerabilities that occur because something either changes or the software was just not good enough when it was written.
The mistake is to assume that these things will not occur. When you deal with this, you say, "Okay, vulnerabilities may occur. People may attack the network." The people that get it right design their products securely so that when there is some form of failure in the product, it doesn't become, effectively, an absolute zero-day exploit. So a small piece of the product may fail, but the rest of the system will continue entirely happily, unaffected by whatever happens. And that is the difference between good security and bad security.
TT: What are some other security considerations for legacy versus virtualization?
AL: I'm frequently asked that question, whether internally or by vendors asking me, "What sort of solution are you looking for?" Or, indeed, wider industry or government people are asking, "What are the impacts of virtualization on security?" And it does come back to the notion that NFV -- or virtualization cloud technology -- implemented well is potentially much more secure than its predecessor. So with security by default, security by design, we are actually designing in end-to-end encryption, we are actually designing in well-thought-out security between VMs. You're placing cryptographic keys in hardware security modules and you are fundamentally designing things to run on hardware-mediated execution environments. These are all things NFV-SEC works on, and ETSI NFV has been leading it.
Legacy networks were designed on the principle that most large telecommunications companies -- which generally came out of the postal communities of the 1950s and before -- were large, monolithic government entities, of which there were only a few hundred at most. And normally they trusted each other on the basis of "You do something to me, I'll do something to you." It's the nuclear MAD -- Mutually Assured Destruction -- principle. That model is long gone. You'd be mad to think it's going to result in your networks being secure.
So actually what I say to people is: "NFV done well is a good thing for security. NFV done badly is far worse than legacy, because the number of input attack vectors is much, much larger." It's very difficult to attack an X.25-generation C7 switch from an 18-year-old's bedroom with an IP connection, because it doesn't have an IP address.
Every NFV component has an IP address, so it has many more input threats against it. However, well implemented, with continuous security monitoring and both runtime and boot-time attestation, it has the fundamental principles of being more secure. And, done well, it's massively more flexible. So it is a direction of travel, and I think it's going to be around for a long while to come. In order to learn to ride a bike, you've got to remove the stabilizers, and in removing the stabilizers you may fall over from time to time. The trick for virtualization security is to try not to fall over too many times.
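The boot-time attestation Leadbeater mentions can be illustrated with a minimal sketch: before a virtualized function is launched, its image is measured (hashed) and compared against a known-good value, so a tampered image is refused at boot. This is only an illustration of the principle -- the function name, image bytes, and allow-list here are hypothetical, and a real deployment would anchor the measurements in a TPM or signed manifest rather than an in-memory dictionary.

```python
import hashlib

# Hypothetical allow-list of known-good VNF image measurements.
# In practice these would come from a signed manifest or a TPM-backed store.
KNOWN_GOOD = {
    "firewall-vnf": hashlib.sha256(b"firewall-image-v1").hexdigest(),
}

def attest_boot(name: str, image_bytes: bytes) -> bool:
    """Boot-time attestation sketch: measure the image and only
    allow launch if the hash matches the recorded value."""
    measured = hashlib.sha256(image_bytes).hexdigest()
    return KNOWN_GOOD.get(name) == measured

# A pristine image attests; a tampered one does not.
print(attest_boot("firewall-vnf", b"firewall-image-v1"))    # True
print(attest_boot("firewall-vnf", b"firewall-image-evil"))  # False
```

Runtime attestation extends the same idea by re-measuring components while they run, which is what turns a one-off boot check into the continuous security monitoring described above.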
— Joe Stanganelli, Contributing Writer, Telco Transformation