Is your enterprise IT organization spending too much on the cloud? Or, perhaps, not enough? How would you even know?
One of the problems of the cloud -- whether you've been there a long time already or are just beginning your rip-and-replace migration from legacy on-premises systems -- is cost and usage management. Between the variety of cloud computing plans, the hidden fees layered on top of them, and other SLA "gotchas," it is imperative to know exactly how many cloud resources your enterprise really uses and demands.
For the first installment of this two-part Q&A, which was lightly edited for clarity, Chris McReynolds, vice president of core network services for product management at Level 3 Communications Inc. (NYSE: LVLT), gave Telco Transformation his insights on how enterprises can better carry out the cloud monitoring they need. In Part 2, McReynolds goes into greater detail on the respective roles of cloud and virtualization in enterprise IT.
Telco Transformation: How can IT most effectively measure and monitor its specific workload requirements in the cloud?
Chris McReynolds: That one's hard. I'll give you a network perspective on oversubscription. We sell X amount of bandwidth because that's how customers have sized it -- that's what they think the right amount is -- and then that traffic gets to the cloud providers, who oversubscribe. Say there are five telecom providers, and customers have bought 100 Gbit/s of capacity from each; a cloud provider, when it actually gets into their server infrastructure, might only deploy 100. So maybe there's 500 Gbit/s of purchased capacity coming into them, but when they look at the traffic flows, they're only seeing maybe 80 of that 500 as continual traffic flow. So people are definitely over-purchasing on the network.
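The arithmetic behind that example can be made explicit. The figures below simply mirror the numbers McReynolds cites (five providers at 100 Gbit/s each, roughly 80 Gbit/s of continual flow); they are illustrative, not measured data.

```python
# Illustrative oversubscription math from the example above:
# five telecom providers each selling 100 Gbit/s of customer capacity.
purchased_gbps = [100] * 5                 # capacity bought per provider
total_purchased = sum(purchased_gbps)      # 500 Gbit/s of purchased capacity
observed_flow = 80                         # continual traffic actually seen

utilization = observed_flow / total_purchased
oversubscription_ratio = total_purchased / observed_flow

print(f"Utilization: {utilization:.0%}")                          # 16%
print(f"Oversubscription ratio: {oversubscription_ratio:.2f}:1")  # 6.25:1
```

In other words, at these numbers the purchased capacity exceeds observed demand by more than six to one, which is the over-purchasing McReynolds is pointing at.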
It's really hard to troubleshoot application performance because there are so many different places it could break down. It could be in your enterprise headquarters; it could be the local-area network. Or it could be the connection from that headquarters to your wide-area network. Or it could be that you don't have enough virtual machines or servers operating in your public cloud environment. Or it could be that the application is poorly written for the move from a legacy infrastructure to a public cloud environment. There are some tools out there to help diagnose those issues, but I think that's an area of massive opportunity for new companies.
TT: In terms of measuring and monitoring workload environments in the cloud, to what extent, if any, can virtualization help here?
CM: It can help... On our network services, we have a service called enhanced management that shows a lot of detail around things that would impact application performance -- jitter, latency, packet loss, and elements like that -- to help an enterprise troubleshoot. I don't see anyone doing this other than systems integrators that are manually putting the pieces together themselves to give enterprises that insight. I haven't seen many applications that give true end-to-end visibility, with all of the piece parts built up, that let you troubleshoot in a really easy fashion. So the way to do it with virtualization is, piece by piece, start adding and subtracting resources to see if you can determine whether that's where the fundamental issue is.
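That add-and-subtract approach can be sketched as a simple loop: vary one resource at a time and watch a performance metric until it meets a target. This is a minimal illustration, not a real monitoring API; `measure_latency_ms` is a hypothetical stand-in for whatever probe your environment provides.

```python
# A minimal sketch of isolating a bottleneck by scaling one resource
# at a time, as described above. `measure_latency_ms` is hypothetical:
# here it models a workload whose latency improves as VMs are added.

def measure_latency_ms(vm_count: int) -> float:
    # Stand-in for a real measurement; assumes a fixed 20 ms network
    # floor plus compute time that shrinks as VMs are added.
    base_ms, work_ms = 20.0, 400.0
    return base_ms + work_ms / vm_count

def find_sufficient_vms(max_vms: int, target_ms: float):
    """Add VMs one at a time until latency meets the target."""
    for vms in range(1, max_vms + 1):
        if measure_latency_ms(vms) <= target_ms:
            return vms   # compute capacity was the bottleneck
    return None          # target never met: look elsewhere (LAN, WAN, app)

print(find_sufficient_vms(max_vms=10, target_ms=125.0))  # prints 4
```

If no amount of added compute hits the target, the loop returning `None` is the signal to move on and test the next piece -- exactly the piece-by-piece elimination McReynolds describes.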
TT: So can we see greater visibility with virtualization because of the very nature of things like SDN and NFV, where you have to have these constant updates?
CM: Yeah. I think that's a fair statement. I do think that on the private side, for sure, when you're virtualized, you have better visibility into what is running where and its consumption of those server capacity assets. That, tied with the views that the public cloud providers publish via their APIs, would give you better overall visibility into the experience.
TT: Is it easier for me to measure and monitor my virtualized cloud workload usage if I'm already starting in a virtualized cloud environment, or is it easier to do that if I'm starting in a completely on-premise legacy environment before I make that move?
CM: I think the easiest to manage and monitor is the born-in-the-cloud environment, or the new applications you develop that are solely hosted in the cloud, because each individual cloud provider actually has a pretty good view into that application. When you start bridging old infrastructure to new infrastructure, and you're sending data back and forth between the public cloud and your private infrastructure, the private world doesn't have an easy dashboard, so to speak. So really what people are trying to develop is a view that rolls up your private infrastructure and connects it with the public cloud infrastructure. Born in the cloud, I would say, is hands-down the easiest way to get that visibility.
— Joe Stanganelli, Contributing Writer, Telco Transformation