Capital letter 'S,' 'D,' 'N' will never happen: VMware exec

Rack and stack that network and then walk away and leave it alone
  • John Dix (Network World)
  • 13 January, 2014 12:13

VMware's NSX technology will provide all the control necessary going forward, says Steve Mullaney, senior vice president and general manager of VMware's Networking & Security Business Unit.

In a wide-ranging interview with Network World Editor in Chief John Dix, Mullaney outlines the company's vision of software-controlled networks, challenges competing Software Defined Networking visions, including Cisco's ACI initiative, and explains how the company will roll out higher-layer network services. Mullaney claims the company is winning big accounts that will be made public this year, and that 2015 will see an explosion in adoption.

Describe the problem you're trying to solve.

I think IT shops are looking at Amazon and Google and Facebook and saying, "We need to be more like them." A primary driver is an agility requirement. IT has realized that, while it can do wonderful things on the compute side in terms of spinning up servers in seconds, the operational model of networking is still very manual, very static, very brittle. That's the primary problem we're solving. Along with that comes operational efficiency. At those big data center innovators one guy manages two or three orders of magnitude more servers than the guy in the average IT shop. So there's an efficiency thing from an operational perspective which ultimately relates to OpEx savings. And then on the CapEx side there is the same thing. People are asking, "How can I generalize my infrastructure and have commonality so I can ultimately be more like the Googles and the Amazons?"

What NSX does is it says, the way to get to that Promised Land is through what we call a software-defined data center. We've seen the huge transformational characteristics of server virtualization, but we need to virtualize all the infrastructure, and that means the network as well. The network is the key enabler for that SDDC vision.

Even though VMware endorses the software-defined data center concept, the company goes out of its way to avoid describing its network approach as Software Defined Networking. Why?

I guess I'm not really sure what SDN means because it means so many things to so many people. I think of it in terms of the small "s," small "d," small "n" meaning. Do you believe the future of the data center will be more defined by software than hardware? Yes, I do. Therefore I am an sdn, small letters, advocate. It's a philosophy to me. It's not a thing.

And so yes, I believe software will define it. I believe the way to get there is through network virtualization, where you decouple your software from the underlying physical infrastructure. We think of the physical network as a fabric, the backplane. Its job is to forward packets from Point A to Point B. We will tell you what to do with that packet; you just need to forward it. I've completely taken the intelligence, other than forwarding, out of the physical infrastructure and put it in software. And then, through software, we can create the illusion of a fully functional network with complex services, all in software.

Basically what VMware did on the server side is safely reproduce the x86 environment in software, and now we're doing that on the network side with network virtualization. And once you've done that it's all programmatically controlled through APIs such that you can create logical networks, you can attach VMs, you can apply services, you can do all kinds of wonderful things in software. And then when you're done you hit a button and boom, everything goes back into the resource pool.
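To make that programmatic model concrete, here is a minimal sketch of such a lifecycle in Python. The endpoint paths, payload fields, and credentials are invented for illustration; they are not the actual NSX API.

```python
# Hypothetical sketch of an API-driven logical-network lifecycle.
# URL paths and payload fields are illustrative, not the real NSX API.
import requests

BASE = "https://nsx-manager.example.com/api"  # hypothetical manager URL
AUTH = ("admin", "secret")                    # placeholder credentials

# 1. Create a logical network entirely in software.
net = requests.post(f"{BASE}/logical-networks", auth=AUTH,
                    json={"name": "app-tier", "transport": "vxlan"}).json()

# 2. Attach a VM's virtual NIC to the logical network.
requests.post(f"{BASE}/logical-networks/{net['id']}/ports", auth=AUTH,
              json={"vm": "web-01", "vnic": 0})

# 3. Apply a service -- here, a firewall rule scoped to the network.
requests.post(f"{BASE}/firewall/rules", auth=AUTH,
              json={"applied_to": net["id"], "action": "allow",
                    "src": "web-tier", "dst": "app-tier", "service": "tcp/8443"})

# 4. When you're done, one call returns everything to the resource pool.
requests.delete(f"{BASE}/logical-networks/{net['id']}", auth=AUTH)
```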

So that to me is software-defined networking with small letters. It has nothing to do with controlling physical switches and using OpenFlow to control those switches. All of this is done, again, with the philosophy of virtualization, which is decouple. That's the key word. You're decoupled from the physical infrastructure.

The key is not to have to touch the physical infrastructure. Leave it alone and do what you do as an augmentation. Make that physical infrastructure better without touching it. Some of the network people have kind of bastardized what SDN means. They say, "Well, since I'm a physical network company, SDN must therefore mean software control of all of my physical switches." No. That's like a better CLI. It's interesting, but it's not actually what people need. What they need is network virtualization and being decoupled from the physical infrastructure, because the whole point is not to have to touch it.

For companies that go the other route and end up with some physical SDN controllers, will those controllers be able to interact with your controllers?

Absolutely. We've talked publicly about things we're doing with HP. HP's SDN controller will control their physical hardware and we'll do some federation with them. And if somebody wanted to control their physical infrastructure -- I can't think of any reason why they would want to, but if they did -- we'd say great. Go for it. We are very complementary to that.

You folks are talking about rolling out various upper-layer network services in software. Expand on that a bit.

Firewall is a perfect example. All of our firewall intelligence is at the edge of the network, either in the vSwitch or in a top-of-rack physical switch. And then the distribution and core, the physical part of the data center network, just looks like an L3 network that forwards packets, and that's it. You rack it once, you wire it once, you never touch it again.

So we build effectively what is a distributed scale-out version of a firewall. There's a little piece of firewalling at every vSwitch. And as you add more compute nodes you add more firewalling capability, and when you move VMs around that firewalling capability moves around with it.
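A toy model, with invented names, shows the shape of that idea: rules are written once against VM identity, and each hypervisor enforces only the rules for VMs it currently hosts, so enforcement follows a VM when it moves.

```python
# Toy model of a distributed, scale-out firewall. Policy is defined once,
# per VM identity; each hypervisor's vSwitch enforces only the subset of
# rules that touch VMs it currently hosts.
RULES = [
    {"src": "web-01", "dst": "db-01", "port": 5432, "action": "allow"},
    {"src": "*",      "dst": "db-01", "port": "*",  "action": "deny"},
]

class Hypervisor:
    def __init__(self, name):
        self.name = name
        self.vms = set()          # VMs currently running on this host

    def local_rules(self):
        # Only rules involving locally hosted VMs are enforced here.
        return [r for r in RULES if r["src"] in self.vms or r["dst"] in self.vms]

hv1, hv2 = Hypervisor("hv1"), Hypervisor("hv2")
hv1.vms = {"web-01", "db-01"}

# vMotion db-01 to the second host: its rules are enforced there now,
# with no change to the policy itself.
hv1.vms.remove("db-01")
hv2.vms.add("db-01")
print(hv2.local_rules())          # both db-01 rules now active on hv2
```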

That's really good for East-West firewalling between servers within the data center. The big firewall vendors tend to have big honking boxes at the North-South end of the data center. Well, guess what? The bad guys are everywhere. Yes, you still need the North-South gateway firewall, but a lot of companies now are saying they need East-West firewalling, but to build that with physical appliances would be incredibly expensive. And that approach is also very static and brittle in the sense that you have to decide how much capacity you need at the beginning and build up a DMZ, and then if you surpass that capacity you have to go build another one, which will take months and is expensive.

Compare that to doing it in a network virtualization way. As I grow I'm adding more firewalling capacity, and it's in software so there are no more appliances to buy. And because it's built into the kernel of the hypervisor, it's incredibly high performance. And so now I can build what become, effectively, on-demand DMZs, DMZs that will scale out as my application needs scale, and I don't have to buy a whole bunch of CapEx equipment up front. I get to do it much more efficiently and then, as things change in the data center, as VMs move around, all of my firewall policies move along with them.

So it's very much an incremental opportunity that the current firewall vendors just really can't satisfy. They're not, per se, losing out on an opportunity. It's an opportunity that only really VMware is going to be able to get. And then what we do with folks like Palo Alto Networks, who we recently partnered with, is map through their management interfaces to integrate policies such that it will work together with the devices they have as well as our distributed firewall. So I view it as a complementary thing.

Besides firewalls, what other kind of services will you offer?

Load balancing, for one. Customers say, "I've got a lot of affinity for F5. You guys need to integrate with them." We've announced a partnership with F5, but we haven't announced the details of what we're doing; it's very similar to the Palo Alto arrangement. Over time you're going to see us become this network virtualization platform that will integrate with partners.

Let's switch to comparing and contrasting your approach to that being pursued by Cisco. How do you sum that up?

At the highest level there are things we completely agree on and then there are things we are in complete disagreement about. We agree on the problem. We agree on the benefit. So basically when Cisco came out with their ACI launch it was really good from our perspective because they validated everything we've been saying for years. And from a customer perspective the thing you're looking for, before any market is going to cross from the early adopters to the mainstream, is consistency of the problem statement and the benefit.

Cisco came out and said everything VMware has been saying is absolutely right. The network is the problem. We need operational efficiency and we need to deliver this agility. We need to be able to deliver applications faster. We need to be more like the Amazons of the world. Beautiful. So now a customer hears the exact same thing from us and Cisco. So now the customer says, "Great. I've got two alternatives."

But how we go about it couldn't be any more different. It's the complete opposite. We believe in the software-defined data center. We believe in the power of virtualization to enable that. We believe in the power of decoupling software from the physical infrastructure. Cisco came out and said, "We believe in the hardware-defined data center. We believe in the power of ASICs. We believe in the power of coupling the software to the hardware. We believe in coupling the software not just to any hardware but to our hardware. And oh, by the way, it is also our new hardware so you will need to rip out your existing infrastructure and replace it."

So it's very different. It effectively boils down to a profession of faith. What do you, as a customer, believe in? Do you believe in the power of software, that the power of virtualization is going to lead you to the Promised Land? Or do you believe in coupling to hardware and new ASICs?

And you know what, there will be people who believe in that. Cisco has been their partner for 25 years and has served them well. Right? But if you look through the history of IT, most of the time decoupling and abstraction in software win out. And I think we're starting to see that with the early adopters. What's exciting is people are picking their architectures right now. It's happening. Which is why Cisco had to come out and announce now, even though their products aren't available for a year. Because they saw architecture decisions being made.

Another truism in the history of IT is the need to evolve what you already have. Given the huge amount of dollars invested in network infrastructure, no one is going to rip it all out and start anew.

Absolutely. Which is why our story is so much better. A lot of people have Cisco, and you know what I tell them? They have great products. Keep them. You don't need to rip them out. Customers want a solution that is disruptive in its benefits but non-disruptive in its deployment. We can help them do what they want to do but with their existing infrastructure. Cisco's ACI, guess what that says? "Oh, no. You've got to buy new hardware. You're going to rip all that out and you're going to put in the new hardware with the ACI chip." That ain't going to go over well. Trust me.

You will probably protest, but there is a lot of industry chatter about the inherent limitations in your overlay approach. What are those limitations in your view?

If you look at what Cisco has done, it's a very similar architecture. They do exactly what we do; they use overlays, but they use proprietary headers in VXLAN and they tie it to their physical hardware. I get what they're doing. They make money when they sell hardware, so they have to tie it to the physical hardware. We look at it and say, "Not necessarily." I think it's good to give the customer choice.
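For context on what both sides are encapsulating with: the standard VXLAN header (RFC 7348) is just eight bytes carried over UDP port 4789, and the only meaningful field is a 24-bit VNI identifying the logical segment. A minimal sketch of building one:

```python
# Build a standard VXLAN header per RFC 7348: 8 bytes total,
# with the I-flag set and a 24-bit VNI (virtual network identifier).
import struct

def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    flags_word = 0x08 << 24       # I-flag: the VNI field is valid
    return struct.pack("!II", flags_word, vni << 8)

print(vxlan_header(5001).hex())   # 0800000000138900
```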

OK, but you didn't really answer the question about the limitations of the overlay approach. For example, you say rack and stack and leave it and we'll do the rest, but you still have infrastructure provisioning and optimization and management issues to deal with, which capital letter Software Defined Networking promises to address. 

I've been in networking for 25 years and I can tell you that vision will never happen. People will talk about that for another five years and then they'll grow tired of it. Watch. That will never happen because it's not needed. I mean, one of the things is there will be connections where there need to be connections and there will be interfaces between the overlay and the underlay, but all that is needed is a loose coupling. It does not need to be a hard coupling.

People talk about elephant flows and mice flows, where an elephant flow is a long-lived, high-volume flow that can stomp on the smaller mice flows and make for a bad SLA for those mice flows, and they say you need a tight coupling of the overlay and the underlay for that reason.

Hogwash. From inside the hypervisor we have a much better way to actually highlight those elephant and mice flows, and then we signal to the physical infrastructure, "This is an elephant flow, this is a mouse flow, go do what you need to do." And we'll be able to have that coupling not just for one set of hardware, but for everybody, whether it's Arista or Brocade or Dell, HP, Juniper, etc. We'll be able to work with anyone and actually do that handoff between the overlay and the underlay. So you can go through every single one of those examples and show that a generalized solution and a loose coupling is actually as good or better and gives you the flexibility of choice.
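As a sketch of that loose coupling, the hypervisor could classify flows by byte count and hand the underlay a standard hint such as a DSCP marking. The threshold and code points below are our assumptions, not anything VMware has specified.

```python
# Sketch of overlay-to-underlay signaling: classify flows in the
# hypervisor, then hint the physical network via a standard field
# (DSCP) instead of controlling its switches directly.
# Threshold and code points are illustrative assumptions.
ELEPHANT_BYTES = 10 * 1024 * 1024    # flows past ~10 MB count as elephants

flow_bytes = {}                      # (src, dst, proto, sport, dport) -> bytes

def classify(flow, nbytes):
    """Accumulate per-flow byte counts and return the flow's class."""
    flow_bytes[flow] = flow_bytes.get(flow, 0) + nbytes
    return "elephant" if flow_bytes[flow] >= ELEPHANT_BYTES else "mouse"

def dscp_for(cls):
    # Any vendor's underlay can act on this standard marking,
    # e.g. steering or pacing elephants to protect the mice.
    return 10 if cls == "elephant" else 0     # AF11 vs. best effort

flow = ("10.0.0.5", "10.0.1.9", 6, 49152, 5432)
cls = classify(flow, 64 * 1024 * 1024)
print(cls, dscp_for(cls))                     # elephant 10
```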

How do you do traffic engineering across the whole network though, if you're trapped in your world?

If you look at the management and the visibility of networking now, it's horrendous. So through network virtualization you actually improve the visibility because of our location in the hypervisor. As soon as everything went virtual, the physical network lost visibility because it wasn't in the right spot. The edge of the network has moved into the server, so you have to have a control point inside the vSwitch as the No. 1 starting point. And honestly, once you own that point, you have way more context about what's going on and what applications are being used and response time and everything else like that compared to if you're just looking at headers inside the physical network. When you're looking at a packet inside the network you don't have a lot of context.

How do the Amazons and the Facebooks and the Googles of the world do it today? They build software-defined data centers. They buy generalized physical infrastructure (most of them actually build their own), and they create high performance L3 switch fabrics that do one thing and one thing only -- they switch packets in a non-blocking manner from Point A to Point B. That's what the network is going to become in data centers. So you rack it once, you wire it once, and you never touch it again. That's what's going to happen.

A lot of people trot out those companies as examples, but they're such rarefied environments that they have precious little to do with real-world data centers.

You're right. No one else is like them, and they're specialized environments. But what if you could get close to that type of operational model?  Can you build a generalized IT infrastructure that gets you closer to how those guys build infrastructure? That's what VMware does. That's what we're going to enable people to do. And it is a journey, and you've got to be able to leverage existing infrastructure and then take baby steps along the path, because you can't just rip and replace. That's what we do. That's what virtualization does. That's why it's so exciting.

Do you have any limitations in terms of what you can achieve across multiple data centers? 

Right now most people are focused just inside the data center. But absolutely what we look at is a system view of VM-to-VM inside the data center and across data centers. So linking into MPLS backbones and then popping out the other side, creating a logical network that has VMs in one data center as well as VMs in the other that look like they're in the same logical network. That absolutely is what you're going to be able to get with network virtualization. And not just your other data centers, but external data centers that you use for disaster recovery and things like that.

What does all this greatness cost the user? How do you price your stuff?

It's priced per port. That's how networking people are used to buying. When you buy physical network gear you may buy it as a box, but basically you divide it out by 16 or 12 or whatever the number of ports is, so you've paid per port. The good news here is you're only paying for what you use, so you're not tied to some increment of 48 ports or whatever it happens to be. However many virtual ports you are using, that's what you pay for. Then as you grow you pay more.
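With made-up list prices, the arithmetic he's describing looks something like this: pay per virtual port actually in use, versus buying hardware capacity in fixed increments.

```python
# Illustrative arithmetic only; all prices are invented.
ports_in_use = 130

price_per_vport = 50                 # hypothetical $ per virtual port
software_cost = ports_in_use * price_per_vport        # pay only for use

hw_increment, price_per_box = 48, 6000                # hypothetical 48-port box
boxes_needed = -(-ports_in_use // hw_increment)       # ceiling division -> 3
hardware_cost = boxes_needed * price_per_box          # pay in fixed steps

print(software_cost, hardware_cost)  # 6500 vs 18000 (with 14 ports idle)
```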

So I've already paid for my physical network, now I have to pay more?

The thing is it's making your physical infrastructure better. It was the same with server virtualization. You already bought the server, so why are you buying server virtualization? Well, because you want to make that server better. You want to make it better in terms of CapEx. You want to make it better in terms of OpEx. So it's the same thing with the network.

You already bought a physical network and paid X for it. That's a sunk cost. But now when your favorite network vendor comes in saying you need to upgrade, because of me you can tell him "No thank you. I think the gear I have now is perfect. It's all I need. In fact, I can delay that upgrade for another three to four years. Thank you very much."

We've had many customers look at this as a CapEx deferment. They had budgeted a massive CapEx upgrade to get this type of functionality, but now they don't need to do that. They're putting their money into software instead of the physical infrastructure, which is a hell of a lot easier than ripping and replacing gear, and cheaper.

Do you have any reference points to show what kind of success you're having?

You're going to start seeing a lot more customer wins. People are making these architectural decisions now and we're winning them. So we're going to start marching these people out.

And from a revenue perspective, we have told financial analysts that we'll be material from a VMware perspective in 2015. We have customers in production. We're doing revenue now, lots of it, but when you're part of a $7-billion-per-year company, what is material? Right now the important thing is winning those architectural decisions. And I'm talking top financial companies, top service providers, top media companies and the leading enterprises. 2014 is when our trajectory is going to take us out across the chasm. It's going to happen.

And as soon as we do that the tornado will hit in 2015.