Software Defined Everything: Moving from Simple Virtualization to Business-Critical Services

Posted by Erin Dunne, Director of Research Services, Vertical Systems Group

Virtualization and software-defined networks should be a means to an end, not an end in itself. The goal of a data center, after all, is to deliver services (such as applications and microservices) to end users or to drive business processes. It doesn’t matter whether those applications run on physical or virtual servers, in Infrastructure-as-a-Service (IaaS), or in containers; the goal is to deliver those services.

Let’s take a step back. Of course it matters whether those applications run on physical or virtual servers; on-prem or in the cloud; on traditional or software-defined networks. The technology is still evolving, and so are the offerings of service providers, who are scrambling over one another to find new ways of efficiently turning rigid systems into virtualized, software-defined systems at every level.

According to Erin Dunne, Director of Research Services at Vertical Systems Group, it’s helpful to define the services that IT wants to deliver as customer-facing services: something an end-user or business customer finds valuable and is willing to pay for.

“That’s key,” she said, “because the enterprise customer is the beginning of a value chain. If the vendors, the service providers, develop OpEx-reducing technologies, and all those technologies do is reduce OpEx, that’s fine, but is it sustainable? Probably not. They need someone to pay for the applications.”

The complexity kicks in when those services become more dynamic, Dunne added, requiring orchestration across multiple servers, applications, databases, and even clouds. How do vendors provision those services on the back end? How do they deliver them rapidly? How do they bill for them?

“What are the most important drivers and challenges that you see when you deploy dynamic and orchestrated services? Pretty much by definition, dynamic orchestrated services have to be software enabled, because they just don’t work on legacy infrastructure,” she said. Such services need “faster service provisioning, rapid adjustments to existing services, and the ability to scale bandwidth quickly, sometimes instantaneously.”
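To make that last point concrete, here is a minimal sketch of what scaling bandwidth “sometimes instantaneously” could look like from the customer side, assuming a provider exposes a RESTful provisioning API. The base URL, endpoint, fields, and service ID below are invented for illustration; real orchestration APIs vary by provider.

```python
# Hypothetical example: requesting an on-demand bandwidth change through a
# provider's provisioning API. Endpoint, fields, and auth are invented for
# illustration; real orchestration APIs differ by provider.
import requests

API = "https://provisioning.example-carrier.net/v1"  # hypothetical base URL
TOKEN = "replace-with-a-real-credential"

def scale_bandwidth(service_id: str, new_mbps: int) -> dict:
    """Ask the orchestrator to resize an existing service in place."""
    resp = requests.patch(
        f"{API}/services/{service_id}",
        json={"bandwidth_mbps": new_mbps},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g., {"status": "provisioning", "eta_seconds": 30}

print(scale_bandwidth("svc-12345", 500))
```

The interesting part is what such a call implies on the provider side: no truck roll, no manual circuit re-engineering, just software adjusting software.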

With deployment, Dunne pointed to questions about “how do you orchestrate, not only over your own network, but over multiple networks, including access networks, long-haul networks, data center providers, all of those types of service providers?” There are similar questions, she explained, about OSS and BSS (operations and business support) systems. “If you can’t bill for it, you can’t deploy it. We’re here to make some money!”

For older companies, Dunne noted, there are “legacy infrastructures and legacy services. How do they integrate these new software-enabled services with what they already have?”

Assume the Network Is Virtualized

Service delivery requires application controls, and effective application controls change how networks operate, said Nick King, Vice President, Cloud Solutions at VMware. “We have to assume that most applications, particularly cloud-oriented applications, will assume the network is virtualized at some level. If the applications are containerized, the control will look at what sort of container runtime system the applications are running on, and that’s usually presented in some sort of virtualized networking.”

“As we look at traditional systems where applications are sitting inside our own data centers, can those applications span across both sides,” that is, to the cloud, King asked. “That’s the big challenge for us. We’ve seen such a quick movement from software-defined to really-broadly-software-defined, like software-defined data centers and networking running into the cloud. That’s really going to shift in a very short amount of time.”

King also mentioned developing new cloud-native applications – which are often delivered as services. “It used to be that you’d build a service and deploy it: The service would live in one location. Today what we’re seeing with Kubernetes and Docker is that applications are living anywhere, and that networking is also changing with that. We have to make the network abstraction happen as fast as possible across on-prem, private clouds, and public clouds on Azure, Google, etc.”
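King’s point about abstraction is easiest to see in Kubernetes itself: a Service gives an application one stable virtual address no matter which nodes, on-prem or in a public cloud, its pods land on. Here is a minimal sketch using the official Kubernetes Python client; the application name and selector are hypothetical.

```python
# A small illustration of network abstraction in Kubernetes, using the
# official Python client (pip install kubernetes). The Service below exposes
# whatever pods match the selector, wherever the scheduler has placed them.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="billing-api"),  # hypothetical app
    spec=client.V1ServiceSpec(
        selector={"app": "billing-api"},  # matches pods wherever they run
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```

Clients address the Service by name; the virtualized networking layer, not the application, tracks where the pods actually are.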

Align Service Definitions to Business Objectives

It’s all about business objectives, said Jeff Baher, Senior Director, Product & Technical Marketing, at Dell. “There is a tendency, when we look at the technologies, to start from the bottom up. With SDN, there was an early focus really on the networking layer, then applying the software, building it up, and software-defined storage. Ultimately you get to this top of the stack.”

However, he said, the end goal is software-defined businesses. That’s the best place to start; then figure out the technologies needed to create that software-defined business. Why don’t we hear more about it? “It is probably deeply proprietary. If you look at an Uber, or an Ancestry, or Facebook, the software-defined businesses are deeply proprietary.”

Baher explained that in order for software-defined resources to “really gain traction and roots, they need to be deeply aligned to what the business objectives are. That’s key to understanding which technologies will matter and how they then get assembled to ultimately drive a business.”

The Drive to Connect with Public Clouds

Connectivity matters. There’s no point in creating wonderful services and applications if users can’t get to them, said Sunit Chauhan, Senior Director of Product Management with Nuage Networks, a Nokia company. “One of the trends that we’ve seen, whether it’s SDN 1.0 or SDN 2.0, is this movement toward the public clouds.”

“There are certain trends that the networking industry, vendors in this industry, can’t control,” he continued. “We are not going to be able to control where users reside, and we are not going to be able to control where applications reside. The challenge for us is to provide that secure, seamless connectivity on demand, in an OpEx/CapEx model that spans all of these different domains. If we are building solutions that are siloed into domains, then we’re not really solving the problem.”

What about aligning software-defined services with a profitable business model? “What is really important is that we have that abstraction layer at the top, and provide a single abstraction northbound to these different systems, because you’re not going to control where the applications reside. You’re not going to control what applications users are running. That becomes an important aspect” in making a business, Chauhan explained.
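As a toy sketch of what a single northbound abstraction can mean in practice: callers program one interface, and whether a connection is stitched through a data-center fabric or a WAN stays hidden beneath it. The controller classes and methods below are invented for illustration, not modeled on any vendor’s product.

```python
# A minimal sketch of one northbound interface over multiple network domains.
# Everything here is hypothetical; the point is that the caller never touches
# domain-specific controllers directly.
from abc import ABC, abstractmethod

class DomainController(ABC):
    @abstractmethod
    def connect(self, src: str, dst: str, mbps: int) -> str: ...

class DataCenterFabric(DomainController):
    def connect(self, src: str, dst: str, mbps: int) -> str:
        return f"overlay tunnel {src}->{dst} at {mbps} Mbps"

class WanController(DomainController):
    def connect(self, src: str, dst: str, mbps: int) -> str:
        return f"managed WAN path {src}->{dst} at {mbps} Mbps"

class NorthboundApi:
    """Single entry point; domain selection is hidden from the caller."""
    def __init__(self, controllers: dict[str, DomainController]):
        self.controllers = controllers

    def connect(self, domain: str, src: str, dst: str, mbps: int) -> str:
        return self.controllers[domain].connect(src, dst, mbps)

api = NorthboundApi({"dc": DataCenterFabric(), "wan": WanController()})
print(api.connect("wan", "branch-12", "cloud-east", 100))
```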

Abstraction Aids Large-Scale Deployment

“When you look at software abstraction across servers, storage, and networking, that is fundamental for your private cloud deployments as well as for hybrid and public clouds,” said Gregg Holzrichter, Chief Marketing Officer at Big Switch Networks.

“We have been able to develop that type of abstraction as packaged software, and allow it to be deployed across any organization,” he continued. “When you’re looking at large organizations, whether it’s an enterprise or a service provider offering pay-to-play, it’s all about efficiency and automation. It’s all about the APIs and programmability, which have been sorely lacking in networking silos until recently.”
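A rough sketch of the programmability Holzrichter describes: rather than configuring switches box by box, an operator declares intent once against a fabric controller’s API and lets the controller push it everywhere. The controller URL and endpoint below are hypothetical, not Big Switch’s actual API.

```python
# Hypothetical contrast between box-by-box configuration and a programmable
# fabric. The controller URL and endpoint are invented for illustration.
import requests

CONTROLLER = "https://fabric-controller.example.net/api/v1"  # hypothetical

# Box-by-box (the old way): one CLI session per switch, e.g.
#   for switch in inventory: ssh(switch).run("vlan 42; name pci-segment")
# Slow, error-prone, and hard to audit at scale.

# Programmable fabric: declare the segment once; the controller pushes the
# configuration to every switch that needs it.
resp = requests.post(
    f"{CONTROLLER}/segments",
    json={"name": "pci-segment", "vlan": 42, "members": ["leaf-*"]},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```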

Moving to the Pay-to-Play Model

Enterprise customers like the cloud model for paying for resources consumed, rather than building data centers, said Mike Frane, Vice President, Product Management, for Windstream Communications. “We found that as customers are moving to this pay-to-play model, as they move more of their applications and their capabilities to the cloud, they don’t typically think of the entire value chain.”

“Containerization all happens in the cloud, in Amazon Web Services, in Azure, in other locations,” he explained. “But customers don’t think about how data is going to get down to each of their individual endpoints, which may be running on a T1 or on an MPLS network,” or even on DSL or cable.

So, Frane continued, “As customers do these evaluations, they have to ask, ‘Is my network ready? Is my cloud ready? Is my strategy ready?’ But the network may not be ready. In some cases, customers have to augment their network, or make changes to different technologies or different access types. So, when they look at the pay-to-play cloud model, they need to look at it holistically, end to end.”

Put the Controls In Before Services Can Fail

There’s a cautionary tale here, warned Russ Currie, Vice President of Enterprise Strategy at Netscout: any migration to software-defined everything has to be robust, and failure is not an option when implementing critical business services. “We kind of get enamored with the idea of ‘fail fast,’ but that’s not acceptable when you’re putting production applications out there and your users are dependent upon those applications to get their jobs done. That just isn’t going to fly.”

“Providing the kind of visibility and control to handle the complexity that we’re adding is absolutely required,” he added. “That’s one of the bigger challenges that we face as we try to move so fast in rolling out new services.”

Virtualization Is Eating the World

“Software is eating the world, and if you look at some of the incumbent vendors, this means disaggregation,” concluded Big Switch’s Holzrichter. “Just as VMware disrupted the x86 market 15 years ago with a smart software layer, a smart software layer can do the same thing and disrupt networking. Having that concept of managing your entire network through a single pane of glass, not box by box, is the underpinning.”

From software-defined networks to software-defined wide-area networks to software-defined data centers: Those are the means. Software-defined businesses, more agility, more profits: That’s the objective. Software-defined everything: That’s the answer.
