
Data centers: The future is software-defined

There's plenty of demand for IT automation among enterprise tech decision-makers, but a lot of application and infrastructure modernisation is needed before digital transformation can really take off.
Written by Charles McLellan, Senior Editor
dc-automationintro-header.jpg
Image: PhonlamaiPhoto, Getty Images/iStockphoto

The role of the traditional on-premises data center has been changing for years, as enterprise workloads have made an exodus to the cloud, and more recently to the edge of the network, in pursuit of benefits including cost savings, better performance and greater flexibility. According to analyst firm Gartner, 10 percent of enterprises will have shut down their traditional data center by the end of 2018 -- a figure that's expected to rise to 80 percent by 2025. Cisco, meanwhile, predicts that by 2021, 94 percent of workloads and compute instances will be processed by cloud data centers, with just six percent processed by traditional data centers.

Still, for the next few years at least, there will be plenty of enterprise data centers housing workloads that, for whatever reason (technical, economic, security or compliance, for example), have not migrated to the cloud. A big task now facing enterprises is to assess the cloud-readiness of their on-premises application portfolios, and discover how best to make their data centers more efficient and agile engines of digital transformation by maximising the level of automation.

Some legacy workloads may remain on-premises, running on traditional IT infrastructure. Many will be modernised in various ways to run on private or hybrid cloud infrastructure, while others may exit the enterprise data center altogether to run on AWS, Microsoft Azure, Google Cloud or other hyperscale public clouds. Placing workloads on the most appropriate infrastructure, and managing them efficiently wherever they end up, is the strategic roadmap for many CIOs.

Automation will play a key role in all this, and will be greatly helped by the advent of the software-defined data center (SDDC) in which virtualised compute, storage and networking resources are orchestrated via a layer of management software.

Here's a roundup of recent surveys and research reports that address some of these topics.

Data center trends

uptimereport-cover.png

2018 Data Center Industry Survey Report (Uptime Institute)

The Uptime Institute's 8th annual data center survey examined the key trends driving the IT infrastructure industry, drawing on data provided by 900 data center operators and IT practitioners worldwide. The survey population had a roughly 70/30 split between enterprise IT managers and service providers.

A third (31%) of respondents reported a data center outage in the past year, up from 25 percent in 2017, with 80 percent considering that their most recent outage was preventable -- the result of a combination of human and management error, said Rhonda Ascierto, vice-president of research at the Uptime Institute, in the accompanying webinar. The leading causes were on-premises power failure (33%), network failure (30%), IT/software error (28%) and on-premises non-power failure (12%). Significantly, 31 percent of respondents had a failure at a third-party provider -- where the level of visibility and control is generally lower.

The average downtime incident lasted 1-4 hours, while just over 10 percent of outages lasted for 24 hours or more. Although 43 percent of respondents didn't calculate the cost of a significant outage (perhaps due to the multiplicity of factors involved), half estimated it at less than $100,000, while eight respondents lost over $10 million. "When downtime does happen, it certainly hurts," said Ascierto.

Hybrid IT setups -- combinations of on-premises, public cloud, SaaS and/or colocation -- are becoming increasingly common, and can result in multi-site IT failures due to interactions between applications, data and services. In the Uptime Institute survey, 24 percent of respondents reported incidents that impacted services delivered from multiple data centers. "Outages are becoming increasingly complex because of the increasing interdependencies in the way IT is architected," Ascierto said.

The survey asked whether a hybrid IT strategy, with workloads spread across on-premises, colocation and cloud data centers, made their organisation more resilient. Although 61 percent said that it did, 30 percent said it made them less resilient (and 9% didn't know). "What this data says is, there is a significant cost with hybrid strategies, in terms of increased management and often integration complexity," said Ascierto.

Ownership of the resilience issue was in the hands of a single department head or executive in about half (49%) of respondents' organisations. This is usually the CIO or CTO, said Ascierto, but 7 percent of the survey population had a CRO -- a chief risk or resiliency officer.

Respondents adopted a range of methods to achieve resiliency in hybrid IT environments: backing up to a secondary site (68%); using near-real-time replication to a secondary site (51%) and two or more data centers (40%); utilising disaster recovery as a service (42%); and using cloud-based high-availability services (36%).

A key finding from the Uptime Institute survey was that Data Center Infrastructure Management (DCIM) software now seems to be a mainstream technology.

DCIM software provides information about data center assets, resource use and operational status, and when combined with IT management data "can deliver insight from the lowest levels of the facility infrastructure to the higher echelons of the IT stacks for end-to-end IT service management taking into account the availability and resiliency of physical data center resources (power, cooling, space, and networking)," said the report.

Over half (54%) of survey respondents said they had purchased commercial DCIM software of some sort, while another 11 percent had deployed a homegrown solution. The most common motivations for deploying DCIM were capacity planning (76%) and power monitoring (74%). Generally, the survey found that DCIM software delivered on expectations:

uptimedcim.png
Image: 2018 Data Center Industry Survey Report

Demand for IT automation

spiceworksreport-header.png

The 2019 State of IT Survey / Future Workplace Tech (Spiceworks)

Spiceworks, the professional network for the IT industry, collated responses from 780 business technology buyers from organisations in North America and Europe for its 2019 State of IT report. The first part of this report, which is based on survey data collected in July 2018, covered IT budgets, while the second part examined future workplace tech.

IT automation was the top-ranked emerging technology in Spiceworks' survey, with 41 percent of IT decision-makers expecting it to have the biggest impact on their business. Building blocks for software-defined data centers, such as serverless computing, hyperconverged infrastructure and container technology, were also cited, although these technologies were outranked by IoT, Gigabit Wi-Fi and artificial intelligence:

spiceworks-business-impact.png
Data: Spiceworks 2019 State of IT report / Chart: ZDNet

Spiceworks found greater expectation of business impact from IT automation among North American IT decision-makers (47%) than among their European counterparts (35%), while it was highest (50%) in companies with 1,000 to 4,999 employees. Large enterprises (5,000+ employees) actually had the lowest expectation of business impact from IT automation (37%), although the report didn't speculate on the reason for this. Financial services was the leading industry sector currently using IT automation technology (43%), with consulting bringing up the rear (28%):

spiceworks-sector-adoption.png
Data: Spiceworks 2019 State of IT report / Chart: ZDNet

There's clearly plenty of impetus for automation among IT decision-makers. This will not only deliver increased operational efficiency for existing business processes, but also make it quicker and easier to bring new business ideas to fruition. But to make this happen, organisations need to examine their application portfolios, and understand how best to modernise them.

Modernising enterprise applications for the cloud

skytapreport-cover.png

Realities of Enterprise Modernization: Urgency Mounts for a Clear Path to Cloud (451 Research & SkyTap)

For the June 2018 Realities of Enterprise Modernization report, analyst firm 451 Research and cloud provider SkyTap surveyed 450 enterprise IT decision makers in the US, Canada and the UK. The most prevalent employee count was 1,000-4,999 (41%), while the leading role among respondents was VP/Director of IT operations (33%).

The 451 Research/SkyTap report outlined the situation many enterprises find themselves in:

"Despite industry attention on the cloud, traditional, on-premises applications and infrastructure offer advantages for security, control and meeting internal expectations. Nevertheless, they come with their own significant challenges, including slow delivery of new capabilities, lack of knowledge, experience or skills to manage these applications and underlying infrastructure, and cost issues."

The survey found that nearly one in five (19%) organisations had over three-quarters of their applications still running on traditional on-premises infrastructure, while over half (55%) put their 'legacy' application count at between 51 and 75 percent.

skytapon-prem-apps.png
Data: 451 Research & SkyTap / Chart: ZDNet

Respondents cited a range of applications -- CRM, ERP, data and analytics, web and media (consumer-facing), email and collaborative -- as unsuitable for the cloud.

Even so, there was evident determination to move legacy application portfolios to cloud platforms, with 20 percent of respondents planning to migrate or modernise over three-quarters of their apps and nearly half (47%) intending to move half to three-quarters to the cloud in some way.

"Although organizations increasingly want cloud capabilities in traditional, core business applications, they are still unsure of how to achieve that most effectively and efficiently," said the report.

The 451 Research/SkyTap survey looked in detail at enterprise strategies for cloud migration -- rewriting, refactoring, replacing with SaaS, 'lift and shift' or 'lift and extend'. Refactoring -- changing application code before or as it's migrated to cloud -- was the leading strategy overall, followed by rewriting from scratch and replacing with third-party SaaS:

skytapmodernisation.png
Data: 451 Research & SkyTap / Chart: ZDNet
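
To make 'refactoring' concrete, the snippet below is a minimal, hypothetical sketch of one such change: a local-disk write is replaced with a call to cloud object storage (here AWS S3 via the boto3 library; the bucket name and file paths are purely illustrative), so the application no longer assumes it owns a particular server's filesystem:

    import boto3  # AWS SDK for Python; assumes credentials are already configured

    # Before refactoring: the application assumes a local filesystem on its own server.
    def save_report_local(report_id: str, data: bytes) -> None:
        with open(f"/var/reports/{report_id}.csv", "wb") as f:
            f.write(data)

    # After refactoring: the same data goes to durable object storage, so the
    # application no longer depends on any particular machine's disk.
    s3 = boto3.client("s3")

    def save_report_cloud(report_id: str, data: bytes) -> None:
        s3.put_object(Bucket="example-reports-bucket",   # illustrative bucket name
                      Key=f"reports/{report_id}.csv",
                      Body=data)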

There was variation among industry sectors, with High Tech most likely to rewrite traditional applications from scratch, Retail and CPG (Consumer & Packaged Goods) most likely to replace with third-party SaaS, Leisure/Hospitality most likely to lift-and-shift, and Healthcare most likely to lift-and-extend. "The differences in migration approach within and across industries underscores the confusion among technology leaders on the best route to cloud," said the report.

The prevalent cloud deployment model in the 451 Research/SkyTap survey was private cloud (70%), followed by hosted private cloud (56%). Despite the hype surrounding public cloud providers, usage of a single hyperscale public cloud was cited by only 48 percent of respondents, with multiple public cloud usage bringing up the rear at 19 percent:

skytapcloud-type.png
Data: 451 Research & SkyTap / Chart: ZDNet

When it came to software development methodologies, DevOps was the clear leader overall, followed by Agile, Waterfall, Six Sigma, Twelve Factor and ITIL.

The 451 Research/SkyTap survey found that, as well as prioritising the recruitment and retention of staff with the skills for modernising and migrating existing applications from on-premises to the cloud, enterprises were also seeking cloud-native skills. "This means enterprise technology leaders must clearly understand their organization's needs so they can effectively balance the skills demands of existing, mission-critical applications that run the business with the disruption of cloud-native innovation focused on the future," said the report, which also advised enterprises to "broaden their thinking beyond a few hyperscale providers".

The report concluded that a holistic approach is required, with a focus on modernising infrastructure, processes, people and architecture as well as applications:

"No matter the cloud strategy, technology leaders must prioritize self-service and the ability for developers and operations teams to rapidly access secure, sanctioned infrastructure with minimal provisioning to streamline change management and eliminate deployment bottlenecks. Enterprises also need to plan to protect against the unintended consequences of workload degradation or even failure. Applications and workloads should be crafted to easily and readily shift to alternative execution venues when needed."

Automation in private clouds

The benefits of automation in private cloud data centers were evident in a 2017 study by 451 Research, sponsored by VMware. Among 150 IT decision-makers in the US who were using private and public clouds, 41 percent said their private cloud was cheaper than public cloud options on a per-VM basis. Control over data was the number-one reason for selecting a private cloud (71%), followed by cost efficiency (54%).

When asked to choose the single most important factor contributing to savings using private cloud, the top three were capacity-planning tools (17.9%), favourable software licensing terms (17.2%) and automation tools (16.6%). Respondent interviews revealed some specific benefits of automation: an 80 percent reduction in full-time employee expenses; a 25 percent increase in productivity; a 30 percent reduction in downtime; and a 30 percent efficiency improvement in resolving incidents.

Even though 59 percent of respondents were paying more for their private cloud compared to public cloud, only three percent incurred more than a 50 percent premium. The report speculated that relatively minor premiums might be justified compared to the potentially high cost of re-architecting and recoding some applications for migration to the public cloud.

The software-defined future

forresterreport-cover.png

The Software-Defined Data Center Comes Of Age (Forrester)

Real-world enterprise IT remains a complex mix of traditional, private cloud and public cloud platforms that can make effective automation difficult. In a November 2017 report, analyst firm Forrester examined the evolving state of what's widely seen as the solution: the software-defined data center, or SDDC.

What is a software-defined data center? Here's Forrester's definition:

"An SDDC is an integrated abstraction layer that defines a complete data center by means of a layer of software that presents the resources of the data center as pools of virtual and physical resources and allows their composition into arbitrary user-defined services."

Evolving from converged infrastructure ('data center in a box') solutions in the early 2000s, the SDDC was pioneered by public cloud infrastructure-as-a-service offerings based on containers and virtual machines, standardised OS images and automated provisioning. Gaps in the evolving SDDC model identified by Forrester included limited network virtualisation and legacy applications either resistant to virtualisation or required by their owners to run on dedicated physical resources. "To bridge this gap, vendors need a more integrated abstraction layer that includes physical resources. This is the opportunity inherent in the full definition of a physically inclusive SDDC," said the analyst firm.

Still, the value of the current SDDC approach is widely recognised. In a 2016 Forrester survey of 1,065 global infrastructure technology decision-makers, 38 percent reported that they had implemented (11%) or were implementing (27%) an SDDC, while a further 21 percent were planning to implement one within 12 months:

forrestersddc-adoption.png
Image: The Software-Defined Data Center Comes Of Age (Forrester)

"We're still at the early stages of the maturity life cycle, but I&O [infrastructure and operations] pros already see the value and have been covering the solution gaps with automation," said Forrester. "The challenge is that the market remains unclear about the future nature of the SDDC, although it's attracted to the promise to virtualize the data center to deliver velocity to the business."

Virtual and physical resource pools are the building blocks of the SDDC, whose primary interface is a service design layer that specifies the workloads and services that will run on it, said Forrester. Other components include an interface to allow deployment and administration of services, while deployment and runtime automation accommodates workload changes, high availability, disaster recovery operations and business continuity:

forrestersddc-architecture.png
Image: The Software-Defined Data Center Comes Of Age (Forrester)

The above diagram represents an idealised architecture, but Forrester noted that SDDC remains "a complex soup of technology and control -- and isn't standardised". Standard runtime components include: hypervisors and virtual machines from multiple vendors; containers (e.g. Docker) and their management environments (e.g. Kubernetes); private cloud solutions such as OpenStack; external storage systems and software-defined storage (SDS) overlays; and the ability to program underlying physical network components along with the virtual overlay network.
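
As a simplified illustration of what driving these components through software (rather than manual configuration) looks like, the sketch below uses the official Kubernetes Python client to declare and create a small deployment. The deployment name, container image and replica count are all illustrative, and a working kubeconfig is assumed:

    from kubernetes import client, config

    # Assumes a valid kubeconfig (e.g. ~/.kube/config) pointing at a cluster.
    config.load_kube_config()
    apps = client.AppsV1Api()

    # The desired state is expressed as data rather than as manual steps.
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),          # illustrative name
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="nginx",                      # illustrative image
                        ports=[client.V1ContainerPort(container_port=80)],
                    ),
                ]),
            ),
        ),
    )

    # The orchestrator continuously reconciles the cluster towards this declared state.
    apps.create_namespaced_deployment(namespace="default", body=deployment)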

"Incorporating, presenting, and deploying dedicated physical objects with some level of identity virtualization are key capabilities missing in most traditional virtualized management environments. This weakness remains an architectural wall between SDDC products and legacy virtualized infrastructure management environments," said Forrester.

In the world of the software-defined data center, the 'engineer' will be more of a software developer, probably in a DevOps team, than a maintainer of physical IT infrastructure. Successful integration of existing converged infrastructure products requires the SDDC to expose an API that allows these systems to publish their capabilities and configurations and grant permission to the SDDC layer for selected management operations, said Forrester.

"The availability of an API for third-party developers is critical to efforts to correlate the visible workloads with the physical asset management layers, such as those embodied in modern data center infrastructure management (DCIM) solutions," the analyst firm added.

The next evolutionary stage is what Forrester calls a Composable Infrastructure System, or CIS, which should have the following characteristics, said the analyst firm:

    Composability. From a pool of shared physical resources (CPU and memory, physical storage, and network connections), the CIS software creates an abstracted physical system, which is then presented to the external environment.

    Strict equivalency. These composed systems are indistinguishable from a standard physical system from the perspective of all software -- both local and external.

    Noninterfering composition. These systems can be composed and decomposed, with resources returning to the pools without affecting any running systems.

    API-based and model-based management. Applications and end users can access the CIS management layer through a set of published APIs, and the desired-state infrastructure is defined in software models. Command-line and GUI access is optional, but API access and software models represent the true future (the sketch below illustrates this).
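
Building on the earlier SDDC sketch, the hypothetical example below focuses on the last two characteristics: the desired system is defined as a software model and submitted through a published API, and decomposing it returns its resources to the shared pools without disturbing other systems. None of these names correspond to a real CIS product API:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SystemModel:
        """Desired-state model of a composed system (all fields illustrative)."""
        name: str
        cpus: int
        memory_gb: int
        storage_tb: int

    class ComposableInfrastructureAPI:
        """Stand-in for a published CIS management API (hypothetical)."""

        def __init__(self, cpus: int, memory_gb: int, storage_tb: int):
            self.free = {"cpus": cpus, "memory_gb": memory_gb, "storage_tb": storage_tb}
            self.systems = {}

        def compose(self, model: SystemModel) -> str:
            """Carve an abstracted physical system out of the shared pools."""
            needs = {"cpus": model.cpus, "memory_gb": model.memory_gb,
                     "storage_tb": model.storage_tb}
            if any(self.free[k] < v for k, v in needs.items()):
                raise RuntimeError("insufficient free resources")
            for k, v in needs.items():
                self.free[k] -= v
            self.systems[model.name] = needs
            return model.name

        def decompose(self, name: str) -> None:
            """Noninterfering decomposition: resources return to the pools."""
            for k, v in self.systems.pop(name).items():
                self.free[k] += v

    # Illustrative usage: compose a database host from the pools, then release it.
    cis = ComposableInfrastructureAPI(cpus=256, memory_gb=4096, storage_tb=200)
    cis.compose(SystemModel("erp-db", cpus=32, memory_gb=512, storage_tb=20))
    cis.decompose("erp-db")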

hpe-synergy.jpg
HPE Synergy: the first Composable Infrastructure System (CIS).
Image: Hewlett-Packard Enterprise

At the time of Forrester's report (November 2017), only one product met this definition of a CIS -- Synergy, from Hewlett-Packard Enterprise (HPE). However, the analyst firm noted that "the concept is so powerful and such a logical evolution that we're confident in predicting a wave of similar products either shipping or announcing in 2018 from Cisco, Dell EMC, and possibly Huawei." Sure enough, Dell announced its Kinetic Infrastructure at Dell EMC World in May 2018 -- a move that prompted an ironic welcome from HPE.

"The advent of CIS and maturation of programmable infrastructure set the stage for a workable SDDC," said Forrester. "They allow easier integration of legacy applications that don't easily factor into virtualized infrastructure abstractions."

Outlook

The ideal situation for enterprise IT may be a 100 percent cloud-native application portfolio built using containers and microservices, with developers and operations teams having on-demand access to compute, networking and storage resources in the most appropriate locations for their mix of workloads. But most enterprises are still a long way from this ideal position, and the success of their digital transformation efforts may depend largely on how quickly and effectively they can modernise their applications, and house them alongside any remaining legacy apps in CIS-based software-defined data centers.

MORE ON IT AUTOMATION

Enterprise automation starts in IT departments, but gradually pays off elsewhere
A survey of 705 executives released by Capgemini finds that enterprise-class automation is still a rarity, but is taking root within IT functions.

Looking to compare all different types of automation? Now you can
Organizations must vet and compare different types of automation based on business needs, but there was no way to do this -- until now. Forrester's Chris Gardner shares how.

What Kubernetes really is, and how orchestration redefines the data center
In a little over four years, the project born from Google's internal container management efforts has upended the best-laid plans of VMware, Microsoft, Oracle, and every other would-be king of the data center. So just what is it that changed everything?

True private cloud isn't dead: Here are the companies leading the charge (TechRepublic)
Companies like VMware and Dell EMC are heading up the private cloud market, according to Wikibon research.
