
What is edge computing? Here’s why the edge matters and where it’s headed

A few hundred small servers, scattered throughout the country and linked by fiber optic cable, should theoretically be capable of providing the same value to customers as one hyperscale data center. Just because something can be done, however, should it?
Written by Scott Fulton III, Contributor

At the edge of any network, there are opportunities for positioning servers, processors, and data storage arrays as close as possible to those who can make best use of them. Where you can reduce the distance, the speed of signal propagation being essentially constant, you minimize latency. A network designed to be used at the edge leverages this minimal distance to expedite service and generate value.
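To see why shaving distance matters, consider a minimal back-of-the-envelope sketch in Python. The distances are invented for illustration, and the only physical assumption is the familiar one that signals in optical fiber propagate at roughly two-thirds of the speed of light in a vacuum; real-world latency adds routing, queuing, and processing delays on top of this floor.

    # One-way propagation delay over fiber: a lower bound on latency.
    # Distances are illustrative; signals in fiber travel at roughly 2/3 c.
    SPEED_OF_LIGHT_KM_PER_S = 299_792
    FIBER_VELOCITY_FACTOR = 0.67

    def propagation_delay_ms(distance_km: float) -> float:
        return distance_km / (SPEED_OF_LIGHT_KM_PER_S * FIBER_VELOCITY_FACTOR) * 1_000

    print(round(propagation_delay_ms(1_500), 2))  # ~7.47 ms to a distant hyperscale region
    print(round(propagation_delay_ms(50), 2))     # ~0.25 ms to a nearby edge site

A round trip doubles those figures, and every intermediate hop adds more, which is why moving servers closer pays off before any software optimization is even attempted.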

In a modern communications network designed for use at the edge — for example, a 5G wireless network — there are two possible strategies at work:

  • Data streams, audio, and video may be received faster and with fewer pauses (preferably none at all) when servers are separated from their users by a minimum of intermediate routing points, or "hops." Content delivery networks (CDNs) from providers such as Akamai, Cloudflare, and NTT Communications are built around this strategy.
  • Applications may be expedited when their processors are stationed closer to where the data is collected. This is especially true for applications for logistics and large-scale manufacturing, as well as for the Internet of Things (IoT), where sensors or data-collecting devices are numerous and highly distributed.

Depending on the application, when either or both edge strategies are employed, these servers may actually end up on one end of the network or the other. Because the Internet isn't built like the old telephone network, "closer" in terms of routing expediency is not necessarily closer in geographical distance. And depending upon how many different types of service providers your organization has contracted with — public cloud applications providers (SaaS), apps platform providers (PaaS), leased infrastructure providers (IaaS), content delivery networks — there may be multiple tracts of IT real estate vying to be "the edge" at any one time.

Inside a Schneider Electric micro data center cabinet (photo: Scott Fulton)

The current topology of enterprise networks

There are three places most enterprises tend to deploy and manage their own applications and services:

  • On-premises, where data centers house multiple racks of servers, where they're equipped with the resources needed to power and cool them, and where there's dedicated connectivity to outside resources
  • Colocation facilities, where customer equipment is hosted in a fully managed building where power, cooling, and connectivity are provided as services
  • Cloud service providers, where customer infrastructure may be virtualized to some extent, and services and applications are provided on a per-use basis, enabling operations to be accounted for as operational expenses rather than capital expenditures

The architects of edge computing would seek to add their design as a fourth category to this list: one that leverages the portability of smaller, containerized facilities with smaller, more modular servers, to reduce the distances between the processing point and the consumption point of functionality in the network. If their plans pan out, they seek to accomplish the following:

Potential benefits

  • Minimal latency. The problem with cloud computing services today is that they're slow, especially for artificial intelligence-enabled workloads. This essentially disqualifies the cloud for serious use in deterministic applications, such as real-time securities markets forecasting, autonomous vehicle piloting, and transportation traffic routing. Processors stationed in small data centers, closer to where their processes will be used, could open up new markets for computing services that cloud providers haven't been able to address up to now. In an IoT scenario, where clusters of stand-alone, data-gathering appliances are widely distributed, having processors closer to even subgroups or clusters of those appliances could greatly improve processing time, making real-time analytics feasible on a much more granular level.
  • Simplified maintenance. For an enterprise that doesn't have much trouble dispatching a fleet of trucks or maintenance vehicles to field locations, micro data centers (µDC) are designed for maximum accessibility, modularity, and a reasonable degree of portability. They're compact enclosures, some small enough to fit in the back of a pickup truck, that can support just enough servers for hosting time-critical functions and can be deployed closer to their users. Conceivably, for a building that presently houses, powers, and cools its data center assets in its basement, replacing that entire operation with three or four µDCs somewhere in the parking lot could actually be an improvement.
  • Cheaper cooling. For large data center complexes, the monthly cost of electricity used in cooling can easily exceed the cost of electricity used in processing. The ratio of a facility's total power draw to the power consumed by its IT equipment alone is called power usage effectiveness (PUE), and it has at times served as the baseline measure of data center efficiency (although in recent years, surveys have shown fewer IT operators know what this ratio actually means). Theoretically, it may cost a business less to cool and condition several smaller data center spaces than it does one big one. Plus, due to the peculiar ways in which some electricity service areas handle billing, the cost per kilowatt may go down across the board for the same server racks hosted in several small facilities rather than one big one. A 2017 white paper published by Schneider Electric [PDF] assessed all the major and minor costs associated with building traditional and micro data centers. While an enterprise could incur just under $7 million in capital expenses to build a traditional 1 MW facility, it would spend just over $4 million to build out 200 micro facilities of 5 kW each (the same 1 MW of total capacity; see the back-of-the-envelope sketch after this list).
  • Climate conscience. There has always been a certain ecological appeal to the idea of distributing computing power to customers across a broader geographical area, as opposed to centralizing that power in mammoth, hyperscale facilities, and relying upon high-bandwidth fiber optic links for connectivity.  The early marketing for edge computing relies upon listeners' common-sense impressions that smaller facilities consume less power, even collectively.  But the jury is still out as to whether that's actually true.  A 2018 study by researchers from the Technical University of Kosice, Slovakia [PDF], using simulated edge computing deployments in an IoT scenario, concluded that the energy effectiveness of edge depends almost entirely upon the accuracy and efficiency of computations conducted there.  The overhead incurred by inefficient computations, they found, would actually be magnified by bad programming.
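To make the two figures in the cheaper-cooling item above concrete, here is a minimal back-of-the-envelope sketch in Python. The dollar amounts are this article's approximations of the Schneider Electric white paper's findings, rounded for simplicity, and the PUE helper simply encodes the standard definition of the metric.

    # Power usage effectiveness: total facility power divided by IT load power.
    # A PUE of 1.0 would mean every watt drawn goes to the IT equipment itself.
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        return total_facility_kw / it_load_kw

    # The cost comparison, with amounts rounded for illustration: roughly
    # $7 million for one traditional 1 MW build versus roughly $4 million
    # for 200 micro data centers of 5 kW each.
    micro_count, micro_kw = 200, 5
    total_micro_kw = micro_count * micro_kw        # 1,000 kW, i.e. the same 1 MW

    traditional_capex = 7_000_000
    micro_capex = 4_000_000
    print(traditional_capex / (1_000 * 1_000))     # ~$7 per watt, traditional build
    print(micro_capex / (total_micro_kw * 1_000))  # ~$4 per watt, micro build-out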

If all this sounds like too complex a system to be feasible, keep in mind that in its present form, the public cloud computing model may not be sustainable long-term. That model would have subscribers continue to push applications, data streams, and content streams through pipes linked with hyperscale complexes whose service areas encompass entire states, provinces, and countries — a system that wireless voice providers would never have dared to attempt.

Potential pitfalls

Nevertheless, a computing world entirely remade in the edge computing model is about as fantastic — and as remote — as a transportation world that's weaned itself entirely from petroleum fuels. In the near term, the edge computing model faces some significant obstacles, several of which will not be altogether easy to overcome:

  • Remote availability of three-phase power. Servers capable of providing cloud-like remote services to commercial customers, regardless of where they're located, need high-power processors and in-memory data to enable multi-tenancy. Probably without exception, they'll require access to high-voltage, three-phase electricity. That's extremely difficult, if not impossible, to attain in relatively remote, rural locations. (Ordinary 120V AC current is single-phase.) Telco base stations have never required this level of power up to now, and if they're never intended to be leveraged for multi-tenant commercial use, then they may never need three-phase power anyway. The only reason to retrofit the power system would be if edge computing is viable. But for widely distributed Internet-of-Things applications such as Mississippi's trials of remote heart monitors, a lack of sufficient power infrastructure could end up once again dividing the "haves" from the "have-nots."
  • Carving servers into protected virtual slices. For the 5G transition to be affordable, telcos must reap additional revenue from edge computing. What made the idea of tying edge computing's evolution to 5G attractive was the notion that commercial and operational functions could co-exist on the same servers — a concept introduced by Central Office Re-architected as a Datacenter (CORD) (originally "Re-imagined"), one form of which is now considered a key facilitator of 5G Wireless. Trouble is, it may not even be legal for operations fundamental to the telecommunications network to co-reside with customer functions on the same systems — the answer depends on whether lawmakers are capable of fathoming the new definition of "systems." Until that day (if it ever comes), 3GPP (the industry organization governing 5G standards) has adopted a concept called network slicing, which is a way to carve telco network servers into virtual servers at a very low level, with much greater separation than in a typical virtualization environment from, say, VMware. Conceivably, a customer-facing network slice could be deployed at the telco network's edge, serving a limited number of customers. However, some larger enterprises would rather take charge of their own network slices, even if that means deploying them in their own facilities — moving the edge onto their premises — than invest in a new system whose value proposition is based largely on hope.
  • Telcos defending their home territories from local breakouts. If the 5G radio access network (RAN), and the fiber optic cables linked to it, are to be leveraged for commercial customer services, some type of gateway has to be in place to siphon off private customer traffic from telco traffic. The architecture for such a gateway already exists [PDF], and has been formally adopted by 3GPP. It's called local breakout, and it's also part of the ETSI standards body's official declaration of multi-access edge computing (MEC). So technically, this problem has been solved. Trouble is, certain telcos may have an interest in preventing the diversion of customer traffic away from the course it would normally take: into their own data centers. Today's Internet network topology has three tiers: Tier-1 service providers peer only with one another, whereas Tier-2 ISPs are typically customer-facing. The third tier allows for smaller, regional ISPs on a more local level. Edge computing on a global scale could become the catalyst for public cloud-style services, offered by ISPs on a local level, perhaps through a kind of "chain store." But that's assuming the telcos, who manage Tier-2, are willing to just let incoming network traffic be broken out into a third tier, enabling competition in a market they could very easily just claim for themselves.

If location, location, location matters again to the enterprise, then the entire enterprise computing market can be turned on its ear. The hyperscale, centralized, power-hungry nature of cloud data centers may end up working against them, as smaller, more nimble, more cost-effective operating models spring up — like dandelions, if all goes as planned — in more broadly distributed locations.

"I believe the interest in edge deployments," remarked Kurt Marko, principal of technology analysis firm Marko Insights, in a note to ZDNet, "is primarily driven by the need to process massive amounts of data generated by 'smart' devices, sensors, and users — particularly mobile/wireless users. Indeed, the data rates and throughput of 5G networks, along with the escalating data usage of customers, will require mobile base stations to become mini data centers."

What does "edge computing" mean?

In any telecommunications network, the edge is the furthest reach of its facilities and services towards its customers. In the context of edge computing, the edge is the location on the planet where servers may deliver functionality to customers most expediently.

How CDNs blazed the trail

Diagram of the relationship between data centers and Internet-of-Things devices, as depicted by the Industrial Internet Consortium.

With respect to the Internet, computing or processing is conducted by servers — components usually represented by a shape (for example, a cloud) near the center or focal point of a network diagram. Data is collected from devices at the edges of this diagram, and pulled toward the center for processing. Processed data, like oil from a refinery, is pumped back out toward the edge for delivery. CDNs expedite this process by acting as "filling stations" for users in their vicinity. The typical product lifecycle for network services involves this "round-trip" process, where data is effectively mined, shipped, refined, and shipped again. And, as in any process that involves logistics, transport takes time.
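The "filling station" role can be reduced to a simple cache-aside pattern. The sketch below is a toy Python illustration of that idea, not any CDN vendor's actual implementation: the first request for an object pays the full round trip to the origin, while later requests from nearby users are served from the edge cache.

    # Toy cache-aside sketch of the CDN "filling station" idea; not a real CDN API.
    edge_cache: dict[str, bytes] = {}

    def fetch_from_origin(url: str) -> bytes:
        # Stand-in for the long haul back to the origin server (the "refinery").
        return f"content of {url}".encode()

    def serve(url: str) -> bytes:
        if url in edge_cache:               # cache hit: short trip, low latency
            return edge_cache[url]
        content = fetch_from_origin(url)    # cache miss: full round trip
        edge_cache[url] = content           # "fill the tank" for the next nearby user
        return content

    serve("https://example.com/video.mp4")  # first request: fetched from the origin
    serve("https://example.com/video.mp4")  # second request: served from the edge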

An accurate figurative placement of CDN servers in the data delivery process (diagram: NTT Communications)

Importantly, whether the CDN always resides in the center of the diagram depends on whose diagram you're looking at. If the CDN provider drew it up, there may be a big "CDN" cloud in the center, with enterprise networks along the edges of one side, and user equipment devices along the other edges. One exception comes from NTT, whose simplified but more accurate diagram above shows CDN servers injecting themselves between the point of data access and users. From the perspective of the producers of data or content, as opposed to the delivery agents, CDNs reside toward the end of the supply chain — the next-to-last step for data before the user receives it.

Over the last decade, major CDN providers began introducing computing services that reside at the point of delivery. Imagine if a filling station could be its own refinery, and you get the idea. The value proposition for this service depends on CDNs being perceived not at the center, but at the edge. It enables some data to bypass the need for transport, just to be processed and transported back.

The trend toward decentralization

If CDNs hadn't yet proven the effectiveness of edge computing as a service, they at least demonstrated the value of it as a business: Enterprises will pay premiums to have some data processed before it reaches the center, or "core," of the network.

"We've been on a pretty long period of centralization," explained Matt Baker, Dell Technologies' senior vice president for strategy and planning, during a press conference last February. "And as the world looks to deliver increasingly real-time digital experiences through their digital transformation initiatives, the ability to hold on to that highly centralized approach to IT is starting to fracture quite a bit."

Edge computing has been touted as one of the lucrative new markets made feasible by 5G Wireless technology. For the global transition from 4G to 5G to be economically feasible for many telecommunications companies, the new generation must open up new, exploitable revenue channels. 5G requires a vast new network of (ironically) wired, fiber optic connections to supply transmitters and base stations with instantaneous access to digital data (the backhaul). As a result, an opportunity arises for a new class of computing service providers to deploy multiple µDCs adjacent to radio access network (RAN) towers, perhaps next to, or sharing the same building with, telco base stations. These data centers could collectively offer cloud computing services to select customers at rates competitive with, and features comparable to, hyperscale cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Ideally, perhaps after a decade or so of evolution, edge computing would bring fast services to customers as close as their nearest wireless base stations. We'd need massive fiber optic pipes to supply the necessary backhaul, but the revenue from edge computing services could conceivably fund their construction, enabling the buildout to pay for itself.

Service-level objectives

In the final analysis (if, indeed, any analysis has ever been final), the success or failure of data centers at network edges will be determined by their ability to meet service-level objectives (SLO). These are the expectations of customers paying for services, as codified in their service contracts. Engineers have metrics they use to record and analyze the performance of network components. Customers tend to avoid those metrics, choosing instead to favor the observable performance of their applications. If an edge deployment isn't noticeably faster than a hyperscale deployment, then the edge as a concept may die in its infancy.
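As a hypothetical illustration of the distinction between engineers' metrics and what customers actually observe, the sketch below evaluates an SLO stated in the customer's terms, say "95 percent of application responses within 200 milliseconds," against a batch of measured response times. The threshold and the samples are invented for illustration.

    # Hypothetical SLO check; the target and samples are illustrative only.
    import math

    def meets_slo(response_times_ms: list[float], target_ms: float = 200.0,
                  percentile: float = 0.95) -> bool:
        ordered = sorted(response_times_ms)
        rank = math.ceil(percentile * len(ordered))   # nearest-rank percentile
        return ordered[rank - 1] <= target_ms

    samples = [120, 140, 95, 180, 210, 160, 130, 150, 170, 190]
    print(meets_slo(samples))  # False: one slow outlier pushes the 95th percentile to 210 ms

The point of phrasing the check this way is that a single slow response can break the promise the customer actually cares about, even when every internal network counter looks healthy.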

"What do we care about? It's application response time," explained Tom Gillis, VMware's senior vice president for networking and security, during a recent company conference.  "If we can characterize how the application responds, and look at the individual components working to deliver that application response, we can actually start to create that self-healing infrastructure."

The reduction of latency and the improvement of processing speed (with newer servers dedicated to far fewer tasks) should play to the benefit of SLOs. Some have also pointed out how the wide distribution of resources over an area contributes to service redundancy and even business continuity — disruptions which, at least up until the pandemic, were perceived as one- or two-day events, followed by recovery periods.

But there will be balancing factors, the most important of which has to do with maintenance and upkeep. A typical Tier-2 data center facility can be maintained, in emergency circumstances (such as a pandemic), by as few as two people on-site, with support staff off-site. Meanwhile, a µDC is designed to function without being perpetually staffed. Its built-in monitoring functions continually send telemetry to a central hub, which theoretically could be in the public cloud. As long as a µDC is meeting its SLOs, it doesn't have to be personally attended.

Here is where the viability of the edge computing model has yet to be thoroughly tested. With a typical data center provider contract, an SLO is often measured by how quickly the provider's personnel can resolve an outstanding issue. Typically, resolution times can remain low when personnel don't have to reach trouble points by truck. If an edge deployment model is to be competitive with a colocation deployment model, its automated remediation capabilities had better be freakishly good.
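What "freakishly good" automated remediation might look like, in the most schematic terms, is a telemetry-driven loop along the lines of the hypothetical Python sketch below. Every function name and threshold here is invented for illustration; the article does not describe any particular vendor's remediation API.

    # Hypothetical remediation loop for an unstaffed micro data center.
    import time

    LATENCY_SLO_MS = 200.0

    def poll_telemetry(site: str) -> dict:
        # Stand-in for telemetry the site's monitors push to a central hub.
        return {"p95_latency_ms": 185.0, "failed_node": None}

    def restart_node(site: str, node: str) -> None:
        print(f"{site}: restarting {node} remotely")

    def dispatch_technician(site: str) -> None:
        print(f"{site}: automated remediation failed; dispatching a technician by truck")

    def remediation_loop(site: str, interval_s: int = 60) -> None:
        while True:
            reading = poll_telemetry(site)
            if reading["failed_node"]:
                restart_node(site, reading["failed_node"])
            elif reading["p95_latency_ms"] > LATENCY_SLO_MS:
                dispatch_technician(site)   # the last resort the edge model tries to avoid
            time.sleep(interval_s)

The more issues the first two branches can absorb without human hands, the closer the edge model comes to matching a staffed colocation facility's resolution times.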

The tiered network

Data storage providers, cloud-native applications hosts, Internet of Things (IoT) service providers, server manufacturers, real estate investment trusts (REIT), and pre-assembled server enclosure manufacturers are all paving express routes between their customers and what promises, for each of them, to be the edge.

What they're all really looking for is competitive advantage. The idea of an edge offers new hope for premium service — a solid, justifiable reason for certain classes of service to command higher rates than others. If you've read or heard elsewhere that the edge could eventually subsume the whole cloud, you may understand now why this wouldn't actually make much sense. If everything were premium, nothing would be premium.

"Edge computing is apparently going to be the perfect technology solution, and venture capitalists say it's going to be a multi-billion-dollar tech market," remarked Kevin Brown, CTO and senior vice president for innovation for data center service equipment provider, and micro data center chassis manufacturer, Schneider Electric. "Nobody actually knows what it is."

Schneider Electric's Kevin Brown: "Nobody actually knows what it is."

Brown acknowledged that edge computing may trace its history to the pioneering CDNs, such as Akamai. Still, he went on, "you've got all these different layers — HPE has their version, Cisco has theirs. . . We couldn't make sense of any of that. Our view of the edge is really taking a very simplified view. In the future, there's going to be three types of data centers in the world, that you really have to worry about."

The picture Brown drew, during a press event at the company's Massachusetts headquarters in February 2019, is a re-emerging view of a three-tiered Internet, and is shared by a growing number of technology firms. In the traditional two-tiered model, Tier-1 nodes are restricted to peering with other Tier-1 nodes, while Tier-2 nodes handle data distribution on a regional level. Since the Internet's beginning, there has been a designation for Tier-3 — for access at a much more local level. (Contrast this against the cellular Radio Access Network scheme, whose distribution of traffic is single-tiered.)

"The first point that you're connecting into the network, is really what we consider the local edge," explained Brown. Mapped onto today's technology, he went on, you might find one of today's edge computing facilities in any server shoved into a makeshift rack in a wiring closet.

"For our purposes," he went on, "we think that's where the action is."

"The edge, for years, was the Tier-1 carrier hotels like Equinix and CoreSite.  They would basically layer one network connecting to another, and that was considered an edge," explained Wen Temitim, CTO of edge infrastructure services provider StackPath.  "But what we're seeing, with all the different changes in usage based on consumer behavior, and with COVID-19 and working from home, is a new and deeper edge that's becoming more relevant with service providers."

Locating the edge on a map

Edge computing is an effort to bring quality of service (QoS) back into the discussion of data center architecture and services, as enterprises decide not just who will provide their services, but also where.

The "operational technology edge"

Data center equipment maker HPE — a major investor in edge computing — believes that the next giant leap in operations infrastructure will be coordinated and led by staff and contractors who may not have much, if any, personal investment or training in hardware and infrastructure — people who, up to now, have been largely tasked with maintenance, upkeep, and software support. The company calls the purview of this class of personnel operational technology (OT). Unlike those who perceive IT and operations converging in one form or another of "DevOps," HPE perceives three classes of edge computing customers. Not only will each of these classes, in its view, maintain its own edge computing platform, but the geography of these platforms will separate from one another, not converge, as the HPE diagram below depicts.

HPE's diagram of its three edge segments (courtesy HPE)

Here, there are three distinct classes of customers, to each of which HPE has apportioned its own segment of the edge at large. The OT class refers to customers more likely to assign edge computing to managers with less direct experience with IT, mainly because their main products are not information or communications itself. That class is apportioned an "OT edge." When an enterprise has more of a direct investment in information as an industry, or is largely dependent upon information as a component of its business, HPE attributes to it an "IT edge." In-between, for businesses that are geographically dispersed and dependent upon logistics (where the information has a more logistical component) and thus the Internet of Things, HPE assigns an "IoT edge."

Dell's tripartite network

Dell's edge, core, and cloud diagram (courtesy Dell Technologies)

In 2017, Dell Technologies first offered its three-tier topology for the computing market at large, dividing it into "core," "cloud," and "edge." As this slide from an early Dell presentation indicates, this division seemed radically simple, at least at first: Any customer's IT assets could be divided, respectively, into 1) what it owns and maintains with its own staff; 2) what it delegates to a service provider and hires it to maintain; and 3) what it distributes beyond its home facilities into the field, to be maintained by operations professionals (who may or may not be outsourced).

In a November 2018 presentation for the Linux Foundation's Embedded Linux Conference Europe, Dell's CTO for IoT and Edge Computing, Jason Shepherd, made this simple case: with as many networked devices and appliances as are being planned for IoT, it will be technologically impossible to centralize their management, even by enlisting the public cloud.

"My wife and I have three cats," Shepherd told his audience. "We got larger storage capacities on our phones, so we could send cat videos back and forth.

Jason Shepherd of Dell Technologies at Embedded Linux Conference Europe (Linux Foundation video)

"Cat videos explain the need for edge computing," he continued. "If I post one of my videos online, and it starts to get hits, I have to cache it on more servers, way back in the cloud. If it goes viral, then I have to move that content as close to the subscribers that I can get it to. As a telco, or as Netflix or whatever, the closest I can get is at the cloud edge — at the bottom of my cell towers, these key points on the Internet. This is the concept of MEC, Multi-access Edge Computing — bringing content closer to subscribers. Well now, if I have billions of connected cat callers out there, I've completely flipped the paradigm, and instead of things trying to pull down, I've got all these devices trying to push up. That makes you have to push the compute even further down."

The emerging 'edge cloud'

Since the world premiere of Shepherd's scared kitten, Dell's concept of the edge has hardened somewhat, from a nuanced assembly of layers to more of a basic decentralization ethic.

"We see the edge as really being defined not necessarily by a specific place or a specific technology," said Dell's Matt Baker last February.  "Instead, it's a complication to the existing deployment of IT in that, because we are increasingly decentralizing our IT environments, we're finding that we're putting IT infrastructure solutions, software, etc., into increasingly constrained environments.  A data center is a largely unconstrained environment; you build it to the specification that you like, you can cool it adequately, there's plenty of space.  But as we place more and more technology out into the world around us, to facilitate the delivery of these real-time digital experiences, we find ourselves in locations that are challenged in some way."

Campus networks, said Baker, include equipment that tends to be dusty and dirty, aside from having low-bandwidth connectivity.  Telco environments often include very short-depth racks requiring very high-density processor population.  And in the furthest locales on the map, there's a dearth of skilled IT labor, "which puts greater pressure on the ability to manage highly distributed environments in a hands-off, unmanned [manner]."

Nevertheless, it's incumbent upon a growing number of customers to process data closer to the point where it's first assessed or created, he argued.  That places the location of "the edge," circa 2020, at whatever point on the map you'll find data, for lack of a better description, catching fire.

StackPath's Temitim believes that point to be an emerging concept called the edge cloud — effectively a virtual collection of multiple edge deployments in a single platform.  This platform would be marketed at first to multichannel video distributors (MVPDs, usually incumbent cable companies but also some telcos) looking to own their own distribution networks, and cut costs in the long term.  But as an additional revenue source, these providers could then offer public-cloud like services, such as SaaS applications or even virtual server hosting, on behalf of commercial clients.

Such an "edge cloud" market could compete directly against the world's mid-sized Tier-2 and Tier-3 data centers.  Since the operators of those facilities are typically premium customers of their respective regions' telcos, those telcos could perceive the edge cloud as a competitive threat to their own plans for 5G Wireless.  It truly is, as one edge infrastructure vendor put is, a "physical land grab."  And the grabbing has really just begun.
