
Cloudy Edge Or Edgy Cloud?

5 strategies for leveraging edge computing for enterprise applications

Every discussion of edge computing ought to start with a definition, because to date, just as with the early conversations about the cloud, there is no collective agreement on what, exactly, edge computing is. Some people think “the edge” is smartphones, some people think it’s Raspberry Pi devices, and some people insist it’s a server in a manufacturing facility or at the base of a cell tower.


And that’s ok! All of these definitions are accurate, because the edge can mean different things for different use cases: edge computing is important in everything from real-time personalization to the 5G rollout. As captured in this conversation with VentureBeat, enterprises are now determining how they will respond to IoT and AI at the edge.

In fact, of the $500 billion in growth expected for IoT through 2020, McKinsey estimates that about 25 percent will be directly related to edge technology. So instead of focusing on the edge form factor, it’s important to understand the role it plays in your system and define clear objectives and boundaries around it. With that in mind, let’s begin with a definition for the purposes of this piece.

Definition: Edge computing broadly means processing the data close to where the end user is. 

Why is it important to process data close to the end user? Typically, this is driven by latency requirements (sometimes you can’t afford to wait for a round-trip to the cloud) or data volume (where it’s impractical to send all of the data to the cloud from a cost or bandwidth perspective). 
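To make the data-volume side concrete, here is a back-of-envelope sketch (all figures are illustrative assumptions, not measurements) comparing shipping a raw sensor stream to the cloud against filtering it at the edge first:

```python
# Back-of-envelope comparison: ship all raw sensor data to the cloud
# vs. filter it at the edge first. All numbers are illustrative.

SENSORS = 1_000        # sensors on site (assumed)
SAMPLE_BYTES = 200     # bytes per reading (assumed)
HZ = 100               # readings per second per sensor (assumed)

raw_bps = SENSORS * SAMPLE_BYTES * HZ          # bytes/second leaving the site
raw_gb_per_day = raw_bps * 86_400 / 1e9

# Edge filtering: keep only anomalous readings, say 1% of the stream.
EDGE_KEEP_RATIO = 0.01
edge_gb_per_day = raw_gb_per_day * EDGE_KEEP_RATIO

print(f"raw upload:    {raw_gb_per_day:,.0f} GB/day")
print(f"edge-filtered: {edge_gb_per_day:,.2f} GB/day")
```

With these assumed figures, the site would push roughly 1,728 GB a day to the cloud uncompressed; filtering at the edge cuts that to about 17 GB, which is the kind of gap that makes local processing a cost question, not just a latency one.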

To successfully implement edge computing, by our definition, I recommend the following:

Five Strategies for Computing at the Edge: 

  1. Focus on the application use cases
  2. Understand your options
  3. Develop the right data and machine learning strategy
  4. Make explicit decisions about security, privacy, and governance
  5. Be prepared to learn and adapt

1. Focus on application use cases

First and foremost, consider your application use cases: What are the types of experiences you would want to deliver if you weren’t constrained by the size of the data set and by latency? For example, autonomous cars need to be able to respond to driving obstacles as they appear, and they can also deliver a personalized experience as the journey progresses, suggesting routes in line with learned scenery or traffic preferences. Edge computing allows you to leverage data to deliver better customer experiences or optimize your operations — or both! Once you envision your application roadmap, try to understand the latency and data storage and processing capabilities you will need to deliver those experiences. Think: how fast do I need to make the decision, how much data do I need to make it, and how much compute capacity do I need to run those calculations? Once you know what you want to achieve, you can begin to consider your options and determine just how much of the experience wish list you can plausibly check off.
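The latency side of that wish list can be framed as a simple budget: if the cloud round trip plus compute time exceeds the decision deadline, the work has to move to the edge. A minimal sketch, where the deadlines and round-trip times are illustrative assumptions, not measurements:

```python
# Hypothetical latency-budget check: can a decision tolerate a
# cloud round trip, or must it run at the edge? Numbers are assumed.

def placement(deadline_ms: float, cloud_rtt_ms: float, compute_ms: float) -> str:
    """Return 'cloud' if the deadline leaves room for a round trip
    plus compute time, otherwise 'edge'."""
    return "cloud" if cloud_rtt_ms + compute_ms <= deadline_ms else "edge"

# Obstacle response in a vehicle: ~50 ms to act, ~80 ms RTT to a cloud region.
print(placement(deadline_ms=50, cloud_rtt_ms=80, compute_ms=10))     # edge
# Route personalization: a 2-second deadline easily absorbs the round trip.
print(placement(deadline_ms=2000, cloud_rtt_ms=80, compute_ms=300))  # cloud
```

The same application can land on both sides of the line, which is why the budgeting has to happen per use case rather than per product.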

2. Understand your options

Once you understand your application’s success metrics, then you can think about what edge computing options you have and the tradeoffs between them. For example, in a connected factory, your tradeoff may be around building more intelligence into the sensors versus investing in server capacity within the factory for aggregation and processing. Understanding the business models associated with each as well as their business impacts—from capex versus opex to upgrade cycles to flexibility—is really important. OVO, a leading digital payments provider in Indonesia, did this very effectively with the rollout of their SmartCube vending machines, as captured in this video.

3. Develop the right data and machine learning strategy

A lot of digital experiences will rely on machine learning (ML) to provide personalization, but ML is only as good as your data strategy. If your data and deployments are distributed, how do you make sure your models are performing as expected? You must monitor model performance in order to determine effectiveness. Unlike more traditional distributed systems that ran static algorithms, machine learning is highly dependent on data, and accuracy can change over time. Once the model drifts, you will need to retrain it, which requires either getting the data back to the cloud for training or implementing the infrastructure at the edge for distributed training approaches where possible. Fortunately, many more techniques are emerging in this area.
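As a sketch of what such monitoring might look like, the following assumes a simple design: keep a rolling window of recent prediction outcomes on the edge device and flag retraining when accuracy falls below a floor. The class name, window size, and threshold are illustrative, not from any particular library:

```python
from collections import deque

class DriftMonitor:
    """Minimal rolling-accuracy monitor (illustrative sketch)."""

    def __init__(self, window: int = 500, threshold: float = 0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.threshold = threshold

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def needs_retraining(self) -> bool:
        # Wait for a full window before judging; then compare rolling
        # accuracy against the acceptable floor.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.threshold
```

When `needs_retraining()` fires, the device could ship its recent window of data back to the cloud for retraining or kick off a local training job, depending on which side of the trade-off above you have chosen.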

4. Make explicit decisions about security, privacy, and governance

You will have to make specific trade-offs when it comes to security, privacy, and governance. Deciding what data must stay on the device and what goes to the cloud is critical and must be considered carefully. For example, you may know that a user picks up their kids every day at a particular time, but once you upload that data to the cloud, you want to anonymize it so that you don’t compromise the privacy of that behavior. Keeping the data at the edge makes it more secure because it doesn’t transfer across the network to the cloud. Similar conscious decisions must be made about governance. Once you have a distributed architecture where some decisions are made centrally and some locally, you will need to know what decisions were made and why.
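As one illustration of anonymizing before upload, the sketch below replaces the raw user id with a salted one-way hash and coarsens time and location before the event leaves the device. The field names and granularity here are assumptions for the example, not a prescribed schema:

```python
import hashlib

def anonymize(event: dict, salt: str) -> dict:
    """Strip identifying detail from an edge event before upload
    (illustrative sketch; fields and precision are assumptions)."""
    return {
        # One-way pseudonym: the cloud can aggregate, not identify.
        "uid": hashlib.sha256((salt + event["user_id"]).encode()).hexdigest()[:16],
        # Keep only the hour; drop the exact minute of the routine.
        "hour": event["hour"],
        # Truncate coordinates to ~1 km rather than an exact address.
        "lat": round(event["lat"], 2),
        "lon": round(event["lon"], 2),
    }

event = {"user_id": "u-123", "hour": 15, "minute": 47,
         "lat": 37.77493, "lon": -122.41942}
print(anonymize(event, salt="per-deployment-secret"))
```

The raw event, with its exact minute and coordinates, never has to leave the device; only the coarsened record crosses the network.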

5. Be prepared to learn and adapt

Architectures change. Don’t think of it as “set once and forget.” Think of it as an evolution, where you can swap out as needed. Sweeping change like 5G will come, and of course, hardware will always evolve: don’t lock yourself into a single solution. Instead, build the right application architecture, data architecture, and ML functionality, and assume the hardware options will evolve over time. 

Some things in life are unpredictable: the future, the weather, the market. But I can say for sure that the next few years are going to be cloudy with a chance of edge computing. McKinsey predicts that over the next 5 to 7 years, there will be more than 100 edge computing use cases across 11 sectors. Facing both cloud computing and edge computing, your business needs to be able to analyze and react to data simultaneously, in real time: a process called active analytics. I am confident that with these five strategies in hand, you will find the active analytics platform that meets this dual challenge and allows your business to grow and thrive as edge and cloud computing continue to evolve.

Irina Farooq is chief product officer at Kinetica. You can follow her on Twitter @IrinaFarooq. This article was originally published on Forbes.
