What is cloud computing? What is edge computing? How are they related to the Internet of Things (IoT)? I will attempt to address these questions, along with how workloads can be distributed among these resources. Both computing models have gained popularity due to their scalability, affordability, and security; essentially, they offer flexible, elastic, on-demand access to a shared pool of computing resources.

At a high level, the Edge offers the same resources as the Cloud, except that those resources are located much closer to the end device requesting them. Whereas the Cloud relies on farms of servers at regional data centers, which proliferated in the late 1990s, the Edge taps into clusters of servers distributed across metropolitan data centers, which are only now starting to build momentum. In fact, Edge Computing originated in the 1990s, when it was employed for static content delivery, i.e., efficient download of cached images and videos.

Today, however, Edge Computing is setting the stage for dynamic content delivery, which necessitates a scaled build-out to address the computing demands of recent trends in data science (e.g., big data, ML, AI, neural networks). For example, in an online game, user interaction determines what content is delivered to the player. Dynamic content delivery is still possible with Cloud Computing, and it is being done today; however, it comes at a cost in data transmission latency due to the physical distance between the end user and the regional data centers. Ultimately, this latency affects the quality of the gaming experience. Move that workload to the Edge and you get a richer content delivery experience (e.g., many players can play in a high-resolution format on augmented or virtual reality platforms).

The Internet of Things (IoT) is the network of physical devices embedded with the hardware and software components needed to interact with the environment in which they operate. For a device to be part of the IoT, it must also be able to communicate with other devices, whether through wired or wireless technologies. See my other article, “The (I)IoT Tsunami on the Internet.”

Cloud Computing

There are three primary cloud service models that have been prominent since the Cloud started to gain traction:

  • Infrastructure-as-a-Service (IaaS)
  • Platform-as-a-Service (PaaS)
  • Software-as-a-Service (SaaS)

IaaS is the foundational layer through which businesses access virtualized computing resources over the cloud. In this service model, a customer can provision data storage, compute servers, networking hardware, load balancers, firewalls, and other resources, including maintenance and support, for running applications and operating systems.
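
As a concrete sketch, the snippet below provisions a single virtual server using the AWS SDK for Python (boto3); AWS is just one example of an IaaS provider, and the region, machine image ID, and instance type shown are placeholder assumptions.

    # A minimal IaaS sketch: provision one virtual server with boto3.
    # The region, AMI ID, and instance type are placeholder assumptions.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical machine image
        InstanceType="t3.micro",          # small general-purpose server
        MinCount=1,
        MaxCount=1,
    )
    print("Provisioned instance:", instances[0].id)

Tearing the same instance back down when demand drops is what makes this model elastic: the customer pays only for the capacity actually in use.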

PaaS builds on top of IaaS to offer middleware, operating systems, databases, development tools, business analytics, and more to support the complete web-based application development lifecycle (i.e., building, testing, deploying, managing, and updating). Essentially, this is like leasing a computer with all the resources needed to start developing your application over the cloud.
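
To make the PaaS idea concrete, here is a minimal web application sketch in Python using Flask; the platform (not shown) would supply the operating system, runtime, middleware, and deployment tooling around it, and the route and port are illustrative choices.

    # A minimal web app of the kind a PaaS builds, deploys, and runs for you.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/status")
    def status():
        # The platform handles scaling, patching, and routing for this endpoint.
        return jsonify({"service": "demo", "status": "ok"})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)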

SaaS builds on top of PaaS to offer cloud-based web applications ready for end-customer use. Developers control the entire computing stack in the IaaS and PaaS layers, and these underlying resources are invisible to clients, who use the application through a web browser. Such applications run on the cloud and come with a tiered pricing structure, from free basic services to premium pricing for value-added capabilities.

IoT devices can leverage these resources for cloud-based computing. Either the device manufacturer will use these service models to create Application Programming Interfaces (APIs) for their devices, or a third-party developer will. Ultimately, the APIs will be the gateway to interacting with the smart devices deployed in the field. These devices could be programmed to interact with each other based on security and privilege settings, as well as accessibility to their networks (e.g., VPNs).
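
As a hedged illustration, the snippet below reads telemetry from, and sends a command to, a hypothetical smart device through a vendor's REST API; the base URL, device ID, endpoints, and token are illustrative assumptions, not a real service.

    # A sketch of interacting with a deployed IoT device through a vendor API.
    # Endpoint paths, device ID, and token are hypothetical placeholders.
    import requests

    API_BASE = "https://api.example-iot-vendor.com/v1"
    HEADERS = {"Authorization": "Bearer <your-access-token>"}

    # Read the latest telemetry from a smart thermostat in the field.
    resp = requests.get(f"{API_BASE}/devices/thermostat-42/telemetry",
                        headers=HEADERS, timeout=5)
    resp.raise_for_status()
    print(resp.json())

    # Send a command back, subject to security and privilege settings.
    requests.post(f"{API_BASE}/devices/thermostat-42/commands",
                  json={"set_point_c": 21.5}, headers=HEADERS, timeout=5)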

Edge Computing

The aforementioned cloud service models are all applicable to Edge Computing. Since this type of computing is just starting to take off, it remains to be seen how Edge services will evolve as new technologies, like ML and AI, flourish. Ultimately, the goal is to provision compute, data storage, and network resources at the Edge to bring Cloud-like capabilities closer to the end device.

Cloud Computing has a centralized architecture, with its servers located at large regional data centers. Edge Computing, due to its decentralized architecture, has computing resources at smaller data centers distributed throughout metropolitan areas. Edge Computing can generally be divided into three categories:

  • Infrastructure Edge
  • Device Edge
  • Mobile Edge

Infrastructure Edge is the edge computing capability deployed on the operator side of the last-mile network. Compute, data storage, and network resources at this edge can be provisioned with elasticity. Since the servers are in close proximity to the client devices, latency and data transport costs are drastically reduced. Data privacy and security concerns are also addressed, as subscribers can programmatically control what data is sent to the Cloud.

Device Edge is the edge computing capability on the end user side of the last-mile network; typically, it relies on an IoT Gateway to collect and process data from IoT devices. Compute, data storage, and network resources at this edge can be provisioned with elasticity. Since the server (IoT Gateway) is on-premises near the client devices, latency and data transport costs are virtually eliminated. Data privacy and security concerns are also addressed, as subscribers can programmatically control what data is sent to the Infrastructure Edge. The Device Edge may also use compute and data storage capacity from user devices (e.g., smartphones, tablets, laptops) to process Edge Computing workloads; however, this will most likely be rigid (i.e., non-elastic).
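
The sketch below shows what such a gateway loop might look like: readings are processed locally, alerts are raised on-premises, and only a filtered summary is forwarded upstream. The threshold, topic names, and publish helper are illustrative assumptions, not a specific gateway product's API.

    # A Device Edge gateway sketch: act locally, forward only what is needed.
    # Sensor source, threshold, and publish mechanism are hypothetical.
    import statistics
    import time

    ALERT_THRESHOLD_C = 80.0  # assumed local rule

    def read_sensors():
        """Placeholder for collecting readings from attached IoT devices."""
        return [72.4, 73.1, 85.2, 71.9]

    def publish_upstream(topic, payload):
        """Placeholder for sending data to the Infrastructure Edge (e.g., via MQTT)."""
        print(f"-> {topic}: {payload}")

    while True:
        readings = read_sensors()
        # Low-latency decision made on-premises.
        alerts = [r for r in readings if r > ALERT_THRESHOLD_C]
        if alerts:
            publish_upstream("edge/alerts", {"over_limit": alerts})
        # Privacy and bandwidth control: only an aggregate leaves the premises.
        publish_upstream("edge/summary", {"mean_temp_c": statistics.mean(readings)})
        time.sleep(60)

Keeping the raw readings on-premises and forwarding only the aggregate is one way a subscriber can exercise the programmatic control over outbound data described above.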

Mobile Edge is somewhat nebulous, as it combines Infrastructure Edge, Device Edge, and Network Slicing capabilities adjusted for particular use cases, such as in-vehicle entertainment, real-time autonomous vehicle control, or cellular vehicle-to-everything (C-V2X) communication. These applications, due to their mobile nature, require high bandwidth, low latency, and seamless reliability to function properly. Network Slicing uses common physical hardware to run multiple virtual networks for different apps and services.

Cloud or Edge?

As mentioned earlier, scalability, affordability, and security have prompted businesses to adopt cloud services. Scalability allows businesses to provision computing resources, up or down, based on need. This makes the service affordable, as capital is not expended on excess capacity; furthermore, human resources are not tied up with hardware upgrades and maintenance. Hence, businesses can focus on their core competencies.

Generally, data in the cloud is encrypted and secure. But another concern is privacy. Since Edge Computing is closer to the end clients, sensitive data can be programmatically retained and processed at the Edge, thereby reducing the risk of exposing private data on the internet en route to the Cloud. Security and privacy can be further enhanced by establishing an on-premises Edge (i.e., Device Edge) solution.

Not all applications will be practical for the centralized Cloud Computing architecture. Among the factors affecting this decision are requirements for low latency, high-volume data transfers, local data creation and consumption, and regulatory constraints. Two primary application types are worth mentioning here:

  • Edge-Enhanced Applications
  • Edge-Native Applications

Edge-Enhanced Applications are those that perform better, or can offer more functionality, when operated at the Edge. Excess latency will not cause failures in these applications. For example, image transfers or bulk data processing will succeed on either the Edge or the Cloud, but overall processing times will be greater for the latter. Existing Cloud applications can be migrated to Edge data centers, with or without changes, to leverage these benefits.

Edge-Native Applications are those that are practical to operate only at the Edge. Excess latency will cause failures in these applications when they are operated in the Cloud. For example, real-time tasks supporting an autonomous vehicle will result in catastrophic failures if the required data has to travel long distances to reach a Cloud processing point. New or innovative Edge applications can use Edge data centers to filter large volumes of data before it reaches the Cloud, support real-time decision making, and address data sovereignty requirements where raw data must remain in the locality in which it was generated.
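
To make the failure mode concrete, here is a rough latency-budget sketch; the deadline, round-trip times, and compute time are illustrative assumptions, not measurements.

    # Why Edge-Native workloads fail in the Cloud: a hard real-time deadline
    # versus typical round-trip latency. All figures are assumed for illustration.
    DEADLINE_MS = 20       # e.g., a vehicle control-loop budget (assumed)
    EDGE_RTT_MS = 5        # nearby metropolitan data center (assumed)
    CLOUD_RTT_MS = 60      # distant regional data center (assumed)

    def meets_deadline(rtt_ms, compute_ms=10):
        """Return True if network round trip plus compute fits the budget."""
        return rtt_ms + compute_ms <= DEADLINE_MS

    print("Edge:", meets_deadline(EDGE_RTT_MS))    # True  -> task completes in time
    print("Cloud:", meets_deadline(CLOUD_RTT_MS))  # False -> real-time task fails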

Finally, applications can be written to take advantage of some or all of the available computing resources, and this may be a requirement depending on the application. For example, some of the sensor data from a windmill farm could be filtered before it is sent to the Infrastructure Edge for running ML algorithms; the resultant data can then be sent back to the windmill farm's IoT Gateway (at the on-premises Device Edge) for control action, forwarded to the Cloud for historical purposes, and perhaps even consumed at the Infrastructure Edge for AI.
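
The sketch below traces that windmill-farm data flow end to end; every function is a hypothetical stand-in for the corresponding service, and the readings and thresholds are made up for illustration.

    # A sketch of the windmill-farm pipeline: Device Edge -> Infrastructure Edge
    # -> back to the gateway and forward to the Cloud. All names are placeholders.

    def filter_at_gateway(raw_readings):
        """Device Edge: drop uninteresting readings before anything leaves the farm."""
        return [r for r in raw_readings if r["rpm"] > 0]

    def run_ml_at_infrastructure_edge(filtered):
        """Infrastructure Edge: a stand-in for the actual ML model."""
        avg_rpm = sum(r["rpm"] for r in filtered) / len(filtered)
        return {"avg_rpm": avg_rpm,
                "recommended_pitch_deg": 2.0 if avg_rpm > 15 else 0.0}

    def send_control_to_gateway(result):
        """Back to the Device Edge for local control action."""
        print("Gateway applies blade pitch:", result["recommended_pitch_deg"])

    def archive_in_cloud(result):
        """Forward to the Cloud for historical storage and deeper AI."""
        print("Cloud archives:", result)

    raw = [{"rpm": 18.2}, {"rpm": 0.0}, {"rpm": 16.7}]
    result = run_ml_at_infrastructure_edge(filter_at_gateway(raw))
    send_control_to_gateway(result)
    archive_in_cloud(result)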

Conclusion

Cloud technologies have been around for some time and have been used with IoT devices for rendering services, gathering data, and more. Their primary benefits include scalability, affordability, and security. Their elasticity in provisioning on-demand computing resources is a key motivator for widespread adoption.

Cloud Computing has three primary service models: IaaS, PaaS, and SaaS. Among these, SaaS is the most familiar to businesses worldwide and, therefore, also the most mature.

Edge Computing can be divided into three categories: Infrastructure Edge, Device Edge, and Mobile Edge. Among these, Mobile Edge is the most complex to implement.

A decision on whether to use Cloud Computing or Edge Computing depends on the application. For applications where the content delivered to the end client requires low latency between the device and the computing resources serving it, Edge Computing is the best option.