An edge computing layer can run anywhere between the device layer and the cloud layer.
Here are six forms of edge computing that cover the whole spectrum, spanning from devices to the cloud:
The micro edge is the most recent incarnation of the edge computing layer. A microcontroller qualifies as a micro-edge device when it is capable of running a TinyML model. In this scenario, the sensors connected to the microcontroller generate the telemetry stream that a deep learning model consumes for inference. Unlike other scenarios, where the microcontroller merely collects telemetry and ingests it into a separate edge computing layer, the micro edge runs entirely within the context of the microcontroller or microprocessor itself.
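The micro-edge pattern described above — read a sensor window, extract features, run inference on the device itself — can be illustrated with a minimal Python sketch. Everything here is hypothetical: the weights, the bias, and the "vibration anomaly" task stand in for a trained, quantized TinyML model, which in practice would run in C or C++ via a framework such as TensorFlow Lite for Microcontrollers.

```python
import math
import random

# Hypothetical parameters for a tiny binary classifier; a real TinyML
# model would be trained offline, quantized, and flashed to the device.
WEIGHTS = [0.8, -0.5, 1.2]
BIAS = -0.3

def read_sensor_window(n=32):
    """Simulate a window of accelerometer telemetry from an attached sensor."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def extract_features(window):
    """Compress raw telemetry into the few features the tiny model expects."""
    mean = sum(window) / len(window)
    variance = sum((x - mean) ** 2 for x in window) / len(window)
    peak = max(abs(x) for x in window)
    return [mean, variance, peak]

def infer(features):
    """On-device inference: a dot product and a sigmoid, nothing more."""
    z = BIAS + sum(w * f for w, f in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

score = infer(extract_features(read_sensor_window()))
label = "anomaly" if score > 0.5 else "normal"
```

The point of the sketch is the shape of the loop, not the model: on the micro edge, telemetry never leaves the device before a decision is made.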
The mini edge is based on a single-board computer built on either the ARM64 or AMD64 architecture. It is typically powered by an AI accelerator to speed up inference, and it is capable of running a full-blown operating system such as Linux or Microsoft Windows. The mini edge comes with a software stack associated with its AI accelerator. These devices are ideal for protocol translation, data aggregation, and AI inference.
The medium edge deployment model represents a cluster of inexpensive machines running at the edge computing layer. The compute cluster is powered by a Graphics Processing Unit (GPU), a Field-Programmable Gate Array (FPGA), a Vision Processing Unit (VPU), or an Application-Specific Integrated Circuit (ASIC). A cluster manager such as Kubernetes orchestrates the workloads and resources in the cluster.
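As a sketch of how Kubernetes might schedule an accelerated workload on such a cluster, consider the following deployment manifest. The workload name and container image are placeholders; `nvidia.com/gpu` is the extended resource name that the NVIDIA device plugin exposes, which lets the scheduler place pods only on nodes that actually have a GPU.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-inference             # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-inference
  template:
    metadata:
      labels:
        app: edge-inference
    spec:
      containers:
      - name: inference
        image: registry.example.com/edge/inference:latest  # placeholder image
        resources:
          limits:
            nvidia.com/gpu: 1      # GPU advertised by the NVIDIA device plugin
```

Other accelerators follow the same pattern with their own device plugins and resource names.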
The heavy edge is typically a hyperconverged infrastructure (HCI) appliance that runs within an enterprise data center. It comes as an all-in-one hardware and software stack, typically managed by a vendor, and it demands power and network resources that are available only in an environment such as an enterprise data center.
The heavy edge doubles as an IoT gateway, a storage gateway, and an AI training and inference platform. It comes with an array of GPUs or FPGAs designed to manage end-to-end machine learning pipelines, including the training and deployment of models.
Multi-Access Edge Computing (MEC) moves the processing of traffic and services from a centralized cloud to the edge of the network, closer to the customer. With 5G becoming a reality, MEC is emerging as the intermediary layer between the consumers and providers of the public cloud.
With MEC, the edge infrastructure runs within a telecom provider's facility, co-located in a data center or even at a cellular tower. It is delivered as a managed service by either the telecom company or a public cloud provider.
The cloud edge does for dynamic workloads what the CDN did for static content. It distributes components of an application across multiple endpoints to reduce round-trip latency, relying on modern application development paradigms such as containers and microservices to distribute the workload. Static content and stateless components of an application are replicated and cached across the global network.
Cloud edge providers may support AI acceleration as an optional feature. Since the cloud edge is delivered as a managed service, customers don't have to deal with hardware and software maintenance.
The definition of edge computing and the ecosystem are rapidly evolving to meet the demands of enterprise customers.