To succeed in edge compute, telcos will need to decouple access from cloud connectivity
To make the case for telco edge compute, we tend to focus on applications that require very low latency. The emphasis falls on 5G and edge compute's ability to deliver millisecond round-trip times, enabling futuristic, headline-grabbing examples such as self-flying drones or AR-assisted repairs.
However, there is another group of more mundane requirements that edge compute supports: high-throughput data ingest and processing. In the discussion of edge compute, less attention has been given to reducing network load by moving processing from hyperscale cloud to the edge of a telco's network; it is sometimes mentioned only as a potential cost saving or internal efficiency, a marginal side-benefit for the carrier.
Near-term use cases relating to reduced backhaul already exist. For applications that generate large volumes of data to be analysed quickly (e.g. high definition video), processing and abstraction near the data source can streamline ingest and ensure that only limited amounts of relevant information (e.g. footfall stats, a facial recognition match, or a potential road hazard such as a pet crossing a road) are sent over the network.
For example:
- A camera generates raw data at 10 Gbps, which amounts to roughly 4.5 TB of data an hour.
- The actual information required from the camera (for example an incident alert) may only represent a 10 GB file that is generated once a week.
- If the analytics were performed in the core cloud, the network would need to deliver 10 Gbps of end-to-end connectivity continuously. Although theoretically possible with 5G, this has significant implications: many cameras would require correspondingly more backhaul capacity and cloud connectivity, and the core would also need to ingest and process vast amounts of data.
- In practice, the preference would be to move some of the analytics much closer to the camera, thereby dramatically reducing the volume of data sent over the network to the core cloud. This edge compute could sit on the telco network (in which case the large volume of data would still need to be carried over the access network, with only the much smaller data set transferred over the backhaul). A rough sketch of this arithmetic follows below.
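As a back-of-envelope illustration of the volumes involved, the sketch below works through the camera example above. It assumes the camera streams continuously at its full raw rate and produces a single 10 GB output file per week; the duty cycle and output size are illustrative assumptions, not measurements.

```python
# Back-of-envelope data volumes for the camera example (illustrative only).
# Assumes the camera streams continuously at its full raw rate.

RAW_RATE_GBPS = 10                  # raw camera output, gigabits per second
SECONDS_PER_HOUR = 3600
HOURS_PER_WEEK = 24 * 7

# Gigabits -> terabytes: divide by 8 for bytes, then by 1000 for TB (decimal units)
raw_tb_per_hour = RAW_RATE_GBPS * SECONDS_PER_HOUR / 8 / 1000
raw_tb_per_week = raw_tb_per_hour * HOURS_PER_WEEK

filtered_gb_per_week = 10           # e.g. one incident-alert file per week
filtered_tb_per_week = filtered_gb_per_week / 1000

reduction_factor = raw_tb_per_week / filtered_tb_per_week

print(f"Raw feed:      {raw_tb_per_hour:.1f} TB/hour, {raw_tb_per_week:,.0f} TB/week")
print(f"Filtered feed: {filtered_tb_per_week:.2f} TB/week")
print(f"Edge filtering cuts the volume sent upstream by a factor of ~{reduction_factor:,.0f}")
```

On these assumptions, edge filtering shrinks the weekly upstream volume from roughly 756 TB to 10 GB, a reduction of nearly five orders of magnitude.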
Telcos need to see reduced backhaul costs as a key benefit of telco edge computing, alongside low latency and other factors (localised autonomy, resilience and sovereignty). However, this cost saving needs to be shared with the application provider (which could be a third party or another division within the operator). This in turn means separating out access from backhaul connectivity, thereby incentivising application providers to run as much of their applications as possible at the edge rather than in a centralised cloud or off-net.
The graph below shows the cost breakdown (from an application customer's perspective) for a data ingest application, using illustrative numbers. It separates mobile connectivity costs into access and backhaul (though this is not how telcos currently charge for connectivity) and sets out three types of compute (telco edge, other customer edge/off-net and core cloud). Four application architectures are set out, with a rough cost sketch following the list:
A. All data uploaded to central cloud indiscriminately – no processing or filtering of data at/near the data source
B. Processing and filtering of data on the telco edge, but with connectivity charged at full rate (as if end-to-end) despite the full data only being carried over the access leg to the telco edge
C. Processing and filtering of data on (potentially non-telco) customer edge compute infrastructure. Telco sees connectivity demand after this process (priced as end-to-end connectivity, but with less data to move)
D. Processing and filtering of data on the telco edge with connectivity charged according to how far the data travels (decoupled pricing)
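A minimal sketch of these four scenarios is below. The unit prices and data volumes are placeholder assumptions chosen for illustration; they are not the figures behind the graph, but they show how the incentives shift once access and backhaul are priced separately.

```python
# Illustrative cost comparison of scenarios A-D, from the application
# customer's perspective, plus the telco's revenue in each case.
# All unit prices and volumes below are placeholder assumptions.

RAW_TB = 100        # data generated at the source per month
FILTERED_TB = 1     # data remaining after edge processing/filtering

# Assumed connectivity prices per TB carried
PRICE_ACCESS = 10
PRICE_BACKHAUL = 15
PRICE_END_TO_END = PRICE_ACCESS + PRICE_BACKHAUL

# Assumed compute costs per TB processed
TELCO_EDGE = 8
CUSTOMER_EDGE = 20
CORE_CLOUD = 5

# (customer cost, telco revenue) for each architecture
scenarios = {
    # A: all raw data hauled end-to-end and processed in the core cloud
    "A": (PRICE_END_TO_END * RAW_TB + CORE_CLOUD * RAW_TB,
          PRICE_END_TO_END * RAW_TB),
    # B: filtered on the telco edge, but connectivity still billed end-to-end on raw volume
    "B": (PRICE_END_TO_END * RAW_TB + TELCO_EDGE * RAW_TB,
          PRICE_END_TO_END * RAW_TB + TELCO_EDGE * RAW_TB),
    # C: filtered on the customer's own edge; telco only carries (and bills) filtered volume
    "C": (CUSTOMER_EDGE * RAW_TB + PRICE_END_TO_END * FILTERED_TB,
          PRICE_END_TO_END * FILTERED_TB),
    # D: filtered on the telco edge, decoupled pricing: access on raw, backhaul on filtered
    "D": (PRICE_ACCESS * RAW_TB + PRICE_BACKHAUL * FILTERED_TB + TELCO_EDGE * RAW_TB,
          PRICE_ACCESS * RAW_TB + PRICE_BACKHAUL * FILTERED_TB + TELCO_EDGE * RAW_TB),
}

for name, (customer_cost, telco_revenue) in scenarios.items():
    print(f"Scenario {name}: customer cost = {customer_cost:5,}, "
          f"telco revenue = {telco_revenue:5,}")
```

On these assumptions, scenario B is the most expensive option for the customer even though it maximises telco revenue on paper; scenario C undercuts it despite pricier on-premises compute, leaving the telco with almost nothing; and the decoupled pricing of scenario D is cheapest for the customer while still retaining most of the connectivity and compute revenue for the telco.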
Simply charging the application provider for the extra edge computing without offering a reduction in connectivity charges (scenario B) may appear to be a higher-revenue opportunity for the operator (it is), but the application provider will be incentivised to pursue solutions where the compute occurs on the customer edge, potentially leaving even less revenue for the telco (as in scenario C). Though the computing costs for local infrastructure are higher in scenario C, there is far less data to upload to the cloud by the time the application is using (and paying for) telco connectivity.
The ubiquity, flexibility and rapid scalability of telco edge computing will help with its adoption over off-net edge computing, but these advantages also apply to (cheaper) core cloud. Telco edge compute is not the default (proven) option. It is the challenger and needs all the help it can get, including a compelling economic case, to succeed.