A “head-in-the-clouds” strategy for IoT is partial, at best.
By Adrian Hillier
Architects have increasingly powerful tools at their disposal for building edge-centric IoT systems, tools that could deliver better IoT outcomes and lower operating costs. Yet all too often the default assumption is that cloud-centric designs are best. Adrian Hillier argues that architects must embrace the edge to enhance the performance of their IoT systems in a way that is intelligent and scalable.
Probably the most often conjured image of an IoT system is one of wholly cloud-based computing entities extracting valuable business insight from a multitude of unidirectional sensor streams.
This cloud-centric view of an IoT system is attractive for several reasons:
- Cloud-centric intelligence gives full visibility of past and present system behaviours, satisfying our desire to retain maximum flexibility and control. Raw sensor data can be stored for post-analysis and offline modelling. Digital twinning enables the continuous optimisation of a system by digitally emulating the behaviour of its physical assets.
- By concentrating intelligence in the cloud, the IoT devices can remain simple, with very few hard-coded decisions. This means they can be largely off-the-shelf components that are easy to validate, test and deploy, helping to avoid vendor lock-in.
- Since the ownership of edge, cloud and data assets is often fragmented, it is operationally simpler to share data insights in the cloud as a backend integration, rather than doing so at the edge.
Simplicity and control are key motivators behind a cloud-centric approach, but is this always optimal?
Value of data versus the cost of collecting it
‘Data lakes’ can swell past 100 TB at a surprisingly fast rate, with the associated transport, storage and processing costs rising in tandem. But their latent potential to yield actionable business insight does not necessarily follow the same linear trajectory.
Low ratios of “latent value per MB” tend to imply lower operating margins, since somebody, somewhere has usually paid to move those megabytes around. Under these circumstances, data quality is paramount, and edge computing can go some way towards reducing running costs.
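A rough back-of-envelope sketch makes the economics concrete. Every figure below is an assumption chosen for illustration, not drawn from a real deployment; the point is only the crossover between cost per MB and value per MB:

```python
# Back-of-envelope economics (all figures are assumptions for
# illustration): when does the cost of moving and storing a MB
# overtake the insight it is likely to yield?

MB_PER_NODE_PER_DAY = 2.0            # raw backhaul per sensor (assumed)
NODES = 10_000
TRANSPORT_COST_PER_MB = 0.00010      # $/MB over cellular (assumed)
STORAGE_COST_PER_MB_MONTH = 0.00002  # $/MB-month at rest (assumed)
LATENT_VALUE_PER_MB = 0.00008        # $/MB of extractable insight (assumed)

monthly_mb = MB_PER_NODE_PER_DAY * NODES * 30
cost = monthly_mb * (TRANSPORT_COST_PER_MB + STORAGE_COST_PER_MB_MONTH)
value = monthly_mb * LATENT_VALUE_PER_MB

print(f"monthly backhaul: {monthly_mb / 1024:.0f} GB")
print(f"cost ${cost:,.2f} vs latent value ${value:,.2f}")
# Whenever value-per-MB falls below cost-per-MB, every additional raw
# byte erodes the margin -- the opening for edge-side filtering.
```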
Enter the world of machine learning at the edge.
More intelligent edge devices throttle data backhaul and cloud storage by making autonomous decisions in situ. Neural networks can be partitioned across edge and cloud compute systems, so that IoT sensors backhaul higher-order statistical summaries instead of filling ‘data lakes’ with raw numbers.
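A minimal sketch of the idea, assuming a plain statistical summariser rather than any particular vendor’s framework: the node reduces a day’s buffer of readings to a compact payload before transmitting. The window size, alert threshold and fake sensor driver are all illustrative:

```python
import json
import random      # stand-in for a real sensor driver
import statistics

WINDOW = 1440  # one day of minute-resolution samples (assumed)

def read_sensor() -> float:
    return 20.0 + random.gauss(0, 0.5)  # fake temperature reading

samples = [round(read_sensor(), 3) for _ in range(WINDOW)]

# Reduce the raw window to a handful of higher-order statistics.
payload = {
    "mean": round(statistics.fmean(samples), 3),
    "stdev": round(statistics.stdev(samples), 3),
    "min": min(samples),
    "max": max(samples),
    # count of excursions beyond an assumed alert threshold
    "alerts": sum(1 for s in samples if s > 22.0),
}

raw_bytes = len(json.dumps(samples).encode())
summary_bytes = len(json.dumps(payload).encode())
print(f"raw: {raw_bytes} B, summary: {summary_bytes} B "
      f"({raw_bytes // summary_bytes}x reduction)")
```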
Autonomy can be progressively ramped up as an IoT system matures. The analogy with a rookie employee is apposite: in the very early days, supervisors invest heavily in high-bandwidth, hands-on management to train up a new team member, an investment that gradually decays to an optimum as the skill and autonomy of the workforce grows.
Free computing resource at the edge?
Moore’s Law has driven up CPU performance in the lowliest of devices to the point where substantial computing tasks can be offloaded from the cloud to the edge. Leading CPU companies like Arm continue the march by adding advanced machine learning features across their portfolios. Meanwhile, software frameworks from Microsoft and Amazon have vastly simplified the job of building and hosting intelligent IoT applications at the edge.
These low-cost hardware and software platforms allow IoT systems to be optimised through the intelligent re-partitioning of cloud-centric and edge-centric compute subsystems.
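To make the re-partitioning idea concrete, here is a toy placement heuristic. The cost constants and the `run_at_edge` helper are hypothetical, not an API from any of the frameworks mentioned above: an inference runs locally whenever the edge meets the latency budget or beats the network round trip:

```python
from dataclasses import dataclass

@dataclass
class Task:
    sample_bytes: int        # size of the raw input
    edge_latency_ms: float   # time to run the model locally
    deadline_ms: float       # application latency budget

UPLINK_MS_PER_KB = 40.0      # assumed cellular uplink cost
CLOUD_LATENCY_MS = 15.0      # assumed cloud inference time

def run_at_edge(task: Task) -> bool:
    """Toy partitioning rule: keep inference local when it meets the
    deadline or beats shipping the sample to the cloud and back."""
    cloud_total = (task.sample_bytes / 1024 * UPLINK_MS_PER_KB
                   + CLOUD_LATENCY_MS)
    return task.edge_latency_ms <= min(task.deadline_ms, cloud_total)

print(run_at_edge(Task(sample_bytes=20_480, edge_latency_ms=120,
                       deadline_ms=200)))
# cloud path: 20 KB * 40 ms/KB + 15 ms = 815 ms -> run at the edge
```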
Less is more: edge computing for battery-powered devices
The following hypothetical example shows how a large population of simple battery-powered field sensors can benefit from edge computing.
In this scenario, the IoT nodes are configured to backhaul 20 × 100-byte payloads every day. A 10-year battery life is possible for sensors located within 1 km of the gateway, but any nodes sitting outside a 5 km radius will struggle to manage four years on a single charge.
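A back-of-envelope energy model makes these figures plausible. Every constant below is an assumption, chosen only so the model roughly reproduces the scenario’s shape (about 10 years at 1 km, well under four years beyond 5 km); longer links are modelled as costing more energy per uplink:

```python
BATTERY_MWH = 3.6 * 2600    # 2600 mAh cell at 3.6 V, ~9.4 Wh (assumed)
SLEEP_MW = 0.05             # average quiescent draw (assumed)
PAYLOADS_PER_DAY = 20       # 20 x 100-byte uplinks (from the scenario)

def tx_energy_mwh(distance_km: float) -> float:
    """Energy per uplink; longer links need longer airtime (e.g. a
    higher spreading factor), so cost rises with range (assumed model)."""
    return 0.034 * (1.0 + distance_km ** 1.5)

def battery_life_years(distance_km: float) -> float:
    daily_mwh = (SLEEP_MW * 24
                 + PAYLOADS_PER_DAY * tx_energy_mwh(distance_km))
    return BATTERY_MWH / daily_mwh / 365

for d_km in (1, 5, 10):
    print(f"{d_km:>2} km -> {battery_life_years(d_km):4.1f} years")
# prints roughly 10.0, 2.7 and 1.1 years respectively
```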
Obvious design-time remedies might be (i) fitting a bigger battery, or (ii) specifying an in-field servicing regime to replace exhausted batteries.
A more intelligent approach uses cloud-based supervised learning in year 1 to optimise the behaviour of the nodes ahead of the in-field firmware maintenance update planned for year 2.
From year 2 onwards, the IoT nodes are re-configured to operate a federated learning regime, causing them to backhaul less data and consume less power. This tweak extends in-field battery life to 10 years. The firmware is updated once more during year 5, at a small but discernible energy cost.
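A hypothetical sketch of the node-side half of such a federated learning round, using a toy linear model: the device trains on its locally buffered readings and backhauls only a small weight delta for the cloud to aggregate, instead of the raw samples themselves. All names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
global_weights = np.zeros(8)          # model last pushed by the cloud

def local_round(weights: np.ndarray, lr: float = 0.05,
                steps: int = 50) -> np.ndarray:
    """One on-device training round over locally buffered readings."""
    w = weights.copy()
    for _ in range(steps):
        x = rng.normal(size=8)                    # local feature vector
        y = x.sum() * 0.5 + rng.normal(0.0, 0.1)  # local target (toy)
        grad = 2.0 * (w @ x - y) * x              # squared-error gradient
        w -= lr * grad
    return w

updated = local_round(global_weights)
delta = updated - global_weights      # the only thing sent uplink; the
                                      # cloud averages deltas from many
                                      # nodes into the next global model

raw_uplink_bytes = 50 * (8 + 1) * 4   # 50 samples of 8 features + target
fl_uplink_bytes = delta.size * 4      # one 8-float weight delta
print(f"raw samples: {raw_uplink_bytes} B vs "
      f"weight delta: {fl_uplink_bytes} B")
```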
A strong mandate for computing at the edge
Some IoT systems (e.g. autonomous driving) demand a degree of resilience to patchy communications coverage, which can only be satisfied by distributed compute systems. But whatever the business driver, it seems likely that architects will increasingly embrace the edge to enhance the performance of their connected systems. It is even possible that a default preference for cloud-centric computing may carry competitive risks.
A ‘head-in-the-clouds’ strategy for IoT is a partial one, at best.