There’s no doubt that the volume of ‘things’ connected to the internet is increasing dramatically. Cisco says there will be 25 billion things online by 2015; IBM says one trillion. Either way, the proportion of things to humans will increase sharply over the coming years.
This raises a question about the impact of the internet of things/internet of everything on the overall volume of internet traffic – should we get ready for a massive spike in data transmissions over the next couple of years?
Before we get too worried, it’s worth remembering that internet traffic has increased by several orders of magnitude since its early days, mainly thanks to the success of the WWW and the digital content it made available. People have regularly worried about how the network would handle this in the past, but it’s always managed to adapt and evolve. In 1995, for instance, Robert Metcalfe, inventor of Ethernet networking and founder of 3Com, predicted that the Internet: “…will soon go spectacularly supernova and in 1996 catastrophically collapse.”
In other words, rumors of the Internet’s death through data overload have been greatly exaggerated in the past and are likely to be again. Let me explain why.
First, the fact that more things are connected and ‘sensing’ doesn’t mean they will necessarily be sending messages and notifications at all times. On the contrary, it will become increasingly important – and technically possible – to do most of the computation on the physical devices themselves and transmit only a fraction of the data. As the things in the Internet of Things become smarter, they will think more and talk less — “things that think before they speak”. They’ll do this by analyzing the data they collect and making their own decisions to communicate only essential, urgent or relevant information, not simply acting as dumb sensors feeding raw data into the network.
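To make the idea concrete, here’s a minimal sketch of that device-side decision logic. Everything in it – the threshold, the change tolerance, the readings – is an illustrative assumption, not a real device protocol: a sensor transmits only when a reading is urgent (above an alert threshold) or has changed meaningfully since the last transmission.

```python
THRESHOLD = 30.0  # hypothetical alert level (e.g. temperature in °C)

def should_transmit(reading, last_sent, min_delta=0.5):
    """Send only when the value is urgent or has changed meaningfully."""
    if reading >= THRESHOLD:   # urgent: always report
        return True
    if last_sent is None:      # first reading: establish a baseline
        return True
    return abs(reading - last_sent) >= min_delta  # relevant change only

# Simulate a sensor loop: most readings are filtered out on the device.
last_sent = None
sent = 0
readings = [21.0, 21.1, 21.2, 22.0, 22.1, 31.5, 21.9]
for r in readings:
    if should_transmit(r, last_sent):
        last_sent = r
        sent += 1

print(f"transmitted {sent} of {len(readings)} readings")  # 4 of 7
```

Even in this toy loop, nearly half the readings never leave the device – and with tighter tolerances or steadier signals, the ratio of data sensed to data sent grows far larger.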
Second, the kind of data sent by most devices is pretty simple: a few characters of text data, or an image every few minutes. This represents less than the average amount of data transmitted by a single Web request. At that rate, the data sent by a device over its whole lifetime won’t reach the volume of data transmitted by the average Internet user in an hour. And given that watching data-rich video represents the lion’s share of Internet usage today, even with 1000 times more devices connected to the net, the data they transmit will be barely noticeable compared with two billion Web users streaming millions of hours of video in aggregate every single day.
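The back-of-envelope arithmetic behind that claim is easy to check. All the figures below are illustrative assumptions (a 100-byte reading every five minutes, a ten-year device lifetime, a ~2 Mbit/s video stream), not measurements:

```python
# A chatty sensor: one short text reading every five minutes, for ten years.
DEVICE_MESSAGE_BYTES = 100
MESSAGES_PER_HOUR = 12
DEVICE_LIFETIME_YEARS = 10

device_lifetime_bytes = (DEVICE_MESSAGE_BYTES * MESSAGES_PER_HOUR
                         * 24 * 365 * DEVICE_LIFETIME_YEARS)

# One person watching a ~2 Mbit/s video stream for a single hour.
VIDEO_BITRATE_BPS = 2_000_000
user_hour_bytes = VIDEO_BITRATE_BPS / 8 * 3600

print(f"device, whole lifetime: {device_lifetime_bytes / 1e6:.0f} MB")  # ~105 MB
print(f"video viewer, one hour: {user_hour_bytes / 1e6:.0f} MB")        # ~900 MB
```

Under these assumptions the device’s entire lifetime of chatter amounts to roughly a tenth of what one viewer streams in a single hour.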
Finally, progress in mobile communications technology (increasing bandwidth, smaller and more efficient chips), along with progress in hardware and software like parallel computing (multi-cores, data centers, etc.) and Software-as-a-Service, will make it more efficient to outsource a lot of computation to the Cloud. Therefore, data from devices could be sent directly to a server farm to be processed and filtered before being pushed over the Web.
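The server-farm step can be sketched just as simply. This is a hypothetical illustration of the “process and filter before pushing” idea, not any particular cloud API: a raw batch of device readings is collapsed into one compact summary record, and only that record goes on to the Web.

```python
from statistics import mean

def summarize(batch):
    """Collapse a raw batch of device readings into one compact record."""
    return {
        "count": len(batch),
        "mean": mean(batch),
        "min": min(batch),
        "max": max(batch),
    }

# Five minutes of raw readings from one device arrive at the server farm...
raw = [21.0, 21.1, 22.0, 21.8, 21.5]

# ...and a single small summary is what actually gets pushed over the Web.
record = summarize(raw)
print(record)
```

However many readings the batch contains, what leaves the server farm stays the same size – the filtering cost is paid once, centrally, where compute is cheap.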
And the volume of traffic and data is not going to be an issue in terms of bandwidth costs. The trend for the past 30 years on networks has been a consistent decline in cost per byte transferred and stored — there’s no reason this won’t continue to be the case.
The real issue in the coming years will be extracting meaningful information nuggets from the massive quantity of raw data generated by the Internet of Things. This will only be a problem if we don’t apply tools to process and interpret this data where it makes most sense. Companies should be investing in analytics software, data visualization and similar tools – which are now very affordable for SMEs and not just the province of big businesses – so that the enormous amounts of information that can now be gathered become not overwhelming but an opportunity to do business in an even smarter way.