
Commentators are always trying to pick what’s next – the next wave, the next hype cycle – and with digital technology moving so quickly and disruption so constant, that is getting harder and harder to do. I get sick of the technical dross and the public / private / hybrid cloud palaver commentators love, so I was thrilled to stumble on this article from IDG, Predicting the End of Cloud Computing:

Here’s a prediction you don’t hear very often: The cloud computing market, as we know it, will be obsolete in a matter of years.

When we started OptimalBI seven-odd years ago we made the decision not to own any hardware. It was a no-brainer for us: we had no infrastructure to maintain, it resolved most of our DR (disaster recovery) and BCP (business continuity plan) goals, it made us massively more flexible as we scaled, grew, pivoted and changed, and it was significantly cheaper!

“Cloud” was still said with inverted commas back then; now even governments – the most conservative of organisations – have cloud adoption strategies. By the way, if you are still in the “what is cloud computing” camp, this article provides a pretty good overview.

What’s next if cloud computing becomes obsolete?

In a nutshell, cloud computing will change because of:

  • Decentralised computing – think the Internet of Things on steroids – with billions of micro-sized computing devices everywhere, all collecting and processing data
  • Lots and lots more data – everything we interact with in our lives will collect and use data

The article from IDG provides a great summation of an Andreessen Horowitz podcast from Peter Levine – well worth listening to (link below). His assessment seems logical: in an increasingly connected world, decisions need to be made at the source in real time, removing the latency created by shipping data packets and processing back to a centralised, cloud-based backend.

The End of Cloud Computing

Take a self-driving car as an example. It needs to be able to identify a stop sign or a pedestrian and act on that information instantaneously. It can’t wait for a network connection to the cloud to tell it what to do.

In concluding that very logical argument he does, however, confirm there will still be a need for centralised processing, where prediction, artificial intelligence, machine learning and other algorithms can be applied across the broader dataset – making cloud computing not obsolete, just differently utilised.
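To put rough numbers on the latency argument, here is a back-of-the-envelope sketch. The vehicle speed and processing times below are my own illustrative assumptions, not figures from the podcast:

```python
# Back-of-the-envelope: how far does a car travel while it waits on the cloud?
# All figures are illustrative assumptions, not measurements.

speed_kmh = 100                     # assumed vehicle speed
cloud_round_trip_s = 0.100          # assumed 100 ms round trip to a cloud backend
onboard_inference_s = 0.010         # assumed 10 ms for on-board (edge) processing

speed_ms = speed_kmh * 1000 / 3600  # km/h -> metres per second (~27.8 m/s)

print(f"Waiting on the cloud: {speed_ms * cloud_round_trip_s:.1f} m travelled")
print(f"Processing on board:  {speed_ms * onboard_inference_s:.1f} m travelled")
```

At those assumed numbers the car covers nearly three metres before a cloud response even arrives – the gap Levine’s argument turns on.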

Another way of describing the next evolution of cloud computing is the rise of “Micro data centers deployed near workloads, serving as repositories for high-demand content and providing low latency for content and IoT data”, as described on Data Center Frontier.

Gartner (the commentators of commentators) supports this perspective and goes on to talk about the data management “overload” we have yet to tackle as a result:

“With the highly distributed architectures required for most IoT solutions (many things, many places where data is generated, many platforms on which data is processed, and many consumption points to which data must be delivered), the historical approach to centralized collection of data is under pressure. Organizations must support a more distributed data architecture, because IoT solutions are inherently distributed.”

What does lots and lots of data really mean?

Other people have done some great calculations quantifying the impact the Internet of Things (IoT) might have. Here are some of my favourites, with links to the authors:

Today the average household creates enough data to fill 65 iPhones (32 GB) per year. By 2020 this will grow to 318 iPhones.

The Internet of Things will generate a staggering 400 zettabytes (ZB) of data a year by 2018, according to a report from Cisco. A zettabyte is a trillion gigabytes.

Put another way, approximately 4,800 devices are being connected to the network every minute. Ten years from now, the figure will mushroom to 152,000 a minute.

Even the most conservative prediction – Gartner’s 20.8 billion connected things by 2020 – is predicated on a steady 30% annual growth. Cisco’s oft-reported 50 billion connected things is dependent on linking up “tires, roads, cars, supermarket shelves, and yes, even cattle” by 2020, according to the company’s blog. No one can know if either of these things will happen.

That last point is worth highlighting: in researching this blog I found many, many articles full of vastly differing predictions – to be honest, I almost wrote a blog about how many wildly contradicting predictions can be found. Few commentators quantified what scope they included (connected-device-wise) in their IoT device numbers or the resulting data volumes. Cisco listing tires, roads, cars, supermarket shelves and even cattle might help put the whole gamut into perspective – everything will have a chip or processor, all of them collecting data and many of them processing and / or issuing operational instructions.
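Because these figures come from different sources and are measured in different units, a quick arithmetic sanity check helps line them up. This is just a sketch using the numbers quoted above; the unit conversions and the back-calculated Gartner baseline are my own:

```python
# Sanity-checking the quoted predictions side by side, using only the
# numbers as stated above (unit conversions are my own).

IPHONE_GB = 32

# Household data: 65 iPhones per year today, 318 by 2020
print(f"Household: ~{65 * IPHONE_GB / 1000:.1f} TB/year today, "
      f"~{318 * IPHONE_GB / 1000:.1f} TB/year by 2020")

# Cisco: 400 ZB per year, where a zettabyte is a trillion gigabytes
print(f"Cisco IoT data: {400 * 1e12:,.0f} GB per year")

# Connection rates: 4,800 per minute now versus 152,000 per minute in ten years
print(f"Connection rate growth: ~{152_000 / 4_800:.0f}x over ten years")

# Gartner: 20.8 billion things by 2020 at a steady 30% annual growth
# implies a 2016 starting point of roughly:
print(f"Implied 2016 base: ~{20.8e9 / 1.30**4 / 1e9:.1f} billion things")
```

Roughly 2 TB per household per year growing to 10 TB, and an implied base of around 7 billion connected things in 2016 – at least the orders of magnitude hang together, even if the headline numbers disagree.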

Whatever the numbers, we are well on the way there. I type this wearing my Fitbit, which shows me text messages, reminds me about meetings and tells me to get up and move around; I am listening to a stereo in another room streaming music via my phone’s Spotify app; and I just read about a fridge recommended to me with a built-in touch screen – all of which involve cloud-deployed software services. Imagine if this was all open data we could consume!

The last word today is brought to us by the What and the Why, with their prediction from 2015! Enjoy, Vic.

Victoria MacLennan is a reformed techo from the data and information management world who now focuses on creating jobs and opportunities. She is passionate about Open Data, Data Privacy and Governance, so will blog on those topics occasionally. Want to read more? Try ‘Open data maturity model’ or more from Victoria.

We run regular Defining Data Requirements courses to bridge the technical-business divide when gathering data requirements.