I am just back from a holiday during which I had a very interesting conversation about cloud computing.
I was chatting with a man who develops software for government election tracking, and I got curious about the peak loads his systems need to handle. He explained that his system has to be set up, tested, and then run at peak load for just one day, after which demand tapers to almost nothing within a few days. When I asked what it cost to build infrastructure for that, he coolly replied that it was about $3 million of hardware, upgraded every couple of years to run faster.
The question I had to ask was, “Have you considered using a cloud server to handle the peak load traffic and reduce the need for your own infrastructure?”
The answer was a very quick yes. And did it work well? Yes again.
So what was the cost of the cloud solution? The answer shocked me so thoroughly that I don’t even recall the exact figure, but it was less than $1000. He then apologised and corrected himself: it was actually cheaper than that, because the $1000 included the extra servers used to generate the simulated traffic load.
This left me wondering why every business that occasionally processes large amounts of data is not rapidly adopting cloud solutions. With a $3 million capital cost removed and the peak-load solution coming in at under $1000 (a difference of more than three orders of magnitude), you can afford to design plenty of fail-over by spreading the work across more than one hosted server offering.
I believe that in 2012 we will see a lot of idle servers made redundant, which has to be a good thing: it reduces the cost of building them, cuts their power consumption, and avoids the IT waste of disposing of them at the end of their useful life.
We are truly entering an age of high performance, high availability, high productivity, and high efficiency in IT. What are you doing to make this work for your business?