The media is abuzz with cloud computing in all its forms and permutations, and it has become apparent that, to some degree, every one of us will sooner or later face the decision of which cloud services to use and when to adopt this promising architecture.
If everyone promises cheaper services, more uptime and, most importantly, a reduction in infrastructure costs when using cloud computing, what should the CIO in your company be looking at in preparation for this latest wonder of our IT world?
The answer lies in bottlenecks and redundancy. If I were to build a new house and be completely dependent on rainwater for all my requirements, the first thing I would do is make sure that there are no bottlenecks in the transfer of water from the catchment area into some form of reservoir. From there, I would have to establish a redundant system of new, large-diameter water pipes to those areas of my household that need access to the water. In other words: fix up your gutters and increase the size of your water pipes. Remember, I cannot run my household without water, nor do I have my own borehole anymore (I sold it to buy the new water pipes). So yes, tapping resources from an external provider is definitely more cost-effective if you are building a new house; converting an existing household's water supply to accommodate this new network may require some noticeable investment.
Cloud computing is no different. Companies surrender their own boreholes (client-based software licenses and services) in order to tap a resource feed from within the 'cloud'. As such, the days of running large, power-hungry servers in an expensively cooled, air-conditioned room may seem to be numbered. But, as with boreholes, you move the weakness from your own infrastructure to the service delivery mechanism: in this instance, your network supply.
Setting up your company with quality network infrastructure, from your local area networks to your switching systems, domain servers and routers, is non-negotiable before committing to cloud services. Just as you would tap water from a catchment area via high-quality, high-capacity gutters, you have to concentrate on redundancy and distribution.
At the moment, a number of public cloud solutions do not allow for online-offline redundancy. In other words, with the exception of a few experimental frameworks, such as Google Gears, if you run mission-critical applications on a cloud service and the connection to the cloud goes offline, you would need to resume on a local instance of that service until the network has been reconnected. And even if this were possible, would it be feasible?
Why go to the cloud in the first place if you have to fork out some serious computing power to run the service locally?
So, to summarise: initially, one should not see the use of public cloud services as a money saver. Spend those infrastructure savings (servers, software licenses, electricity, etc.) on upgrading your networking system.
This should span everything from your Internet feed right through to the network adapters on your PCs.
One final point on when to adopt cloud computing: until your infrastructure has proven itself, limit usage to services that you can do without for short periods of time. I would be happy to run my mail server in the cloud, but mission-critical services, which may include accounting, CRM, ERP and the like, will have to wait until I am satisfied that my service provider can deliver on his promises.
© Technews Publishing (Pty) Ltd. | All Rights Reserved.