Although light users have a much smaller cloud footprint than heavy users, the vast majority of light users still struggle to use their cloud efficiently.
One might expect that a small footprint should be easier to control and contain. Others will argue that heavy users can be more efficient because they manage their footprint more professionally. Size may indeed matter: a larger footprint can require more automation and tooling, or justify larger dedicated operations teams.
Let's start with a basic scenario: a sudden peak in demand for an application service as the number of client requests increases. This event has a direct and immediate impact on the load placed on the web servers that host the service. In the traditional world, the number of servers is fixed, so an overload degrades application performance and the service may slow down or even fail entirely. The IT team would then scramble to restore the environment and bring the service back up as soon as possible; the immediate impact of such an event on the business can be devastating. Starting with this simple understanding, we can move into the world of cloud computing and resource consumption, while noting the key differences between the traditional data center and today's cloud technologies.
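The contrast between a fixed server pool and an elastic one can be sketched as a simple scaling rule: given the current request rate, compute how many servers are needed and resize the pool accordingly. The function name, capacity figure, and thresholds below are illustrative assumptions, not any particular provider's API:

```python
def scale_decision(requests_per_sec,
                   capacity_per_server=100,
                   min_servers=2, max_servers=20):
    """Return the server count needed to absorb the current load.

    In a traditional data center the pool size is fixed, so a demand
    spike simply overloads it; in the cloud the pool can be resized.
    All parameters here are illustrative assumptions.
    """
    # Servers needed to keep each one at or below its capacity
    # (ceiling division, so 850 req/s at 100 req/s each needs 9 servers).
    needed = -(-requests_per_sec // capacity_per_server)
    # Clamp to an allowed range so we never scale to zero or run away.
    return max(min_servers, min(needed, max_servers))

# A sudden peak in client requests triggers a scale-out decision:
print(scale_decision(850))   # 9 servers during the peak
print(scale_decision(150))   # 2 servers once demand subsides
```

In practice a cloud autoscaler applies a rule of this shape continuously, which is exactly what the fixed-capacity traditional data center cannot do.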
Traditionally, delivering high availability often meant replicating everything. Today, with the option of going to the cloud, we can say that providing two of everything is costly. High availability should instead be planned and achieved at several different levels: the software, the data center, and geographic redundancy.
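The geographic-redundancy level can be illustrated with a minimal failover sketch: requests are routed to the first healthy region in a priority list, rather than duplicating the entire stack everywhere. The region names and health flags are hypothetical, assumed only for illustration:

```python
def route_request(regions):
    """Return the first healthy region from a priority-ordered list.

    Each region is a dict with hypothetical 'name' and 'healthy' keys;
    a real deployment would use health checks from a load balancer.
    """
    for region in regions:
        if region["healthy"]:
            return region["name"]
    raise RuntimeError("no healthy region available")

# Primary region is down, so traffic fails over to the secondary:
regions = [
    {"name": "eu-west", "healthy": False},
    {"name": "us-east", "healthy": True},
]
print(route_request(regions))  # us-east
```

The point is that availability is bought with planning at each level, software retries, in-data-center redundancy, and cross-region failover, rather than by blindly doubling every component.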