Traditionally, capacity planning essentially meant buying new servers for whichever applications could secure capital investment funding. With a move to the cloud, the assumption is that whatever resources are needed will be available on demand. To my mind, this leads to less analysis of the likely demand per application, and ultimately to a loss of insight into potential total demand. I could easily see the number of provisioned servers skyrocketing as the number of applications grows, which seems pretty much inevitable given that the cost per application will be lower and it is relatively easy to obtain resources for each one (see the back-of-the-envelope sketch below). In light of this, how can capacity planning keep pace with reality and retain enough accuracy and predictability to avoid inefficient use and high resource prices being charged back to application groups?
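
To make the concern concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (application count, growth rate, instances per app, utilization) are hypothetical assumptions chosen only to illustrate how quickly aggregate provisioned capacity can diverge from what consolidated demand would actually require:

```python
# Hypothetical projection of total provisioned instances as the application
# portfolio grows, assuming every new app can cheaply request its own resources.
# All figures are illustrative assumptions, not measurements.

apps_today = 40            # current number of applications (assumed)
app_growth_rate = 0.25     # portfolio growth per year (assumed)
avg_instances_per_app = 6  # instances each app typically requests (assumed)
avg_utilization = 0.30     # average utilization of those instances (assumed)
target_utilization = 0.70  # utilization a consolidated plan might aim for (assumed)

for year in range(6):
    apps = apps_today * (1 + app_growth_rate) ** year
    provisioned = apps * avg_instances_per_app
    # Instances that would suffice if the same demand ran at the target utilization
    needed = provisioned * avg_utilization / target_utilization
    print(f"year {year}: ~{provisioned:.0f} instances provisioned, "
          f"~{needed:.0f} would suffice at {target_utilization:.0%} utilization")
```

Under those assumed numbers, the provisioned count roughly triples in five years while less than half of it would be needed at the target utilization, which is exactly the kind of inefficiency and cost exposure I am worried about.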