Many moons ago, computers were large, expensive beasts that required much care and attention. Since they were expensive, only large organizations could purchase (or lease) them, and those large organizations monitored their use. The companies and government agencies wanted to ensure that they were spending the right amount of money. A computer system had to provide the right amount of computation and storage; if a system had excess capacity, you were spending too much.
Some time later (moons ago, but not so many moons) computers became relatively cheap. Personal computers were smaller, easier to use, and much less expensive. Most significantly, no one monitored their utilization. While some PCs were more powerful than others, there were no measurements of their actual use. It was common to use a personal computer for only eight hours a day. (More horrifying to the mainframe efficiency monitors, some people left their PCs powered on overnight, when they were performing no work at all.)
Cloud technologies let us worry about utilization factors again, and we do monitor them. This is a big change from the PC era, when we cared little for utilization rates.
Perhaps we monitor cloud technologies (virtual servers and such) because they are metered; we pay for every hour that they are active.
If we start worrying about utilization rates for cloud resources, I suspect that we will soon bring back another habit from the mainframe era: chargebacks. For those who do not remember them, chargebacks are mechanisms to charge end-user departments for computing resources. Banks, for example, would have a single mainframe used by different departments. With chargebacks, the bank's IT group allocates the expenses of the mainframe, its software, storage, and network usage to the user departments.
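The allocation described above can be sketched as a simple proportional split: divide the shared monthly bill among departments according to their metered usage. This is a minimal, hypothetical illustration; the department names and dollar figures are invented, and real chargeback systems weigh many more factors (storage, network, software licenses).

```python
# Hypothetical sketch of a chargeback allocation: split a shared
# monthly bill among departments in proportion to metered usage.
# All names and figures below are invented for illustration.

def allocate_chargebacks(total_cost, usage_by_dept):
    """Allocate total_cost proportionally to each department's metered usage."""
    total_usage = sum(usage_by_dept.values())
    return {
        dept: round(total_cost * usage / total_usage, 2)
        for dept, usage in usage_by_dept.items()
    }

# Example: a $90,000 monthly mainframe bill split by CPU-hours consumed.
monthly_bill = 90_000.00
cpu_hours = {"loans": 400, "deposits": 300, "marketing": 200}
charges = allocate_chargebacks(monthly_bill, cpu_hours)
print(charges)  # {'loans': 40000.0, 'deposits': 30000.0, 'marketing': 20000.0}
```

The same arithmetic works whether the meter counts CPU-hours on a mainframe or instance-hours in the cloud, which is exactly why metered cloud billing makes chargebacks tempting again.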
We did not have chargebacks with PCs, or with servers. It was a blissful era in which computing power was "too cheap to meter". (Or perhaps too difficult to meter.)
With cloud technologies, we may just see the return of chargebacks. We have the ability, and we probably will do it. Too many organizations will see it as a way of allocating costs to the true users.
I'm not sure that this is a good thing. Organizational "clients" of the IT group should worry about expenses, but chargebacks provide an incomplete picture of expenses. They are good at reporting the expenses incurred, but they deter cooperation across departments. (This builds silos within the organization, since I as a department manager do not want people from other departments using resources "on my dime".)
Chargebacks also force changes to project governance. New projects are viewed not only in light of their development costs, but also in light of their chargeback costs (which are typically the monthly operations costs). If these monthly costs are known, then this analysis is helpful. But if these costs are not known and are merely estimated, political games can erupt between the department managers and the cost estimators.
I don't claim that chargebacks are totally evil. But chargebacks are not totally good, either. Like any tool, they can help or harm, depending on their use.