The original (and so far only) "killer app" was the spreadsheet. The specific spreadsheet was VisiCalc (or Lotus 1-2-3, depending on who you ask) and it was the compelling reason to get a personal computer.
We may see a killer app for AI, and from a completely unexpected direction: performance reviews.
Employee performance reviews, in large companies, often work as follows: each employee is rated on a number of items, frequently from 1 to 5 and sometimes as "meets expectations" or "needs improvement". Items range from meeting budgets and delivery dates to soft skills such as communication and leadership.
HR works to ensure that performance reviews are administered fairly, which means as consistently as possible, which often means "one size fits all". Everyone in the organization, from the entry-level developer to the vice president of accounting, gets the same performance review form and topics. This leads to developers being rated on "meeting budgets" and vice presidents of accounting being rated on "meeting delivery dates".
Just about everyone fears and dislikes the process. Employees dread the annual (or semiannual) review. Managers take no joy in it either.
This is where AI may be attractive.
Instead of a human-driven process, a company may look for an AI-driven process. The human-administered process is rife with potential for inconsistencies (including favoritism) and opens the company to lawsuits. Instead of expending effort to enforce consistent criteria, HR may choose to implement AI for performance reviews. (Managers may have little say in the decision, and many may be secretly relieved at such a change.)
This is a possibly horrifying concept. The mere idea of a computer (which is what AI is, at bottom) rating and ranking employees may be unwelcome among the ranks. The fear of "computer overlords" from the 1960s is still with us, and I suspect few companies would want to be the first to implement such a system.
I recognize that such a system cannot work in a vacuum. It would need input, starting with a list of job responsibilities, assigned tasks and deadlines, and status reports. Early versions will most likely get many things wrong. Over time, I expect they will improve.
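To make that concrete, here is a purely hypothetical sketch of what those inputs might look like and one crude signal such a system might compute. Every name, field, and metric here is my own invention for illustration, not any vendor's actual format:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical input records an AI review system might consume.
@dataclass
class Task:
    description: str
    deadline: date
    completed_on: date | None = None  # None means the task is still open

@dataclass
class EmployeeRecord:
    name: str
    responsibilities: list[str]
    tasks: list[Task] = field(default_factory=list)
    status_reports: list[str] = field(default_factory=list)

def on_time_rate(record: EmployeeRecord) -> float:
    """One crude signal: the fraction of completed tasks finished by their deadline."""
    done = [t for t in record.tasks if t.completed_on is not None]
    if not done:
        return 0.0
    on_time = sum(1 for t in done if t.completed_on <= t.deadline)
    return on_time / len(done)
```

A real system would need far richer inputs than this, and would have to weigh soft skills that do not reduce to dates and counts -- which is exactly why early versions will get many things wrong.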
Should we move to AI for performance reviews, I have some observations.
First, AI performance review systems may move outside of companies. Just as payroll processing is often outsourced, performance review systems might be outsourced too. The driver is risk avoidance: a company that builds its own performance review AI may inadvertently build in subtle discrimination against women or minorities. An external supplier would have to warrant that its system conforms to anti-discrimination laws -- a benefit to the client company.
Second, automating performance reviews could mean more frequent reviews, and more frequent feedback to employees. The choice of annual as a frequency for performance reviews is driven, I suspect, by two factors. First, reviews are needed to justify changes in compensation. Second, they are expensive to administer. The former mandates at least one review per year; the latter discourages anything more frequent.
But automating performance reviews should reduce effort and cost -- or at least reduce the marginal cost of reviews beyond the annual one.
Another result of more frequent performance reviews? More frequent information to management about the state of their workforce.
In sum, AI offers a way to reduce cost and risk in performance reviews. It also offers more frequent feedback to employees and more frequent information to management. I see advantages to the use of AI for this despised task.
Now all we need to do is bell the cat.
Sunday, February 12, 2017
Databases, containers, and Clarke's first law
A blog post by a (self-admitted) beginner engineer rants about databases inside of containers. The author lays out the case against using databases inside containers, pointing out potential problems from security to configuration time to the problems of holding state within a container. The argument is intense and passionate, although a bit difficult for me to follow. (That, I believe, is due to my limited knowledge of databases and my even more limited knowledge of containers.)
I believe he raises questions which should be answered before one uses databases in containers. So in one sense, I think he is right.
In a larger sense, I believe he is wrong.
For that opinion, I refer to Clarke's first law, which states: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
I suspect that it applies to sysadmins and IT engineers just as much as it does to scientists. I also suspect that age has rather little effect. Our case is one of a not-elderly not-scientist claiming that running databases inside of containers is impossible, or at least a Bad Idea and Will Lead Only To Suffering.
My view is that containers are useful, and databases are useful, and many in the IT field will want to use databases inside of containers. Not just run programs that access databases on some other (non-containerized) server, but host the database within a container.
Not only will people want to use databases in containers, there will be enough pressure and enough interested people that they will make it happen. If our current database technology does not work well with containers, then engineers will modify containers and databases to make them work. The result will be, quite possibly, different from what we have today. Tomorrow's database may look and act differently from today's databases. (Just as today's phones look and act differently from phones of a decade ago.)
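In fact, much of the machinery for hosting a database in a container exists already, provided the database's state lives outside the container in a volume. Here is a minimal sketch, assuming Docker's Python SDK (the `docker` package) and a running Docker daemon; the container name, password, and volume name are placeholders of my own choosing:

```python
import docker  # Docker SDK for Python; requires a running Docker daemon

client = docker.from_env()

# Keep the database's state in a named volume, outside the container itself,
# so the container can be replaced or upgraded without losing data.
client.volumes.create(name="pgdata")

client.containers.run(
    "postgres:15",                     # official PostgreSQL image
    name="containerized-db",           # placeholder name
    detach=True,
    environment={"POSTGRES_PASSWORD": "example-password"},  # placeholder
    ports={"5432/tcp": 5432},
    volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```

Whether doing this in production is wise is exactly the question the original post raises; the point is only that nothing in today's tooling prevents it, and the pressure to make it work well will only grow.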
Utility is one of the driving features of technology. Containers have it, so they will be around for a while. Databases have it (they've had it for decades) and they will be around for a while. One or both may change to work with the other.
We'll still call them databases, though. The term is useful, too.
Monday, February 6, 2017
Software development and economics
One of the delights of working in the IT field is that it interacts with so many other fields. On one side are the "user" areas: user interface design, user experience, and don't forget accessibility and Section 508 compliance. On the other side are hardware, networking, latency, CPU design, caching and cache invalidation, power consumption, power dissipation, and photolithography.
And then there is economics. Not the economics of buying a new server, or the economics of cloud computing, but "real" economics, the kind used to analyze nations.
Keynesian economics, in short, says that during an economic downturn the government should spend money even if it means accumulating debt. By spending, the government keeps the economy going and speeds the recovery. Once the economy has recovered, the government reduces spending and pays down the debt.
Thus, Keynesian economics posits two stages: one in which the government accumulates debt and one in which the government reduces debt. A "normal" economy will shift from recession to boom (and back), and the government should shift from debt accumulation to debt payment (and back).
It strikes me that this two-cycle approach to fiscal policy is much like Agile development.
The normal view of Agile development is the introduction of small changes, prioritized and reviewed by stakeholders, and tested with automated means. Yet a different view of Agile shows that it is much like Keynesian economics.
If the code corresponds to the economy, and the development team corresponds to the government, then we can build an analogy. The code shifts from an acceptable state to an unacceptable state, due to a new requirement that is not met. In response, the development team implements the new requirement, but does so quickly and messily, leaving code that needs to be refactored. At this point, the development team has incurred technical debt.
But since the requirement has been implemented, the code is now in an acceptable state. (That is, the recession is over and the economy has recovered.) At this point, the development team must pay down the debt, by improving the code.
The two-cycle operation of "code and refactor" matches the economic version of "spend and repay".
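As a toy illustration (entirely hypothetical code, not taken from any real project), the "spend" phase might bolt a new requirement onto existing code by copy-and-paste, and the "repay" phase might refactor the duplication away without changing behavior:

```python
# "Spend" phase: the new requirement (a discounted price) is bolted on quickly.
# It works, but the duplicated tax logic is the technical debt.
def price_with_tax(amount: float) -> float:
    return round(amount * 1.06, 2)

def discounted_price_with_tax(amount: float) -> float:
    return round(amount * 0.90 * 1.06, 2)   # copy-and-paste of the tax logic

# "Repay" phase: a later refactoring pays down the debt by pulling the shared
# logic into one place, while producing the same results as before.
TAX_RATE = 1.06

def apply_tax(amount: float) -> float:
    return round(amount * TAX_RATE, 2)

def discounted(amount: float, rate: float = 0.10) -> float:
    return apply_tax(amount * (1 - rate))
```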
The economists have it easy, however. Economic downturns occur, but economic recoveries provide a buffer time between them. Development teams must face stakeholders who, once they have a working system, too often demand additional changes immediately. There is no natural "boom time" to allow the developers to refactor the code. Only strong management can enforce a delay to allow for refactoring.