Tuesday, November 26, 2013

Microsoft generally doesn't innovate

The cool reception of Microsoft's Surface tablets, Windows 8, and Windows RT has people complaining about Microsoft's (in)ability to innovate. Somehow, people have gotten the idea that Microsoft must innovate to stay competitive.

Looking at the history of Microsoft products, I cannot help but think that innovation has been only a small part of their success. Microsoft has been successful due to its marketing, its contracts, its monopoly on Windows, and its proprietary formats.

Still not convinced? Consider these Microsoft products, and their "innovativeness":

  • MS-DOS: purchased from another company
  • C compiler: purchased from Lattice, initially
  • Windows: a better version of OS/2, or MacOS, or AmigaDOS
  • SourceSafe: purchased from another company
  • Visual Studio: a derivation of their earlier IDEs, which were themselves modeled on Borland's Turbo Pascal
  • C# and .NET: a copy of Java and the JVM
  • Windows RT: a variant of Apple's iOS
  • Azure: a product to compete with Amazon.com's web services and cloud
  • the Surface tablet: a variant on Apple's iPad
  • Word: a better version of WordPerfect
  • Excel: a better version of Microsoft's Multiplan, made to compete with Lotus 1-2-3
  • the Xbox: a game console to compete with Sony and Nintendo game consoles

I argue that Microsoft is an excellent copier of ideas, not an innovator. None of these products was a Microsoft innovation.

Some might observe that the list of Microsoft products is much longer than the one I have presented. Others may observe that Microsoft continually improves its products, especially after purchasing one from another company. (That is certainly the case for their C compiler, Visual SourceSafe, and C#.)

To be fair, I should list the innovative products from Microsoft:

  • Microsoft BASIC
  • Visual BASIC

Microsoft is not devoid of innovation. But innovation is not Microsoft's big game. Microsoft is better at copying existing products and technologies, re-casting them into Microsoft's own product line, and improving them over time. Those are their strengths.

People may decry Microsoft's lack of innovation. But this is not a new development. Over its history, Microsoft has focussed on other strategies, and gotten good results.

I don't worry about Windows 8 and the Surface tablets being "non-innovative". They are useful products, and I have confidence in Microsoft's abilities to make them work for customers.

Monday, November 25, 2013

The New Aristocracy

The workplace is an interesting study for psychologists. It has many types of interactions and many types of stratifications of employees. The divisions are always based on rank; the demonstrations of rank are varied.

I worked in one company in which rank was indicated by the type, size, and location of one's workspace. Managers were assigned offices (with doors and solid walls), senior technical people were assigned cubicles next to windows, junior technical employees were assigned cubicles without windows, and contract workers were "doubled up" in windowless cubicles.

In another company, managers were issued color monitors and non-managers were issued (cheaper) monochrome monitors.

We excel at status symbols.

The arrival of tablets (and tablet apps) gives us a new status symbol. It allows us to divide workers into those who work with keyboards and those who work without keyboards. The "new aristocracy" will be, of course, those who work without keyboards. They will be issued tablets, while the "proletariat" will continue to work with keyboards.

I don't expect that this division will occur immediately. Tablets are quite different from desktop PCs and the apps for tablets must be different from desktop apps. It will take time to adapt our current applications to the tablet.

Despite their differences, tablets are -- so far -- much better at consuming information, while PCs are better at composing information. Managers who use information to make decisions will be able to function with tablets, while employees who must prepare the information will continue to do that work on PCs.

I expect that the next big push for tablet applications will be those applications used by managers: project planning software, calendars, dashboards, and document viewers.

The new aristocracy in the office will be those who use tablets.

Thursday, November 21, 2013

The excitement of new tech

GreenArrays has introduced the GA144 chip, which contains 144 F18 processors. They also have a prototyping circuit board for the GA144. These two offerings intrigue me.

The F18 is a processor that uses Forth as its instruction set. That in itself is interesting. Forth is a small, stack-oriented language, initially developed in the 1960s (Wikipedia asserts the origin at 1958) and created to run on diverse architectures. Like C, it is close to hardware and has a small set of native operations. The Forth language lets the user define new "words" and build their own language.
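To give a flavor of that model, here is a toy stack evaluator written in Python (my own sketch for illustration; it is not the F18 instruction set, and the "square" word is an invented example):

    # A toy illustration of Forth's model: a data stack plus a dictionary of "words".
    stack = []

    def push(n):
        stack.append(n)

    words = {
        "+":   lambda: push(stack.pop() + stack.pop()),
        "*":   lambda: push(stack.pop() * stack.pop()),
        "dup": lambda: push(stack[-1]),
        ".":   lambda: print(stack.pop()),
    }

    def define(name, body):
        # Like Forth's ": name ... ;" -- a new word is a sequence of existing words.
        words[name] = lambda: run(body)

    def run(program):
        for token in program.split():
            if token in words:
                words[token]()      # execute a known word
            else:
                push(int(token))    # anything else is a numeric literal

    define("square", "dup *")
    run("7 square 1 + .")           # prints 50

Every operation takes its arguments from the stack and leaves its result there, which is why the language can stay so close to the hardware.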

The GA144 has 144 of these processors.

The F18 and the GA144 remind me of the early days of microcomputers, when systems like the Mark-8 and the Altair were available. These "homebrew" systems existed prior to the "commercial" offerings of the Apple II and the Radio Shack TRS-80. They were new things in the world, unlike anything we had seen before.

We were excited by these new microcomputers. We were also ignorant of their capabilities. We knew that they could do things; we didn't know how powerful they would become. Eventually, the commercial systems adopted the IBM PC architecture and the MS-DOS operating system (later, Windows) and became ubiquitous.

I'm excited by the GA144. It's new, it's different, and it's potent. It is a new approach to computing. I don't know where it will take us (or that it will succeed at taking us anywhere) -- but I like that it offers us new options.

Wednesday, November 20, 2013

We need a new UML

The Object Management Group has released a new version of UML. The web site for Dr. Dobb's asks the question: Do You Even Care? It's a proper question.

It's proper because UML, despite a spike of interest in the late 1990s, has failed to move into the mainstream of software development. While the Dr. Dobb's article claims ubiquity ("dozens of UML books published, thousands of articles and blogs posted, and thousands of training classes delivered"), UML is anything but ubiquitous. If anything, UML has been ignored by the latest trends in software: agile development techniques and functional programming. It is designed for large projects and large teams that design the system up front and implement it according to detailed documents, while agile teams work in small increments. It is designed for systems built with mutable objects, and functional programming avoids both objects and mutable state.

UML was built to help us design and build large complex systems. It was meant to abstract away details and let us focus on the structure, using a standard notation that could be recognized and understood by all practitioners. We still need those things -- but UML doesn't work for a lot of projects. We need a new UML, one that can work with smaller projects, agile projects, and functional programming languages.

Sunday, November 17, 2013

The end of complex PC apps

Businesses are facing a problem with technology: PCs (and tablets, and smart phones) are changing. Specifically, they are changing faster than businesses would like.

Corporations have many programs that they use internally. Some corporations build their own software, others buy software "off the shelf". Many companies use a combination of both.

All of the companies with whom I have worked wanted stable platforms on which to build their systems and processes. Whether it was a complex program built in C++, a comprehensive model built in a spreadsheet, or an office suite (word processor, spreadsheet, and e-mail), companies wanted to invest their effort in their custom solutions. They did not want to spend money or time on upgrades and changes to the operating system or commercially available applications.

While they dislike change, corporations are willing to upgrade systems. Corporations want long upgrade cycles. They want gentle upgrade paths, with easy transitions from one version to the next. They were happy with the old Microsoft world: Windows NT, Windows 2000, and Windows XP were excellent examples of the long, gentle upgrades desired by corporations.

That is no longer the world of PCs. The new world sees fast update cycles for operating systems, with major updates that require changes to applications. Companies with custom-made applications have to invest time and effort in updating those applications to match the new operating systems. (Consider Windows Vista and Windows 8.) Companies with off-the-shelf applications have to purchase new versions that run on the new operating systems.

What is a corporation to do?

My guess is that corporations will recognize the cost of frequent change in the PC and mobile platforms, seek out other solutions with lower cost, and move their apps to those platforms.

If they do, then PCs will lose their hold on the development world. The PC platform will no longer be the primary target for applications.

What are the new platforms? I suspect the two "winning" platforms will be web apps (browsers and servers), and mobile/cloud (tablets and phones with virtualized servers). While the front ends for these systems undergo frequent changes, the back ends are relatively stable. The browsers for web apps are mostly stable and they buffer the app from changes to the operating system. Tablets and smart phones undergo frequent updates; this cost can be minimized with simple apps that can be updated easily.

The big trend is away from complex PC applications. These are too expensive to maintain in the new world of frequent updates to operating systems.

Thursday, November 14, 2013

Instead of simplicity, measure complexity

The IEEE Computer Society devoted their November magazine issue to "Simplicity in IT". Simplicity is a desirable trait, but I have found that one cannot measure it. Instead, one must measure its opposite: complexity.

Some qualities cannot be measured. I learned this lesson as a sysadmin, managing disk space for multiple users and groups. We had large but finite disk resources (resources are always finite), shared by several teams. Despite their size, the teams' combined usage exceeded what we had -- in other words, we "ran out of free space". My job was to figure out "where the space had gone".

I quickly learned that the goal of "where the space had gone" was the wrong one. It is impossible to measure, because space doesn't "go" anywhere. I substituted new metrics: who is using space, how much, and how does that compare to their usage last week? These were possible to measure, and more useful. A developer who uses more than four times the space of the next heaviest user, and more than ten times that of the average developer, is (probably) working inefficiently.

The metric "disk space used by developer" is measurable. The metric "change in usage from last week" is also measurable. In contrast, the metric "where did the unallocated space go" is not.

The measure of simplicity is similar. Instead of measuring simplicity, measure the opposite: complexity. Instead of asking "why is our application (or code, or UI, or database schema) not simple?", ask instead "where is the complexity?"

Complexity in source code can be easily measured. There are a number of commercial tools, a number of open source tools, and I have written a few tools for my own use. Anyone who wants to measure the complexity of their system has tools available to them.

Measuring the change in complexity (such as the change from one week to the next) involves taking measurements at one time and storing them, then taking measurements at a later time and comparing them against the earlier measurements. That is a little more involved than merely taking measurements, but not much more.

Identifying the complex areas of your system gives you an indicator. It shows you the sections of your system that you must change to achieve simplicity. That work may be easy, or may be difficult; a measure of complexity merely points to the problem areas.

* * * *

When I measure code, I measure the following:

  • Lines of code
  • Source lines of code (non-comments)
  • Cyclomatic complexity
  • Boolean constants
  • Number of directly referenced classes
  • Number of indirectly referenced classes
  • Number of directly dependent classes
  • Number of indirectly dependent classes
  • Class interface complexity (a count of member variables and public functions)

I find that these metrics let me quickly identify the "problem classes" -- the classes that cause the most defects. I can work on those classes and simplify the system.
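Several of these need only a line-by-line scan of the source. Here is a rough sketch in Python for the first few metrics (the comment rules are deliberate simplifications, and counting branch keywords only approximates cyclomatic complexity):

    import re, sys

    BRANCH_KEYWORDS   = re.compile(r"\b(if|elif|for|while|case|catch|except)\b")
    BOOLEAN_CONSTANTS = re.compile(r"\b(true|false|True|False)\b")

    def measure(filename):
        lines = open(filename, errors="replace").read().splitlines()
        # Crude comment filter: good enough for a first pass, not for a parser.
        code = [ln for ln in lines
                if ln.strip() and not ln.strip().startswith(("//", "#", "*", "/*"))]
        return {
            "lines of code":        len(lines),
            "source lines of code": len(code),
            "cyclomatic (approx)":  1 + sum(len(BRANCH_KEYWORDS.findall(ln)) for ln in code),
            "boolean constants":    sum(len(BOOLEAN_CONSTANTS.findall(ln)) for ln in code),
        }

    if __name__ == "__main__":
        for filename in sys.argv[1:]:
            print(filename)
            for metric, value in measure(filename).items():
                print(f"  {metric}: {value}")

Run it weekly, keep the output, and you have the trend as well as the snapshot.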

Tuesday, November 12, 2013

Measuring technical debt is not enough

I've been working on the issue of technical debt. Technical debt is a big problem, one that affects many projects. (In short, technical debt is the result of short-cuts, quick fixes, and poor design. It leads to code that is complicated, difficult to understand, and risky to change.) Technical debt can exist within source code, a database schema, an interface specification, or any other aspect of a technical product. It can be large or small; it tends to start small and grow over time.

Technical debt is usually easy to identify. Typical indicators are:

  • Poorly named variables, functions or classes
  • Poorly formatted code
  • Duplicate code
  • An excessive use of 'true' and 'false' constants in code
  • Functions with multiple 'return' statements
  • Functions with any number of 'continue' statements
  • A flat, wide class hierarchy
  • A tall, narrow class hierarchy
  • Cyclic dependencies among classes

Most of these indicators can be measured automatically. (The meaningfulness of names cannot.) Measuring these indicators gives you an estimate of your technical debt. Measuring them over time gives you an estimate of the trajectory of technical debt -- how fast it is growing.
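For code written in Python, the standard library's ast module is enough to flag a few of these indicators (a sketch; the boolean-constant threshold is arbitrary, and other languages would need their own parsers):

    import ast, sys

    def flag_functions(filename):
        # Flag functions with multiple returns, any 'continue', or many boolean constants.
        tree = ast.parse(open(filename).read(), filename)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                returns   = sum(isinstance(n, ast.Return) for n in ast.walk(node))
                continues = sum(isinstance(n, ast.Continue) for n in ast.walk(node))
                booleans  = sum(isinstance(n, ast.Constant) and isinstance(n.value, bool)
                                for n in ast.walk(node))
                if returns > 1 or continues > 0 or booleans > 2:
                    print(f"{filename}:{node.lineno} {node.name}: "
                          f"{returns} return(s), {continues} continue(s), "
                          f"{booleans} boolean constant(s)")

    if __name__ == "__main__":
        for filename in sys.argv[1:]:
            flag_functions(filename)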

But measurements are not enough. What it comes down to is this: I can quantify the technical debt, but not the benefits of fixing it. And that's a problem.

It's a problem because very few people find such information appealing. I'm describing (and quantifying) a problem, and not quantifying (or even describing) the benefits of the solution. Who would buy anything with that kind of a sales pitch?

What we need is a measure of the cost of technical debt. We need to measure the effect of technical debt on our projects. Does it increase the time needed to add new features? Does it increase the risk of defects? Does it drive developers to other projects (thus increasing staff turnover)?

Intuitively, I know (or at least I believe) that we want to reduce technical debt. We want our code to be "clean".

But measuring the effect of technical debt is hard.

Monday, November 11, 2013

Cheap hardware means meetings are expensive

In the early days of computing, everything was precious. Hardware was expensive -- really expensive. Processors, memory, storage, and communications were so expensive that only large businesses and governments could afford computers. (Programmers were relatively cheap.)

Over time, the cost of hardware dropped, and the cost of people rose. In the 1970s and 1980s, there were several charts that showed two lines: a steadily decreasing line for hardware cost and a steadily increasing line for the cost of programmers.

Today, hardware is cheap. Memory is cheap, storage is cheap, network bandwidth is cheap, and even processors are cheap. The Raspberry Pi computer sells for about $30; add a few items and you have a complete system for under $200. Moreover, the GreenArrays GA144 chip offers 144 processors for $20; a usable system for an experimenter will run about $500.

My point is not that hardware has become cheap.

My point is that in the early days, when hardware was expensive, we formed processes and ideas for project management, and those ideas were based on the notion of expensive hardware. Our practices for program design were made to minimize the use of expensive resources. A sound idea -- at the time.

At that time, hardware was expensive and people were cheap. It was better to hold meetings to discuss ideas before testing them on the (expensive) hardware. It was better to hold design reviews. It was better to think and discuss before experimenting. Those ideas are the basis for project management today. Even with Agile Development techniques, the team decides on the features prior to the sprint.

With plentiful hardware (and plentiful software from open source), the cost equations have changed. It is now cheap to experiment and expensive to hold meetings. It is especially expensive to hold meetings in a single place, with all attendees present. The attendees have to travel to the meeting, they have to sit through sections of the meeting that don't apply to them, and they have to return. This cost is (relatively) low when all attendees are already co-located and the meeting is short; the cost grows as you add attendees and topics.

Technology has reduced the cost of hardware. Agile techniques and global competition have reduced the cost of software development. Now we face the cost of meetings. Is anyone measuring the cost of meetings? How much of a project's budget is spent on meetings? Once this number is known, I suspect that meetings will become a target for cost reduction.
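A back-of-the-envelope calculation makes the point (the figures here are invented for illustration only):

    # Rough annual cost of one recurring meeting; every figure is hypothetical.
    attendees         = 8
    loaded_rate       = 75     # dollars per person-hour, salary plus overhead
    hours_per_meeting = 1.5    # including travel to and from the conference room
    meetings_per_year = 50

    annual_cost = attendees * loaded_rate * hours_per_meeting * meetings_per_year
    print(f"${annual_cost:,.0f} per year")    # $45,000 per year, for one weekly meeting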

Which is not to say that projects need no coordination. Far from it. As projects become larger and more ambitious, coordination becomes more important.

But the form of coordination doesn't necessarily have to be meetings.

Wednesday, November 6, 2013

More was more, but now less is more

IBM and Microsoft built their empires with the strategy "bigger and more features". IBM mainframes, over time, became larger (in terms of processor speed and memory capacity) and included more features. Microsoft software, over time, became larger (in terms of capacity) and included more features.

It was a successful strategy. IBM and Microsoft could win any "checklist battle" which listed the features of products. For many managers, the product with the largest list of features is the safest choice. (Microsoft and IBM's reputations also helped.)

One downside of large, complicated hardware and large, complicated software is that it leads to large, complicated procedures and data sets. Many businesses have developed their operating procedures first around IBM equipment and later around Microsoft software. When developing those procedures, it was natural to, over time, increase the complexity. New business cases, new exceptions, and special circumstances all add to complexity.

Businesses are trying to leverage mobile devices (tablets and phones) and finding that their long-established applications don't "port" easily to the new devices. They are focussing on the software, but the real issue is their processes. The complex procedures behind the software are making it hard to move business to mobile devices.

The user interfaces on mobile devices limit applications to much simpler operations. Perhaps our desire for simplicity comes from the size of the screen, or the change from mouse to touch, or from the fact that we hold the devices in our hands. Regardless of the reason, we want mobile devices to have simple apps.

Complicated desktop applications, with drop-down menus, multiple dialogs, and oodles of options, simply do not "work" on a mobile device. We saw this with early hand-held devices such as the popular Palm Pilot and the not-so-popular Microsoft PocketPC. Palm's simple operation won over the more complex Windows CE.

Simplicity is a state of mind, one that is hard to obtain. Complicated software tempts one into complicated processes (so many fonts, so many spreadsheet formula operations, ...). Mobile devices demand simplicity. With mobile, "more" may be more, but it is not better. The successful businesses will simplify their procedures and their underlying business rules (perhaps the MBA crowd will prefer the words "streamline" or "optimize") to leverage mobile devices.


Monday, November 4, 2013

For local storage, we get what we want

I hear many complaints about IT equipment, but I have heard few complaints about the cost of storage (that is, disk drives). It wasn't always this way.

In the early microcomputer era, storage was cassette tape. If you were wealthy, storage was floppy disk. Floppy disk systems (and media) were not cheap. They cost both money and time; once you had the hardware, you needed to write the software to use it. (CP/M fixed some of that.)

The early hard drives were large, hulking beasts that required special power supplies, dedicated cabinets, and extra care. My first hard drive was a 10 Megabyte disk the size of an eight-inch floppy drive (or an old-style, paper edition of Webster's Dictionary). The cabinet (which contained the hard drive, an actual eight-inch floppy drive, and power supply) was the size of a microwave oven and weighed more than fifty pounds. The retail price was $5000 -- in 1979 dollars, more than an automobile.

Beyond the money, the time necessary to configure such a drive was non-trivial. Operating systems at the time could access at most 8 Megabytes per volume. You could not use the entire hard drive at one time. Hard drives of 10 Megabytes were partitioned into smaller volumes with special software. These partitioning operations also took time.

People who had hard drives really wanted them. People who didn't have them complained about the cost. People who did have them complained about the time to configure them.

Over time, hard disks became smaller in physical size and required less power. Today, the "standard" hard drive is either a 3.5-inch or 2.5-inch drive that holds a Terabyte and costs less than $200 (in 2013 dollars). Adding such a drive to your existing PC is easy: plug it in and the operating system detects it automatically. Operating systems can address most commonly available hard drives; partitioning is no longer necessary. The "cost" of a hard drive, in terms of money and time, is trivial compared to the prices of that earlier age.

Which leads to a question: If we were, earlier, willing to spend time and money (lots of time and lots of money) on a hard drive, why are we unwilling to spend that time and money now? We could, after all, create disk arrays with huge amounts of storage (petabytes) by ganging together multiple hard drives into a cabinet. Fifty pounds of hard drives, power supply, and interface electronics could store lots and lots of data.

But we don't. We accept the market solution of 2 Terabyte drives and live with that.

The market for the early hard drives was the tinkerers and hackers. These were the folks who enjoyed configuring systems and re-writing operating systems. Today, those are a small percentage of the PC market. Current plug-compatible hard drives of 2 Terabytes or less are, for most people, good enough. These two factors tell me that capacity is good, but convenience is better.

We live with what the market offers (except for a few ornery hackers). We live with the trade-off between cost and convenience. I think we recognize that trade-off. I think we understand that we are living with certain choices.

And I think that we don't complain, because we understand that we have made that choice.