Saturday, August 10, 2013

Software can work, really

Facebook and Twitter have shown us that big customer bases are possible. They have also shown us that the software for big customer bases must be reliable, predictable, and without defects.

A long time ago, I worked on a project for a large company that provided software to customers. That software let the customers do business with the company, and let the company get clean data about transactions. The software was, in effect, a large data entry system (complete with verification).

The software ran on PC-DOS and MS-DOS. We issued updates on floppy disks (and performed some gymnastics to keep the updates to one disk). We had a help desk to provide around-the-clock support to customers. Our software was complicated (partly because the business rules were complicated) and not always easy to use. A help desk was necessary, to keep users happy (and using the software).

We provided this software to 5000 customers, and we thought it was a large customer base. (And it was, compared to a lot of other companies at the time.)

One day, our project manager announced a new goal for our customer base: 50,000 customers. We, the development team, were stunned. But despite our amazement, we did expand the customer base. Such an expansion required a number of changes: we delivered updates electronically, not on floppy disk. We added remote diagnostics to the software. We automated our tests. But some things remained unchanged: the business rules remained complicated, the software remained complicated, and the help desk remained in operation. We expanded it to handle the increased number of calls from the larger user base.

That help desk was expensive.

Compared to today's big social media businesses, a customer base of 50,000 is tiny. Facebook is growing towards 1 billion customers. Twitter and LinkedIn have user bases in the multi-hundred-million range.

And they do *not* have help desks.

With a user base of one billion, a failure rate of one percent would result in ten million calls. Staffing a help desk to answer and resolve that many calls (even spread over a few days) is not merely expensive; it is horribly expensive.
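To make the scale concrete, here is that arithmetic as a quick script. Only the user count and failure rate come from the text; the per-call cost is a hypothetical figure, added purely for illustration:

```python
# Back-of-the-envelope help-desk load for a one-billion-user service.
users = 1_000_000_000        # user base (from the text)
failure_rate = 0.01          # one percent of users hit a problem (from the text)
calls = int(users * failure_rate)

# Assume a hypothetical $5 cost to answer and resolve each call.
cost_per_call = 5
total_cost = calls * cost_per_call

print(f"{calls:,} calls")        # 10,000,000 calls
print(f"${total_cost:,}")        # $50,000,000
```

Even at a modest per-call cost, the total lands in the tens of millions of dollars, which is the point: at this scale, support must be designed out of the product.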

In the mobile world, users must be able to use the software without assistance. But going without a help desk has a price: The software must "just work". If the software fails (for any reason, including server outages), customers abandon it and don't come back.

This is what big user-base operations have taught us: That software can "just work". Facebook, Twitter, and other companies with large numbers of users have successfully designed, built, and deployed software that is reliable and simple enough to use without a help desk (or help pages). Part of their success is due to simple business rules. But part of their success is that their software works as it should; they have developed software with high quality.

Wednesday, August 7, 2013

Microsoft adjusts the prices for Surface tablets

A lot has been said about the pricing of the Microsoft Surface tablets. Here's my view:

Microsoft introduced the Surface RT at $500 (well, $499) and the Surface Pro at $1000 (okay, $999).

In June, Microsoft announced a promotion: a free keyboard with the purchase of a Surface RT. The keyboard normally sold for $100, so June saw a discount of $100.

In July, Microsoft dropped the price of the Surface RT by $150 -- and cancelled the promotion for the free keyboard. In effect, Microsoft dropped the price by an additional $50. (For some reason, lots of the media claimed that this was "slashing the price".)

In August, Microsoft reduced the price of the Surface Pro by $100.

Looking at the trend, I would say that Microsoft is testing the market, gradually dropping the price and measuring demand. Economics 101, so to speak.

My guess is that Microsoft will keep measuring and dropping the price. Or perhaps dropping the price and removing features. They cannot change the hardware (assuming they sell the existing units), so they may remove software. The natural choice would be to remove the "no business use" version of Microsoft Office, but given the dearth of apps in the Microsoft app store, that would render the device all but useless. Could they remove something else?

Microsoft seems to be announcing changes every month. Let's see what September brings.

Thursday, August 1, 2013

A review of PCs from a tablet perspective

I had the opportunity to try several of the PC-type tablets that are now on the market. Wow, they are very different from the standard tablet. Here is my review.

I tried a Lenovo ThinkPad Edge "laptop", a Dell Inspiron "desktop", and an Apple MacBook "laptop". For the ThinkPad and Inspiron, I used Microsoft Windows 7 and Ubuntu Linux. The MacBook ran Apple's MacOS.

I'm not sure where to begin with my review. The "PC experience" is quite different from the normal experience of a tablet.

The first difference is the size. PCs come in two basic styles: desktop and laptop. The laptop is similar to a tablet with a Bluetooth keyboard, except somewhat heavier. The keyboard is physically attached, which I found odd. I initially thought this was for protection (the keyboard is hinged and folds over the screen) and convenience in travelling (you won't lose the keyboard) but later I found that the real reason was quite different - various hardware is built under the keyboard, and wires connect the central circuitry to the screen.

The laptop flavor of a PC should really be called "desktop", since the only practical way to use it is on a desktop. One cannot separate the keyboard, and balancing the keyboard and screen on your lap is awkward at best. Both the Lenovo and the Apple PCs used this design.

The desktop version (the Dell Inspiron, but a quick survey shows that all brands use this design) is also incorrectly named. The screen is large -- too large to be portable. It requires its own stand, which places the screen at a comfortable viewing angle. (Most PC screens allow the user to adjust the height and viewing angle.) The desktop PC also includes a keyboard, one that is connected to the unit with a cable.

The name "desktop" is wrong because in addition to the screen and keyboard there is also a separate, large box that must be attached to the screen. (The keyboard connects to this box, not the screen.) This large box belongs not on a desktop but on the floor, and the users I talked with indicated that they all stored this box on the floor.

The desktop version of the PC is not portable. The combination of the large screen, separate keyboard, and large "processor" box is too cumbersome to carry. In addition, the screen and processor box both require power from 120VAC, and neither has the capability for battery operation.

The laptop versions of the PC are somewhat portable. They fold for carrying, and they have batteries for untethered use. (The manufacturers claim six to eight hours; users I spoke with reported three to five hours. My tests fell in line with the users' reports.)

The next big differences one notices are the screen, keyboard, and touch interface. The screen is large, with ample real estate for displaying apps. The keyboards are physical keyboards, not on-screen keyboards (in fact there is no support for on-screen keyboards). Physical keyboards took some getting used to, since the keys do travel and provide excellent tactile feedback. But being physical, they cannot change to reflect different modes or languages, with the result being more keys to handle special symbols and indicators to show "caps" mode. (There were some keys with unusual names such as "Print Screen", "Scroll Lock", and "Pause", but I found no use for them. Perhaps they are for future expansions?)

Another noticeable difference is that the screen does not support touch. This was frustrating, as I kept touching the screen and waiting for something to happen. After a few seconds, I realized that I had to use the keyboard or a touchpad (or mouse -- more on that later).

The Lenovo and Apple laptops came with built-in touchpads. These are small (3" by 4") pads below the keyboard that let you control a small "cursor" on the screen. The cursor is normally shaped as an arrow pointing in the north-by-northwest direction (some modes change this shape) and you can move the cursor by touching and swiping on the touchpad. Since the touchpad is relatively far from the screen, this design requires the ability to touch the pad while you look at the screen -- something that I suspect few people will want to learn.

The desktops did not use a touchpad, but instead had an extra device called a "mouse". (Where did they get that name?) It is a small, roughly half-sphere, object that one drags on a flat surface. It, too, controls a "cursor" on the screen, and it was harder to use than the touchpad! Proper use requires looking at the screen and holding the mouse off to the side, again using coordinated actions without looking at one of your hands. I found that my desk at home was a bit small for such a computer; I kept dragging the mouse off the edge of the desk.

The PC is not a complete disaster. All units I evaluated had a cable for internet access. I had to physically connect the units to my home router (finally understanding why it had those "extra" ports) and network access was fast and consistent. The Apple and Lenovo PCs (the laptops) also supported the standard wi-fi connections.

PCs have enormous memory, and apps can take advantage of it. My evaluation units all came with 4GB of RAM, which is actually on the small side for PCs. (RAM is temporary storage, not the usual memory we think of in tablets.) This capacity leads to apps that are much larger and more complex. More on apps later.

PCs also have enormous storage, which is the equivalent of a tablet's normal memory. My evaluation units came with 300GB to 500GB! The sheer amount boggles me. (Although to be honest, I'm not sure why one needs so much storage. With the fast and reliable network connection, one could easily push data to servers, without using local storage.)

A few more things about hardware, before I move on to operating systems and apps: PCs have lots of ports for accessory devices. Perhaps this is a result of their size; they can afford the space for circuitry and jacks. The PC seems designed for external hardware; the keyboard and "mouse" must be connected through these ports.

The laptop units had built-in forward-facing cameras; the desktop PCs had none, although they can add a camera as an extra device (using one of the ports).

None of the units had accelerometers, compasses, or GPS antennae. For the "desktop" units, that makes sense as they are made to be stationary. I'm not sure why they were omitted from the "laptop" PCs which theoretically could move, and certainly have the space for them.

I tried three operating systems: Microsoft Windows, Apple MacOS, and Ubuntu Linux. All are quite similar, and all are significantly different from the typical tablet operating system.

The Lenovo and Dell computers came with Windows 7 pre-installed. The Apple came with Apple MacOS pre-installed. I installed Linux on the Lenovo and Dell, using a technique called "partitioning". This technique lets you allocate the PC's storage between the two operating systems. (With 300GB of storage, there is a lot to go around.)

A "partitioned" system presents a menu when started, letting you select which operating system you want. The menu has a twenty-second timeout, starting Ubuntu if you take no action. (I think that this is configurable.)
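For what it's worth, the boot menu on an Ubuntu install of this era comes from the GRUB boot loader, and the timeout is indeed configurable. A sketch, assuming GRUB 2 and its standard Ubuntu configuration file:

```shell
# /etc/default/grub -- boot menu settings on an Ubuntu install
GRUB_TIMEOUT=20     # seconds to wait before booting the default entry
GRUB_DEFAULT=0      # 0 = the first menu entry (here, Ubuntu)

# After editing, regenerate the menu:
#   sudo update-grub
```

Lowering GRUB_TIMEOUT (or changing GRUB_DEFAULT) changes which operating system starts when you take no action at the menu.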

All three PC operating systems use a desktop metaphor. The main screen contains icons for apps, and you start an app not by touching the icon (remember, the screen doesn't support touch!) but by dragging the mouse cursor to an icon and double-clicking on it.

With the large screen, apps don't fill the entire screen but take only a portion of it. The app displays a "window" (a term used by all three operating systems, not just Microsoft Windows) and you can run several apps at the same time. This is a nice feature of PCs, as you can see the status of multiple apps at the same time. (Although too many apps at once can be overwhelming.)

The smaller-than-screen size of apps also lets you move app windows on your "desktop". A complicated sequence of moving the mouse, pressing and long-holding a button, moving the mouse while long-holding, and then releasing the button lets you move windows on the screen. This lets you arrange apps to your liking and move important apps to prominent locations.

The different operating systems had different ideas about app purchases. Linux has a store for selecting and purchasing apps, much like a typical tablet. Apple MacOS has an "App Store" but many apps are not available through it and must be purchased separately. For Microsoft's Windows 7, all apps must be purchased separately. I found the Linux arrangement the most friendly, since there is one place to go for apps. [Edit: I later learned that in Linux you can also download apps from other sources.]

The lack of a central store for apps leads to another difference: updates. Without the central store to coordinate versions of apps, each app must check for its own updates. I can't imagine why anyone would want to distribute software without the infrastructure of an app store; doing so requires duplicating code to check versions, download updates, and apply updates in every app! It seems to put a large burden on the app development team (and the testing team).
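As a sketch of that duplicated effort, here is the kind of version-check logic every store-less app ends up carrying on its own. The endpoint URL and version scheme are hypothetical, invented for illustration:

```python
# Minimal self-update check that each store-less app must reimplement.
import json
import urllib.request

CURRENT_VERSION = (1, 4, 0)
UPDATE_URL = "https://example.com/myapp/latest.json"  # hypothetical endpoint

def parse_version(text):
    """Turn a string like '1.5.2' into (1, 5, 2) so versions compare numerically."""
    return tuple(int(part) for part in text.split("."))

def update_available():
    """Fetch the latest published version and compare it to ours."""
    with urllib.request.urlopen(UPDATE_URL) as response:
        latest = json.load(response)["version"]
    return parse_version(latest) > CURRENT_VERSION

# ...and beyond the check, every app must also download the package,
# verify it, and apply it -- logic a central store provides once, for all apps.
```

Multiply that by every app on the machine, plus the testing of each app's variant, and the burden the post describes becomes clear.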

All three operating systems handled updates for themselves. Windows, MacOS, and Linux all automatically found, downloaded, and applied updates. Ubuntu Linux, with its store, considered the OS update to be "just another update" and bundled it into a list with app updates. Windows and MacOS handled OS updates but did nothing for apps. (I suspect the MacOS app store would handle updates for apps, but I had none during my evaluation.)

PC apps tend to focus on office work, and given the hardware, this is no surprise. The physical keyboard excels at text entry, and the lack of geolocation services removes a number of apps from the PC's repertoire. An app such as FourSquare is not possible without location services, and Facebook is limited without a camera.

In conclusion, I find the idea of the PC misguided: its powerful hardware is torn between local applications (processor and storage) and normal service-based apps (reliable and fast network). The absence of touch support for the screen and the physical keyboard pushes one to text-oriented data, and the clumsy touchpad (or even worse, mouse) pushes one away from UI operations. Forcing users to hunt down apps without a central store places a burden on the users. Forcing apps to update themselves places a burden on developers.

Sunday, July 28, 2013

The style curve

At a recent conference, a fellow attendee asked about best practices for the live tiles on Windows 8.

Live tiles are different from the standard icons in that they can show information and change over time. Windows 8 comes with a number of live tiles: the clock, news, Bing search, and entertainment are a few.

For the question of best practices, my take is that we're too early in what I call the "style curve" of Windows live tiles. The style curve is similar to the "hype curve", in which new technologies are born, receive some hype, then become disparaged as they fail to cure all of our ills, and finally become accepted as useful. See more about the hype curve on Wikipedia.

The style curve applies to new technologies, and is similar to the hype curve in that a technology is created, given lots of attention, disparaged, and then accepted. Here are the phases:

Creation: The technology is created and made available.

Experimentation: People test out the new technology and test its limits.

Overuse: People adopt the new technology and use it, but with poor judgement. They use it for too many things, or too many situations, or with too many combinations.

Avoidance: People dislike the overuse (or the poor taste) and complain. Some actively avoid the new technology.

Best practices: A few folks use the technology with good taste. They demonstrate that the technology can be used without offending people's sensibilities. The techniques they use are dubbed "best practices".

Acceptance: The techniques of restrained use (the best practices) are adopted by most folks.

Previous technologies have followed this curve. Examples include typefaces and fonts in desktop publishing (and later word processing) and animated images in web pages.

Some readers will remember the early days of the web and some of the garish designs that were used. The memories of spinning icons and blinking text may still be painful. This was the "overuse" phase of the style curve for web pages. Several shops banned outright the use of the blink tag -- the "avoidance" phase. Now people understand good design principles for web pages. (Which do not include the blink tag, thankfully.)

Desktop publishing, powered by Windows and laser printers, allowed people to use a multitude of typefaces and fonts in their documents. And use them they did. Today we use a limited set of typefaces and fonts in any one document, and shops have style guides.

Coming back to live tiles, I think we are at the "experimentation" phase of the style curve. We don't know the limits on live tiles and we don't know the best practices. We have to go through the "overuse" and "avoidance" phases before we can get to "best practices". In other words, the best practices are a matter of knowing what not to do. But we have to try everything to see what works and what doesn't work.

Be prepared for some ugly, garish, and annoying live tiles. But know that style will arrive in the future.

Friday, July 26, 2013

Projects must move and grow

Software development projects are like people, in that they grow over time. Technology changes around us, and we must allow for changes to our projects.

The things that change are:

  • Tools, such as editors and compilers
  • Processes, such as design reviews and code walk-throughs
  • Source code

Tools evolve over time. New versions of compilers are released. Sometimes a new version will break existing code. The C++ standards committee works very hard to prevent such breakages. (Although they have made some changes that broke code, after long deliberations.)

A project that practices good hygiene will upgrade its tools. The change does not have to be immediate, but it should happen within a reasonable period of time. (Perhaps six months.)

The "best practices" for software development change over time. Often these changes are made possible with the invention of tools or the release of a new product. Past changes have included the use of version control, the use of lint utilities, code reviews, and automated testing. None of these were available (cheaply) in the 1990s; all of them are today. Version control has undergone several generations, from the early PVCS and CVS systems to SourceSafe and Subversion and today's TFS and 'git'.

The source code changes over time, too. Not just for the addition of new features, but improvements to the code. Programming techniques, like tools and best practices, change over time. We moved from procedural programming to object-oriented programming. We've developed patterns such as "Model View Controller" and "Model View ViewModel" which help organize our code and reduce complexity.

Changes for tools, techniques, and source code take time and effort. They must be planned and incorporated into releases. They entail risk; any change can introduce a defect. To make matters worse, such changes are "internal" and offer no direct benefit to the users. The changes are for the benefit of the development team.

I have seen a number of projects start with the then-current set of tools and techniques, only to become established and stay with those tools and techniques. The once-modern project ages into a legacy effort. It is a trap that is all too easy to fall into: the demand for new features and bug fixes overwhelms the team, and there is no time for non-revenue improvements.

The "no return on investment" argument is difficult to counter. Given finite resources and the choice between a feature that provides revenue against a change that provides no revenue, it is sensible to go with the revenue feature.

Without these internal changes, the project cost rises. The increases are caused by two factors: code complexity and the difficulty of hiring staff.

Over time, changes (especially rushed changes) increase the complexity of the code and changes become much more difficult. The code, once neat and organized, becomes messy. Features are added quickly, with compromises made to quality for delivery time. Each additional change adds to the "mess" of the code.

The world of software development advances, but the project remains stuck in its original era. The tools, initially the most modern, age. They become yesterday's tools and techniques.

Another problem is staffing. Few developers are willing to work on a project that has hard-to-maintain code, old tools, and outdated processes. The few who are willing will do so only at elevated rates. This increases the cost of future maintenance.

Allocating time and effort (and perhaps money) to keep the project up to date is not easy. The payoff is in the long term. A good project manager balances the short-term needs and the long-term goals.

Tuesday, July 23, 2013

The killer app for Microsoft Surface is collaboration

People brought PCs into the office because PCs let people become more effective. The early days were difficult, as we struggled with them. We didn't know how to use PCs well, and software was difficult to use.

Eventually, we found the right mix of hardware and software. Windows XP was powerful enough to be useful for corporations and individuals, and it was successful. (And still is.)

Now, people are struggling with tablets. We don't know how to use them well -- especially in business. But our transition from PC to tablet will be more difficult than the transition from typewriter to PC.

Apple and Google built a new experience, one oriented for consumers, into the iPad and Android tablet. They left the desktop experience behind and started fresh.

Microsoft, in targeting the commercial market, delivered word processing and spreadsheets. But the tablet versions of Word and Excel are poor cousins to their desktop versions. Microsoft has an uphill battle to convince people to switch -- even for short periods -- from the desktop to the tablet for word processing and spreadsheets.

In short, Apple and Google have green fields, and Microsoft is competing with its own applications. For the tablet, Microsoft has to go beyond the desktop experience. Word processing and spreadsheets are not enough; it has to deliver something more. It needs a "killer app", a compelling use for tablets.

I have a few ideas for compelling office applications:

  • calendars and scheduling 
  • conference calls and video calls
  • presentations not just on projectors but device-to-device
  • multi-author documents and spreadsheets

The shift is one from individual work to collaborative work. Develop apps to help not individuals but teams become more effective.

If Microsoft can let people use tablets to work with other people, they will have something.

Wednesday, July 17, 2013

The Surface RT needs fanboys

The response to Microsoft's Surface tablets has been less than enthusiastic. Various people have speculated on reasons. Some blame the technology, others blame the price. I look at the Surface from a market viewpoint.

I start by asking the question: Who would want a Surface?

Before I get very far, I ask another question: What are the groups who would want (or not want) a Surface tablet?

I divide the market into three groups: the fanboys, the haters, and the pragmatists.

The Microsoft fans are the people who dig into anything that Microsoft produces. They would buy a Surface (and probably already have).

Fanboy groups are not limited to Microsoft. There are fanboys for Apple. There are fanboys for Linux (and Android, and Blackberry...).

Just as there are fans for each of the major vendors, there are also the haters. There is the "Anyone But Microsoft" crowd. They will not be buying the Surface. If anything, they will go out of their way to buy another product. (There are also "Anyone But Apple" and "Anything But Linux" crowds.)

In between these groups are the pragmatists. They buy technology not because they like it but because it works, it is popular, and it is low-risk. For desktops and servers, they have purchased Microsoft technologies over other technologies -- by large margins.

The pragmatists are the majority. The fanboys and the haters are fringe groups. Vocal, perhaps, but small populations within the larger set.

It was not always this way.

In the pre-PC days, people were fanboys for hardware: the Radio Shack TRS-80, the Apple II, the Commodore 64... even the Timex Sinclair had fans. Microsoft was hardware-neutral: Microsoft BASIC ran on just about everything. Microsoft was part of the "rebel alliance" against big, expensive mainframe computers.

This loyalty continued in the PC-DOS era. With the PC, the empire of IBM was clearly present in the market. Microsoft was still viewed as "on our side".

Things changed with Windows and Microsoft's expansion into the software market. After Microsoft split Windows from OS/2 and started developing primary applications for Windows, it was Microsoft that became the empire. Microsoft's grinding competition destroyed Digital Research, Borland, Wordperfect, Netscape, and countless other companies -- and we saw Microsoft as the new evil. Microsoft was no longer one of "us".

Fanboys care if a vendor is one of "us"; pragmatists don't. Microsoft worked very hard to please the pragmatists, focussing on enterprise software and corporate customers. The result was that the pragmatist market share increased at the expense of the fanboys. (The "Anyone But Microsoft" crowd picked up some share, too.)

Over the years the pragmatists have served Microsoft well. Microsoft dominated the desktop market and had a large share of the server market. While Microsoft danced with the pragmatists, the fanboys migrated to other markets: Blackberry, Apple, Linux. Talk with Microsoft users and they generally fall into three categories: people who pick Microsoft products for corporate use, people who use Microsoft products because the job forces them to, or people who use Microsoft products at home because that is what came with the computer. Very few people go out of their way to purchase Microsoft products. (No one is erasing Linux and installing Windows.)

Microsoft's market base is pragmatists.

Pragmatists are a problem for Microsoft: they are only weakly loyal. Pragmatists are, well, pragmatic. They don't buy a vendor's technology because they like the vendor. They buy technology to achieve specific goals (perhaps running a company). They tend to follow the herd and buy what other folks buy. The herd is not buying Surface tablets, especially Surface RT tablets.

Microsoft destroyed the fanboy market base. Or perhaps I should say "their fanboy market base", as Apple has retained (and grown) theirs.

Without a sufficiently large set of people willing to take chances with new technologies, a vendor is condemned to their existing product designs (or mild changes).

For Microsoft to sell tablets, they need fanboys.