Tuesday, April 14, 2015

Sans keyboard

Will the next big trend be devices without keyboards?

Not simply devices with virtual keyboards, such as those on phones and tablets, but devices without keyboards at all? Are such devices possible?

Well, of course a keyboard-less device is possible. Not only is it possible; we have already had several popular devices that were used without keyboards. They include:

The pocket-watch: Arguably the first wearable computing device.

The wrist-watch: The early, pre-digital watches kept track of time. Some had indicators for the day of the month; a few, for the day of the week. Some included stop-watches.

Stop-watches: Since we mentioned them.

Pocket transistor radios: Controlled with one dial for volume and another for tuning, they kept us connected to news and music.

The Apple iPod: Controlled with a scroll wheel and a few buttons -- no keyboard.

Film cameras: From the Instamatic cameras that used 126 or 110 film, to Polaroid cameras that yielded instant photos, to SLR cameras (with dials and knobs for focus, aperture, and shutter speed).

Digital cameras: Knobs and buttons, but no keyboard.

Some e-book readers (or e-readers): My Kobo e-reader lets me read books, and it has no keyboard. Amazon.com's later Kindle e-readers dropped their keyboards, too.

So yes, we can have devices without keyboards.

Keyboard-less devices tend to be used to consume content. All of the devices listed above, except for the cameras, are for the consumption of content.

But can we replace our current tablets with a keyboard-less version? Is it possible to design a new type of tablet, one that does not connect to a Bluetooth keyboard or provide a virtual keyboard? I think the answer is a cautious 'yes', although there are several challenges.

We need keyboards on our current tablets to provide information for identity and configuration: a user name, or authentication with a server, as in the initial set-up of an Android tablet or iPad. One also needs a keyboard to authenticate apps with their servers (Twitter, Facebook, Yahoo mail, etc.).

But authentication could be handled through other mechanisms. Assuming that we "solve" the authentication "problem", where else do we need keyboards?

A number of places, as it turns out. Tagging friends on Facebook. Generating content for Facebook and Twitter status updates. Recommending people on LinkedIn. Specifying transaction amounts in banking apps. (Dates for transactions, however, can be handled with calendars, which are merely custom-shaped keyboards.)

Not to mention the challenge of changing people's expectations. Moving from keyboard to keyboard-less is no small change, and many will (probably) resist.

So I think keyboards will be with us for a while.

But not necessarily forever.

Sunday, April 12, 2015

Open source takes advantage of abandoned hardware

Of the popular desktop operating systems (Windows, Mac OS, and Linux), Windows has a commanding lead. Mac OS is a distant second, and Linux -- on the desktop -- is a far distant third. Yet Linux has one advantage over Windows and Mac OS.

This advantage was made real to me when I dusted off an old 2006-era MacBook. It is in good condition and, thanks to Apple's design, still serviceable. Yet it ran Mac OS X 10.4 "Tiger", an operating system that Apple abandoned several years ago. Not only has Apple abandoned the operating system, they have abandoned the hardware. (The good folks at Apple would much prefer that one purchase a new device with a new operating system. That makes sense, as Apple is in the business of selling hardware.)

Microsoft is not in the business of selling hardware (keyboards and Surface tablets are a very small part of their business), yet they also abandon operating systems and hardware. An old Dell desktop PC, sitting in the corner, runs Windows XP and cannot be upgraded to a later system. Microsoft has determined that its combination of processor, memory, and disk is not worthy of a later version.

So I have an Apple MacBook that I cannot upgrade to a later version of Mac OS X and a desktop PC that I cannot upgrade to a later version of Windows. While the MacBook could run some later versions of Mac OS X, those versions are no longer available, and the current version ("Yosemite") won't run on it. The desktop PC is "maxed out" at Windows XP.

Here is where open source has a foothold. Apple will not supply an operating system for the MacBook -- not even the original version anymore. Microsoft will not supply an operating system for the old Dell desktop PC -- not even the original version. Yet Linux can run on both machines.

I have, in fact, replaced Mac OS X on the MacBook with Ubuntu Linux. I'm using the new combination to type this post, and it works.

Many people may casually discard old computers. Other folks, though, may revive those computers with open source software. Over time, the market share of Linux will grow, if only because Apple and Microsoft have walked away from the old hardware.

Thursday, April 9, 2015

UI stability

The years from 1990 to 2010 were the years of Windows dominance, with a stable platform for computing. Yet this platform was not immune to changes.

Microsoft made several changes to the interface of Windows. The change from Windows 3 to Windows 95 (or Windows NT) was significant. Microsoft introduced better fonts, "3-D" controls, and the "start" menu. Microsoft made more changes in Windows XP, especially in the "home" edition. Windows Vista saw more changes (to compete with Apple) and Windows 8 expanded the "start" menu to full screen with active tiles.

These changes in Windows required users to re-learn the user interface.

Microsoft is not alone. Apple, too, has made changes to the user interfaces for Mac OS and iOS. Ask a long-time user of Mac OS about scrolling, and you may get an earful about a change that reversed the direction for scroll operations.

Contrast these changes to the command line. For Windows, the command line is provided by the "Command Window" or the CMD shell. It is based heavily on the MS-DOS command line, which in turn was based on the CP/M command line, which was based on DEC operating systems. While Microsoft has added a few features over time, the basic command line remains constant. Anyone familiar with MS-DOS 2.0 would be comfortable in the latest Windows command prompt. (Not PowerShell; that is a different beast.)

For Unix or Linux, the command line depends on the shell program in use. There are several shell programs: the C Shell (csh), the Bourne Shell (sh), and the Bourne Again Shell (bash), to name a few. They all do the same job, each with its own habits, yet each has been consistent over the years.

System administrators and developers often favor the command line. Perhaps because it is powerful. Perhaps because it is terse. Perhaps because it requires little network bandwidth and allows for effective use over low-speed connections. But also perhaps because it has been consistent and stable over the years, and requires little or no ongoing learning.

This may be a lesson for application developers (and mobile designers).

Tuesday, April 7, 2015

Listening to your customers

Conventional wisdom is that a business should listen to its customers and deliver products or services that meet the needs of those customers. It sounds good.

Many companies follow this wisdom. Microsoft has done so. One of the loudest messages customers send Microsoft is that they want their existing systems to keep working. They want their e-mail and document systems (Outlook and Word) to keep working. They want their databases to keep working. They want their custom-built software to keep working. And Microsoft complies, going to great lengths to keep new versions of operating systems and applications compatible with previous versions.

Apple, in contrast, does not keep their systems compatible. Apple constantly revises the design of their hardware and software, breaking backwards compatibility. The latest MacBooks use a single connector (a USB type C connector) and drop all of the older connectors (network, power, USB type A, Thunderbolt, etc.). Apple has revised software to drop old features. (Mac OS X "Lion" removed support for 32-bit processors.) Is Apple not listening to its customers?

I believe Apple *is* listening. Apple gets a different message from customers: They want their systems to "just work" and they want their systems to be "cool". Those elements mean that the system design must change (especially to be "cool"). Apple offers new, cool systems; Microsoft offers stability.

Monday, April 6, 2015

Web services use the Unix way

Web services embody a very different mindset than the typical Windows application. Windows applications are large, all-encompassing systems that contain everything they need.

Web services, in contrast, are small fractions of a system. A complete system can be composed of web services, but a web service is not a complete system.

We've seen this pattern before. Not in Windows, nor in PC-DOS or MS-DOS, but in Unix (and now Linux).

"The Art of Unix Programming" by E.S. Raymond covers the topic well. The author describes the philosophy of small, connectable programs. Each program does one thing well, and systems are built from these programs. Anyone who has worked with the Unix (or Linux) command line knows how to connect multiple programs into a larger system.

We don't need to discover the properties of good web services. We don't need to re-invent the techniques for designing good web services. The properties and techniques already exist, in the command-line philosophy of Unix.
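In that spirit, a web service can be as small and single-purpose as a shell filter. A toy sketch using only Python's standard library (the task, endpoint, and port are invented for illustration):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class UppercaseHandler(BaseHTTPRequestHandler):
        """A service that does one thing: upper-case the text sent to it."""

        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body.upper())

    # A complete system would compose many such services;
    # this one is a small fraction, not the whole.
    HTTPServer(("localhost", 8000), UppercaseHandler).serve_forever()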

Thursday, April 2, 2015

Mobile operating systems break the illusion of control

The mobile operating systems iOS and Android are qualitatively different from previous operating systems like Windows, Mac OS, and Linux. They break the illusion that an application has control; this illusion has been with us since the first operating systems. To understand the illusion, and how mobile operating systems are different, we must understand the origins of operating systems.

The dawn of computers saw hardware built from electro-mechanical relays, but no software. The first computers were programmed with wires connecting components to other components. A computer was a custom-purpose device, designed and built to perform one calculation. Shortly after, the "programming wires" were isolated to removable boards, which allowed a "program" to be removed from the computer and stored for later use.

The first programs (in the sense we know them) were sequences of numerical values that could be loaded into a computer's memory. They were a softer variant of the wired plug-boards of the earlier computers. Building the sequence of numerical values was tedious; one had to understand not only the problem to be solved but also the processor's instruction set. Such sequences are now called "machine language".

Programmers, being what they are, developed programs to ease the chore of creating the sequences of machine language values. These programs were the first assemblers; they converted symbols into executable sequences. A programmer could work with the much easier-to-understand symbolic code and convert the symbols to a program when his changes were done.

Up to this point, the operation of the computer was simple: create a program, insert it into the computer, and let it run. The program instructed the computer, and the computer performed the calculations.

There were no operating systems. It was the computer and the program, alone together in the small universe of computing hardware. The program was in charge and the computer obeyed its instructions. (Blindly and literally, which meant that the programmer had to be precise and complete in his description of the operations. That aspect of programming remains with us today.)

The first operating systems were little more than loaders for programs. Programmers found that the task of loading an executable program was a chore, and programmers, being what they are, created programs to ease that task. A loader could start with a collection of programs (usually stored in a deck of punch cards), load the first one, let it run, and then load and run the next program.

Of course, the loader was still in memory, and the loaded programs could not be allowed to use that memory or overwrite the loader. If they did, the loader would be unable to continue. I imagine that the very earliest arrangement worked by an agreement: the loader would use a block of addresses, and the loaded programs would use other memory, leaving the loader's block alone. The running program was still "in charge" of the computer, but it had to honor the "loader agreement".

This notion of "being in charge" is important. It is a notion that has been maintained by operating systems -- up to mobile operating systems. More on that later.

Operating systems grew out of the concept of the loader. They became more powerful, allowing more sharing of the expensive computer. They assumed the following functions:

  • Allocation and protection of memory (you can use only what you are assigned)
  • Control of physical devices (you must request operations through the operating system)
  • Allocation of CPU (time slices)

These are the typical functions we associate with operating systems.

Over the years, we have extended operating systems and continued to use them. Yet in all of that time, from IBM's System/360 to DEC's VMS to Microsoft's Windows, the understanding has been that our program (our application), once loaded, is "in control" until it exits. This is an illusion, as our application can do very little on its own. It must request all resources from the operating system, including memory. It must request all actions through the operating system, including operations on devices (displaying a window on a screen, sending text to a printer, saving a file to disk).

This illusion persists, I believe, due to the education of programmers (and system operators, and computer designers) -- not merely through formal training but also through informal channels. Our phrases indicate this: "Microsoft Word runs and prints a document" or "Visual Studio builds the executable" or "IIS serves the HTML document". Our conversations reinforce the belief that the running program is in control.

And here is where the mobile operating systems come in. Android and iOS have very clear roles for application programs, and those roles are subservient to the operating system. A program does not run in the usual sense, making requests of the operating system. Instead, the app (and I will use the term "app" to indicate that the program is different from an application) is activated by the operating system when needed. That activation sees the operating system instruct the app to perform a task and then return control to the operating system.

Mobile operating systems turn the paradigm of the "program in control" inside-out. Instead of the program making requests of the operating system, the operating system makes requests of the program. The operating system is in control, not the program.
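A toy model of that inversion, sketched in Python (the class and method names are invented for illustration; they are not any real mobile API):

    class App:
        # The app only supplies handlers; it never decides when they run.
        def on_create(self):
            print("app: loading saved state")

        def on_resume(self):
            print("app: drawing the screen")

        def on_pause(self):
            print("app: saving state, releasing resources")

    class MobileOS:
        # The operating system is in control: it activates the app when
        # needed and takes control back when each task is done.
        def __init__(self, app):
            self.app = app

        def run(self):
            self.app.on_create()
            self.app.on_resume()  # the user opens the app
            self.app.on_pause()   # the user switches away; the OS may reclaim memory

    MobileOS(App()).run()

The shape is the inverse of a traditional program: the app has no main loop of its own.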

This view is very different from our traditional view, yet it is an accurate one. Apps are not in control. Applications are not in control -- and have not been for many years.

Tuesday, March 31, 2015

Our tools shape our languages, and our programs

Our applications are shaped by many forces: budget, available talent, time, languages, and tools.

When designing applications, the influence of the language is sometimes overlooked. Yet it should be obvious to anyone who has used multiple languages: different languages have different capabilities. C and its descendants have the notion of pointers, something absent in COBOL, Fortran, and BASIC. The notion of pointers lets one build dynamic, complex data structures; such structures are not possible in pointerless languages.
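A small sketch of such a structure, in Python, whose object references play the role that pointers play in C:

    class Node:
        # Each node holds a value and a reference (a "pointer") to the next node.
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    # Grow a linked list at run time, one node at a time -- the kind of
    # structure a pointerless language cannot express directly.
    head = None
    for value in [3, 1, 4, 1, 5]:
        head = Node(value, head)  # the new node points at the former head

    node = head
    while node is not None:
        print(node.value)
        node = node.next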

Our tools have an effect on our languages, which can have an effect on our applications.

In the Microsoft world, C# is a simple language, yet the .NET framework of classes is complex and wordy. A simple application in C# requires a significant amount of typing, and a moderately complex application requires a large amount of typing.
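For a rough sense of scale, here is a complete Python program for a small but real task (the input.txt file name is invented for the example). A comparable C# console program would add the ceremony of using directives, a namespace, a class, and a Main method before any work gets done:

    from collections import Counter

    # Count word frequencies in a file and print the ten most common.
    with open("input.txt") as f:
        counts = Counter(f.read().split())

    for word, count in counts.most_common(10):
        print(word, count)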

Languages such as Perl, Python, and Ruby require less typing. How is it, then, that the .NET framework is so complex? The answer, I believe, is in Visual Studio.

Microsoft's development environment is the result of competition and years (decades?) of development. Over that time, Microsoft has added features: debugging, test case management, design documents, and editing features. One significant editing feature is the auto-completion of typing with context-sensitive information. When I am typing a C# program and I start typing a name of a variable, Visual Studio gives me a list of existing variable names. (It also knows that the declaration of a variable is different from the use of a variable, and does not prompt me with existing names.) Visual Studio also lists methods for objects and provides templates for parameters.

This assistance from Visual Studio reduces the burden of typing. Thus, applications that use the .NET framework can appear wordy, but the effort to create them is less than it appears. (Some have argued that the assistance from Visual Studio helps reduce errors, as it suggests only valid possibilities.)

Thus, a development tool can affect our languages (and class frameworks), which in turn can affect our applications. When choosing our tools, we should be aware of their properties and how they can affect our projects.