
Monday, January 9, 2023

After the GUI

Some time ago (perhaps five or six years ago) I watched a demo of a new version of Microsoft's Visual Studio. The new version (at the time) had a new feature: the command search box. It allowed the user to search for a command in Visual Studio. Visual Studio, like any Windows program, used menus and icons to activate commands. The problem was that Visual Studio was complex and had a lot of commands -- so many commands that the menu structure to hold them all was enormous, and searching for a command was difficult. Many times, users failed to find the command.

The command search box solved that problem. Instead of searching through menus, one could type the name of the command and Visual Studio would execute it (or maybe tell you the path to the command).

I also remember, at the time, thinking that this was not a good idea. I had the distinct impression that the command search box showed that the GUI paradigm had failed, that it worked up to a point of complexity but not beyond that point.

In one sense, I was right. The GUI paradigm does fail after a certain level of complexity.

But in another sense, I was wrong. Microsoft was right to introduce the command search box.

Microsoft has added the command search box to the online versions of Word and Excel. These command boxes work well, once you get acquainted with them. And you must get acquainted with them; some commands are available only through the command search box, and not through the traditional GUI.

Looking back, I can see the benefit of changing the user interface, and changing it in such a way as to make a new type of user interface.

The first user interface for personal computers was the command line. In the days of PC-DOS and CP/M-86, users had to type commands to invoke actions. There were some systems (such as the UCSD p-System) that used full-screen text displays as their interface, but these were rare. Most systems required the user to learn the commands and type them.

Apple's Macintosh and Microsoft's Windows used a GUI (Graphical User Interface) which provided the possible commands on the screen. Users could click on an icon to open a file, another icon to save the file, and a third icon to print the file. The icons were visible, and more importantly, they were the same across all programs. Rarely used commands were listed in menus, and one could quickly look through the menu to find a command.

Graphical User Interfaces with icons and buttons and menus worked, until they didn't. They were adequate for simple programs such as the early versions of Word and Excel, but they were difficult to use on complex programs that offered dozens (hundreds?) of commands.

The command search box addresses that problem. A program that uses the command search box, instead of displaying all possible commands in icons and buttons and menus, shows the commonly-used commands in the GUI and hides the less-used commands in the search box.

The search box is also rather intelligent. Enter a word or a phrase and the application shows a list of commands that are either what you want or close to it. It is, in a sense, a small search engine tuned to the commands for the application. As such, you don't have to remember the exact command.

This is a departure from the original concept of "show all possible actions". Some may consider it a refinement of the GUI; I think of it as a separate form of user interface.

I think that it is a separate form of interface because this concept could be applied to the traditional command line. (Command line interfaces are still around. Ask any user of Linux, or any admin of a server.) Today's command line interfaces are pretty much the same as the original ones from the 1970s, in that you must type the command from memory.

Some command shell programs now prompt you with suggestions to auto-complete a command. That's a nice enhancement. I think another enhancement could be something similar to the command search box of Microsoft Excel: a command that takes a phrase and reports matches. Such an option does not require graphics, so I think that this search-based interface is not tied to a GUI.
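Such a phrase-to-command search need not be elaborate. Here is a minimal sketch in Python: it scores each command by how many words its description shares with the user's phrase and reports the best matches. The table of commands and one-line descriptions is a hypothetical, hand-made stand-in; a real shell would index its actual command set and man-page summaries.

```python
# Sketch of a phrase-based command search for a shell.
# COMMANDS is a hypothetical table; a real tool would build it
# from the system's installed commands and their man-page summaries.
COMMANDS = {
    "grep": "search for a pattern in files",
    "find": "locate files by name or attribute",
    "tar": "archive and extract files",
    "chmod": "change file permissions",
    "du": "report disk usage of files and directories",
}

def search_commands(phrase):
    """Return command names whose descriptions share words with the phrase,
    best match first. No exact command name is required."""
    words = set(phrase.lower().split())
    scored = []
    for name, desc in COMMANDS.items():
        overlap = len(words & set(desc.split()))  # crude relevance score
        if overlap:
            scored.append((overlap, name))
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored]

# The user describes what they want, not the command itself:
print(search_commands("search files for a pattern"))  # 'grep' ranks first
```

This is the essence of the idea: the user supplies a description, and the shell narrows it to candidates -- no graphics required, so the search-based interface is indeed independent of the GUI.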

Command search boxes are the next step in the user interface. They follow the first two designs: the command line (where you must memorize commands and type them exactly) and the GUI (where you can see all of the commands in icons and menus). Command search boxes don't require every command to be visible (as in a GUI) and they don't require the user to recall each command exactly (as in a command line). They really are something new.

Now all we need is a name that is better than "command search box".

Thursday, June 23, 2022

Web pages are big

Web pages load their own user interface code.

There are two immediate consequences of this.

First, each web page must load its own libraries to implement that interface. That slows the loading of web pages. The libraries include more than code; they include stylesheets, fonts, graphics, images and lots of HTML to tie it all together. The code often includes bits to identify the type of device (desktop web browser, mobile web browser, etc.) and functions to load assets when needed ("lazy loading").

I imagine that lots of this code is duplicated from page to page on a web site, and lots of the functions are similar to corresponding functions on other web sites.

Second, each web page has its own user interface, its own "look and feel", to use a term from the 1980s.

Each web page (or perhaps more accurately, each web site) has its own appearance, and its own conventions.

Even the simple convention of "login and logout links are in the top right corner" is not all that common. Of the dozens of web sites that I frequent, many have the "login" and "logout" links in the top right corner, but many others do not. Some have the links close to the top (but not topmost) and close to the right side (but not rightmost). Some web sites bury the "login" and "logout" links in menus. Some web sites put one of the "login" and "logout" links in a menu, but leave the other on the page. Some web sites put the "login" link in the center of their welcome page. And there are other variations.

Variation in the user interface is not evil, but it is inconsistent and it increases the mental effort to visit different web sites. But why would the owners of each web site care? As long as customers come to their web site (and pay them), then the web site is working, according to the company. The fact that it is not consistent with other web sites is not a problem (for them).

Web sites have to load all of their libraries, which increases overall load time for the site. The companies running the web sites probably care little, as the cost is imposed on their customers. The attitude that many companies take is probably (I say "probably" because I have not spoken to companies about this) that the user (the customer), if dissatisfied with load time, can purchase a faster computer or a faster internet service. The company feels no obligation to improve the experience for the customer.

* * * * *

The situation of individual, differing user interfaces is not unique. In the 1980s, prior to Microsoft Windows, PC software had different user interfaces. The word processors of the time (WordPerfect, WordStar, and even Microsoft Word which had a version for MS-DOS) each had their own "look and feel". The spreadsheets of the time (Lotus 1-2-3, Quattro Pro, and Microsoft Multiplan) each had their own user interfaces, different from each other and different from the user interfaces for word processors. Database packages (dBase, R:Base, Clipper, Condor) each had their own... you get the idea.

Windows offered consistency in the user interface. (It also offered graphics, which is what I think really sold Windows over IBM's OS/2, but that's another topic.) With Windows, programs started the same way, appeared the same way, and provided a set of common functions for opening files, printing, copying and pasting data, and more.

Windows arrived at an opportune time. Computers were fairly common, people (and companies) were using them for serious work, and applications had their various user interfaces. Windows offered consistency across applications, and reduced the effort to learn new applications. A spreadsheet was different from a word processor, but at least someone who was familiar with a word processor under Windows could perform basic operations in a spreadsheet under Windows. People, when learning new applications, could focus on those aspects that were unique to the new applications, not the common operations.

The result was that people learned to use computers more rapidly than in the earlier age of MS-DOS. Windows was sold on the reduction of effort (and therefore costs) in using computers.

* * * * *

Will we see a similar transition for the web? Will someone come along and sell a unified interface for web apps, advertising a lower cost of use? 

In a sense, we have. The apps on smart phones have a more consistent user interface than web sites. This is due to Apple's and Google's efforts, providing libraries for common UI functions and guidelines for application appearance.

But I don't see a unifying transition for web sites in traditional (desktop) browsers. Each company wants its own look and feel, its own brand presence. It doesn't care that web sites take a long time to load, and it probably doesn't care that web sites require a lot of expensive maintenance. Microsoft was able to sell Windows from a position of strength, in a market that had few options. With the web, any company can set up a web site and offer it to the world. There is no convenient choke point, and there is no company strong enough to offer a user interface that could meet the needs of the myriad web sites in existence.

Which means that we are stuck with large web pages, long download times, and different interfaces for different web sites.

Thursday, June 24, 2021

A Tale of Three Spreadsheets

I recently compared online spreadsheets. The three were Google Sheets, Microsoft Online Excel, and Apple iCloud Numbers. For this column, I will use the short names "Sheets", "Excel", and "Numbers" to refer to these online services. When I want to refer to a desktop version of an application, I will use the adjective "desktop".

Of the three, Sheets and Excel were easy for me to use, and Numbers was a disappointment.

All three online spreadsheets provide the same basic capabilities: a workbook with one or multiple sheets, each sheet having a grid of cells, each cell able to hold a number, a text value, or a formula. All three allow for formatting, letting the user select typeface, font size, italic or bold, and colors for text or background.

All three work in any of the major browsers. One can use Chrome, Edge, Firefox, or Safari, and I suspect many other browsers. I was half-expecting each to work in the company's browser (Numbers in Safari, Sheets in Chrome, and Excel in Edge) and not in other browsers. But I saw no difference in performance, and no warning messages along the lines of "Works best in ___ browser".

Excel is very similar to the desktop version. The major change is the absence of the "ribbon", Microsoft's menu system for its desktop office applications. Instead of a ribbon, Excel provides a search bar that leads you to the desired command.

A big item missing from Excel is macros. Desktop Excel allows for macros with the VBA language. VBA is built on top of COM, and in the web space, COM does not exist. Therefore, VBA does not exist for Excel. I assume it is absent from other online applications from Microsoft, and I further assume that Microsoft is working on a replacement macro language for online spreadsheets.

Numbers is, I presume, an online version of its desktop version of Numbers. I haven't used desktop Numbers, so I cannot compare the iCloud version to the desktop version. I can say that the iCloud version is a basic spreadsheet with the features that I expected.

Sheets is an online version of... no other product. Google never supplied a desktop spreadsheet. With no legacy desktop application, Google was free to design their online spreadsheet as they saw fit. Much of the design is quite similar to desktop Excel. So similar, I suspect, that in the 1980s Microsoft would have sued Google for copying the "look and feel" of desktop Excel.

All three online offerings are limited compared to desktop versions. All did some things well, some things poorly, and some things not at all. Of the three, Numbers was the most problematic for me.

Numbers relies on the mouse for many operations, and that design choice made for a frustrating experience. I understand Apple's intent -- to make things easy for those who are familiar with the desktop Numbers application -- but it did not help me. (I suspect that moving from desktop Excel to desktop Numbers is also challenging.)

Numbers is not the only spreadsheet with issues. Excel would not let me split text fields into multiple cells (a task easily performed in desktop Excel, and in Sheets). Nor would it let me create a chart with two separate columns from the spreadsheet. Excel and Numbers can handle only a single block of data; Sheets allowed me to create a chart from separate columns.

There are some differences in behavior when loading spreadsheets. Excel remembers the active cell between sessions, so when I reload a spreadsheet, the active cell is the one from the previous session. Sheets, when it loads a file, always sets the active cell to A1 of the first sheet. I suppose that either behavior can be desired (or annoying).

The differences in spreadsheets got me to thinking. Why did Microsoft choose to implement some features and not others? What was Google's motivation for an online spreadsheet? And why did Apple make an online spreadsheet?

Google's motivations are clear: They want people to use the Chrome browser (and Chromebooks) as much as possible (to feed information to their advertising business). People rely on spreadsheets (and word processors) for a lot of their work, and a Chrome browser without a spreadsheet would be of little use to people. Therefore, Google must provide a spreadsheet that can be used within Chrome. (And a word processor, and an e-mail client, and a bunch of other applications.)

Microsoft's motivations are less clear, but I'm willing to guess: Provide examples of applications that run on Microsoft's cloud infrastructure (Azure) and provide competition to Google's online offerings.

Apple's motivations are the murkiest of the three. Apple relies little on Safari; it generates revenue from sales of hardware and services and not from advertising. It doesn't sell cloud infrastructure or services. It offers Numbers and Pages as apps for the iPhone and iPad, and as applications for Macintosh computers. I see no advantage to Apple to offer the web versions, other than to say that Apple is in the cool kids club and can do the same things Google and Microsoft can do.

What can these motivations tell us about the future of these cloud-based spreadsheets?

Google needs its online spreadsheet, so it will support it and expand it. But Google, I think, will expand it judiciously, adding features gradually. I expect that Sheets will never match Microsoft's desktop Excel, which is the result of years of competition and expansion. When competing with other desktop spreadsheets, features were a benefit, and Microsoft added many features. Now, desktop Excel is a collection of features, some of which do not work well with others. Google doesn't have the pressure to add features, and I think Google wants a spreadsheet that offers a balance of features and ease-of-use.

Microsoft doesn't need an online spreadsheet (not in the sense that Google needs it) but it wants an online spreadsheet. It probably views Excel as an entry point to desktop Excel -- and desktop Excel's subscription fees. Therefore, Microsoft wants Excel to be capable enough for people to use, but without some features that are in desktop Excel.

Apple doesn't need an online spreadsheet, and I'm not sure that it wants an online spreadsheet (although someone inside of Apple does want it). Apple's revenue from app license fees is minimal (and often waived), so I don't see revenue as a driver for Apple. I also don't see customers starting with the online version and then moving to the desktop version; I suspect the majority of Numbers users already have Apple computers or phones and therefore already have the desktop version. And Apple doesn't have to use it as a showpiece for their cloud infrastructure. (Apple doesn't sell cloud infrastructure.) Which leaves... no reason for Numbers and Pages in the cloud.

Unless Apple is using Numbers and Pages as a way to develop its internal expertise for cloud-based apps. That could be a reason for Apple to provide an online spreadsheet. Until, that is, Apple gains enough expertise that it no longer needs these apps to build it.

If my analysis is right (and keep in mind that I am often not right) then Microsoft's and Google's online spreadsheets will be with us for some time, and Apple's online spreadsheet is in a rather precarious position. Apple may decide, either tomorrow or two years from now, that it doesn't need these online apps.

Tuesday, April 14, 2015

Sans keyboard

Will the next big trend be devices without keyboards?

Not simply devices with virtual keyboards, such as those on phones and tablets, but devices without keyboards at all? Are such devices possible?

Well, of course a keyboard-less device is possible. Not only is it possible, we have had several popular devices that were used without keyboards. They include:

  • The pocket-watch: arguably the first wearable computing device.
  • The wrist-watch: the early, pre-digital watches kept track of time. Some had indicators for the day of the month; a few for the day of the week. Some included stop-watches.
  • And, since we mentioned them, stop-watches.
  • Pocket transistor radios: controlled with one dial for volume and another for tuning, they kept us connected to news and music.
  • The Apple iPod.
  • Film cameras: from the Instamatic cameras that used 126 or 110 film to Polaroid cameras that yielded instant photos to SLR cameras (with dials and knobs for focus, aperture, and shutter speed).
  • Digital cameras: knobs and buttons, but no keyboard.
  • Some e-book readers (or e-readers): my Kobo e-reader lets me read books and has no keyboard. Early Amazon.com Kindle e-readers had no keyboard.

So yes, we can have devices without keyboards.

Keyboard-less devices tend to be used to consume content. All of the devices listed above, except for the cameras, are for the consumption of content.

But can we replace our current tablets with a keyboard-less version? Is it possible to design a new type of tablet, one that does not connect to a bluetooth keyboard or provide a virtual keyboard? I think the answer is a cautious 'yes', though there are several challenges.

We need keyboards on our current tablets to provide information for identity and configuration. A user name, or authentication with a server, as with the initial set-up of an Android tablet or iPad. One needs a keyboard to authenticate apps with their servers (Twitter, Facebook, Yahoo mail, etc.).

But authentication could be handled through other mechanisms. Assuming that we "solve" the authentication "problem", where else do we need keyboards?

A number of places, as it turns out. Tagging friends in Facebook. Generating content for Facebook and Twitter status updates. Recommending people on LinkedIn. Specifying transaction amounts in banking apps. (Dates for transaction, however, can be handled with calendars, which are merely custom-shaped keyboards.)

Not to mention the challenge of changing people's expectations. Moving from keyboard to keyboard-less is no small change, and many will (probably) resist.

So I think keyboards will be with us for a while.

But not necessarily forever.

Thursday, April 9, 2015

UI stability

The years from 1990 to 2010 were the years of Windows dominance, with a stable platform for computing. Yet this platform was not immune to changes.

Microsoft made several changes to the interface of Windows. The change from Windows 3 to Windows 95 (or Windows NT) was significant. Microsoft introduced better fonts, "3-D" controls, and the "start" menu. Microsoft made more changes in Windows XP, especially in the "home" edition. Windows Vista saw more changes (to compete with Apple) and Windows 8 expanded the "start" menu to full screen with active tiles.

These changes in Windows required users to re-learn the user interface.

Microsoft is not alone. Apple, too, has made changes to the user interfaces for Mac OS and iOS. Ask a long-time user of Mac OS about scrolling, and you may get an earful about a change that reversed the direction for scroll operations.

Contrast these changes to the command line. For Windows, the command line is provided by the "Command Window" or the CMD shell. It is based heavily on the MS-DOS command line, which in turn was based on the CP/M command line, which was based on DEC operating systems. While Microsoft has added a few features over time, the basic command line remains constant. Anyone familiar with MS-DOS 2.0 would be comfortable in the latest Windows command prompt. (Not PowerShell; that is a different beast.)

For Unix or Linux, the command line depends on the shell program in use. There are several shell programs: the C Shell (csh), the Bourne Shell (sh), and the Bourne Again Shell (bash), to name a few. They all do the same thing, each with its own habits, yet each has been consistent over the years.

System administrators and developers often favor the command line. Perhaps because it is powerful. Perhaps because it is terse. Perhaps because it requires little network bandwidth and allows for effective use over low-speed connections. But also perhaps because it has been consistent and stable over the years, and requires little or no ongoing learning.

This may be a lesson for application developers (and mobile designers).

Sunday, February 9, 2014

Mobile/cloud chips away at batch processing

Consider a particular type of PC application, one that I call "the spreadsheet app". It processes information. It accepts large quantities of data in (a spreadsheet, or multiple spreadsheets), processes the data, and then provides the results in large quantities of data (another spreadsheet with multiple pages).

In some ways, it is a design of mainframe batch processing: collect all of the input data up front, process it in one large calculation, and provide the results in one large batch.

The PC revolution was supposed to change all of that. The PC revolution was supposed to slay the batch processing beast and make data processing interactive. Yet here we are, thirty years later, still processing data in large batches.

I think that tablets (or more specifically, mobile/cloud) will succeed where the PC revolution failed.

Moving a spreadsheet app to mobile/cloud is not easy. A direct port makes little sense: tablets have small screens, and data entry into spreadsheets requires fine control for the selection of cells.

A "native" tablet app would take advantage of the strengths of mobile/cloud (interactive displays and fast processing on the cloud servers) and avoid the weaknesses (constrained data entry). Instead of data entry into a large grid of numbers, data must be entered in smaller units. This is possible with lots of spreadsheet apps; their data is often structured into collections (and sometimes collections of collections). Tablets can handle data entry in smaller chunks.

The shift from the spreadsheet grid to a collection of units is the same as the shift from batch processing to interactive processing. The app must accept small units of data, incorporating each smaller unit into the larger whole.

It's possible to build the entire large batch of data in this manner, and then process the batch at once, but it's also possible to process each unit of data (a small batch) as it is entered.

Even if each unit of input requires a complete re-calculation of the data, that may be okay. The calculation would be performed on a server (or a set of cloud-based servers) and computing is getting faster and faster. Pushing data to cloud servers for calculations makes sense.
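The "accept a small unit, fold it into the whole, recalculate" model can be sketched in a few lines. The Model class below is hypothetical, not any real spreadsheet API; in a mobile/cloud app the recalculate step would run on the server.

```python
# Sketch of interactive (unit-at-a-time) processing replacing batch
# processing. Model is a hypothetical stand-in for the app's data store.
class Model:
    def __init__(self):
        self.units = []  # each unit is a small batch of numbers

    def add_unit(self, unit):
        """Accept one small unit of data and recalculate immediately."""
        self.units.append(unit)
        return self.recalculate()

    def recalculate(self):
        # A full recalculation over everything entered so far; in a
        # mobile/cloud design this work lands on fast cloud servers.
        flat = [x for unit in self.units for x in unit]
        return {"count": len(flat), "total": sum(flat)}

m = Model()
m.add_unit([10, 20])           # first small unit, results update at once
result = m.add_unit([5])       # second unit, full recalculation again
# result == {"count": 3, "total": 35}
```

The point is that the user sees fresh results after every small unit of input, rather than waiting for one large batch at the end.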

Tablets still have a limited size, and displaying the results may be problematic. One cannot display a raft of numbers on a tablet. (Well, one can display a raft of numbers, by using a very small typeface size. But the results would be undesirable.)

Instead of a text display, apps for tablets will (most likely) use graphical representations for data, with the ability to focus on small sections and see the underlying numbers. We could use this technique today with PC applications, but spreadsheets are limited in their abilities to present information graphically. (I've used Microsoft Excel for years, and the charts and graphic capabilities have remained static for quite a while.)

So that's what I see. Mobile/cloud apps will accept small units of data, process them on fast servers, and present the results in graphical form. The shift from batch processing to interactive processing. The PC revolution will eventually occur and liberate us from the tyranny of batch processing. Ironically, it will occur not on PCs but on tablets and cloud servers.

Wednesday, November 6, 2013

More was more, but now less is more

IBM and Microsoft built their empires with the strategy "bigger and more features". IBM mainframes, over time, became larger (in terms of processor speed and memory capacity) and included more features. Microsoft software, over time, became larger (in terms of capacity) and included more features.

It was a successful strategy. IBM and Microsoft could win any "checklist battle" which listed the features of products. For many managers, the product with the largest list of features is the safest choice. (Microsoft and IBM's reputations also helped.)

One downside of large, complicated hardware and large, complicated software is that it leads to large, complicated procedures and data sets. Many businesses have developed their operating procedures first around IBM equipment and later around Microsoft software. When developing those procedures, it was natural to, over time, increase the complexity. New business cases, new exceptions, and special circumstances all add to complexity.

Businesses are trying to leverage mobile devices (tablets and phones) and finding that their long-established applications don't "port" easily to the new devices. They are focusing on the software, but the real issue is their processes. The complex procedures behind the software are making it hard to move business to mobile devices.

The user interfaces on mobile devices limit applications to much simpler operations. Perhaps our desire for simplicity comes from the size of the screen, or the change from mouse to touch, or from the fact that we hold the devices in our hands. Regardless of the reason, we want mobile devices to have simple apps.

Complicated applications of the desktop, with drop-down menus, multiple dialogs, and oodles of options simply do not "work" on a mobile device. We saw this with early hand-held devices such as the popular Palm Pilot and the not-so-popular Microsoft PocketPC. Palm's simple operation won over the more complex Windows CE.

Simplicity is a state of mind, one that is hard to obtain. Complicated software tempts one into complicated processes (so many fonts, so many spreadsheet formula operations, ...). Mobile devices demand simplicity. With mobile, "more" may be more, but it is not better. The successful businesses will simplify their procedures and their underlying business rules (perhaps the MBA crowd will prefer the words "streamline" or "optimize") to leverage mobile devices.


Friday, May 31, 2013

The rise of the simple UI

User interfaces are about to become simpler.

This change is driven by the rise of mobile devices. The UI for mobile apps must be simpler. A cell phone has a small screen and (when needed) a virtual keyboard. The user interacts through the touchscreen, not a keyboard and mouse. Tablets, while larger and often accompanied by a real (small-form) keyboard, also interact through the touchscreen.

For years, PC applications have accumulated features and complexity. Consider the Microsoft Word and Microsoft Excel applications. Each version has introduced new features. The 2007 versions introduced the "ribbon menu", an adjustment to the UI to accommodate the growing list of commands.

Mobile devices force us to simplify the user interface. Indirectly, they force us to simplify applications. In the desktop world, the application with the most features was (generally) considered the best. In the mobile world, that calculation changes. Instead of selecting an application on the raw number of features, we are selecting applications on simplicity and ease of use.

It is a trend that is ironic, as the early versions of Microsoft Windows were advertised as easy to use (a common adjective was "intuitive"). Yet while "intuitive" and "easy", Windows was never designed to be simple; configuration and administration were always complex. That complexity remained even with networks and Active Directory -- the complexity was centralized but not eliminated.

Apps on mobile don't have to be simple, but simple apps are the better sellers. Simple apps fit better on the small screens. Simple apps fit better into the mobile/cloud processing model. Even games demonstrate this trend (compare "Angry Birds" against PC games like "Doom" or even "Minesweeper").

The move to simple apps on mobile devices will flow back to web applications and PC applications. The trend of adding features will reverse. This will affect the development of applications and the use of technology in offices. Job requisitions will list user interface (UI) and user experience (UX) skills. Office workflows will become more granular. Large, enterprise systems (like ERP) will mutate into collections of apps and collections of services. This will allow mobile apps, web apps, and PC apps to access the corporate data and perform work.

Sellers of PC applications will have to simplify their current offerings. It is a change that will affect the user interface and the internal organization of their application. Such a change is non-trivial and requires some hard decisions. Some features may be dropped, others may be deferred to a future version. Every feature must be considered and placed in either the mobile client or the cloud back-end, and such decisions must account for many aspects of mobile/cloud design (network accessibility, storage, availability of data on multiple devices, among others).

Sunday, August 5, 2012

The evolution of the UI

Since the beginning of the personal computing era, we have seen different types of user interfaces. These interfaces were defined by technology. The mobile/cloud age brings us another type of user interface.

The user interfaces were:
  • Text-mode programs
  • Character-based graphic programs
  • True GUI programs
  • Web programs
Text-mode programs were the earliest of programs, run on the earliest of hardware. Sometimes run on printing terminals (Teletypes or DECwriters), they had to present output in linear form -- the hardware operated linearly, one character after another. When we weren't investigating problems with hardware, or struggling with software, we dreamed about better displays. (We had seen them in the movies, after all.)

Character-based graphic programs used the capabilities of "more advanced" hardware such as smart terminals and even the IBM PC. We could draw screens with entry fields -- still in character mode, mind you -- and use different colors to highlight things. The best-known programs from this era would be WordStar, WordPerfect, VisiCalc, and Lotus 1-2-3.

True GUI programs came about with IBM's OS/2, Digital Research's GEM (popularized on the Atari ST), and Microsoft's Windows. These were the programs that we wanted! Individual windows that could be moved and resized, fine control of the graphics, and lots of colors! Of course, such programs were only possible with the hardware and software to support them. The GUI programs needed hefty processors and powerful languages for event-driven programming.
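The event-driven style those GUI languages had to support can be sketched in a few lines. This is purely illustrative (in modern Python, with hypothetical event names), but it shows the core shift: instead of running top to bottom, the program registers handlers and a loop dispatches events to them as they arrive.

```python
# A minimal sketch of an event loop, the heart of event-driven programming.
# Handlers are registered up front; the loop dispatches queued events to them.

from collections import deque

class EventLoop:
    def __init__(self):
        self.handlers = {}    # event name -> handler function
        self.queue = deque()  # pending events, oldest first

    def on(self, event, handler):
        """Register a handler for an event type."""
        self.handlers[event] = handler

    def post(self, event, data=None):
        """Queue an event for later dispatch."""
        self.queue.append((event, data))

    def run(self):
        """Dispatch queued events until none remain."""
        while self.queue:
            event, data = self.queue.popleft()
            handler = self.handlers.get(event)
            if handler:
                handler(data)

# Usage: register two handlers, post events, run the loop.
loop = EventLoop()
results = []
loop.on("click", lambda data: results.append(f"clicked at {data}"))
loop.on("resize", lambda data: results.append(f"resized to {data}"))
loop.post("click", (10, 20))
loop.post("resize", (640, 480))
loop.run()
```

Real GUI frameworks add much more (nested windows, repainting, focus), but the control-flow inversion -- the framework calls your code, not the reverse -- is exactly what made these programs harder to write than their linear, text-mode predecessors.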

The web started life as a humble method of viewing (and linking) documents. It grew quickly, and web programming soon surpassed the simple display of documents. It went on to give us brochure sites, shopping sites, and eventually e-mail and word processing.

But a funny thing happened on the way to the web. We kept looking back at GUI programs. We wanted web programs to behave like desktop PC programs.

Looking back was unusual. In the transition from text-mode programs to character-based graphics, we looked forward. A few programs, usually compilers and other low-output programs, stayed in text-mode, but everything else moved to character-based graphics.

In the transition from character-based graphics to GUI, we also looked forward. We knew that the GUI was a better form of the interface. No one (well, with the exception of perhaps a few ornery folks) wanted to stay with the character-based UI.

But with the transition from desktop applications and their GUI to the web and its user interface, there was quite a lot of looking back. People invested time and money in building web applications that looked and acted like GUI programs. Microsoft went to great lengths to enable developers to build apps that ran on the web just as they had run on the desktop.

The web UI never came into its own. And it never will.

The mobile/cloud era has arrived. Smartphones, tablets, and cloud processing are all available to us. Lots of folks are looking at this new creature. And it seems that lots of people are asking themselves: "How can we build mobile/cloud apps that look and behave like GUI apps?"

I believe that this is the wrong question.

The GUI was a bigger, better incarnation of the character-based UI. Anything the character-based UI could do, the GUI could do -- and do it prettier. It was a nice, simple progression.

Improvements rarely follow nice simple progressions. Changes in technology are chaotic, with people thinking up all sorts of new ideas in all sorts of places. The web is not a bigger, better PC, and its user interface was not a bigger, better desktop GUI. Mobile/cloud computing is not a bigger, better web, and its interface is not a bigger, better web interface. The interface for mobile/cloud shares many aspects with the web UI, and some aspects with the desktop GUI, but it has its own unique advantages.

To be successful, identify the differences and leverage them in your organization.

Sunday, July 24, 2011

The tablet revolution

When Apple introduced the iPad, I was not impressed.

I was comparing the iPad to e-readers, specifically the Kindle and the Nook. And for reading, I think that the Kindle and other e-readers are superior to the iPad.

But as a general computation device, the iPad (and tablets in general) is superior not only to e-readers but also to traditional desktop PCs and laptop PCs. Significantly superior. Superior enough to cause a change in our ideas about computing hardware.

The iPad and tablets (and iPods and cell phones) use touchscreens, something that traditional computers have avoided. The touchscreen experience is very different from the screen-and-mouse experience. The touchscreen experience is more intimate and more immediate; the mouse experience is clunky. Why should I have to drag the mouse pointer all the way over to a scroll bar when I can simply reach out and drag the screen?

Apple has, once again, defined the modern computing interface. Apple defined the "mouse" UI with the Lisa and then the Macintosh computers, stealing a bunch of ideas from Xerox (who in turn had lifted them from Doug Engelbart).

The mouse interface was introduced in 1983, and it took a while to become popular. Once established, though, it was *the* way to interact with computers. The introduction of the iPhone/iPod/iPad interface set a new standard, one that is being adopted quickly. Apple is moving the interface to the newest version of OS X (the "Lion" release) and Microsoft is doing the same with its "Metro" interface for Windows 8.

The new interface expects a touchscreen. While some folks may try to fudge the interface with a plain display and a mouse, I believe that we will see a fairly rapid conversion to touchscreens.

Converting the hardware is the smaller of the two problems. The bigger problem is the software. Our current software is designed for the mouse interface, not the touch interface. Apple and Microsoft may craft solutions that let older (soon to be derided as legacy) apps run in the new environment, but there will be some apps that fail to make the transition.

The conversion of software will also give new players an opportunity to take market share. We may see Microsoft lose its dominant position with Office.

I expect two parallel tracks of acceptance: the home user and the business user. Home users will adopt the new UI quickly -- they have already done so on their cell phones, so changing the operating system to look like a cell phone is probably viewed as an improvement.

Business users, on the other hand, will face a number of challenges. First will be the cost of upgrading equipment with touchscreens. Related to that will be the training issues (probably minimal) and the politics associated with the distribution of the new equipment. If a company must roll out new equipment in phases (perhaps over several years) there will be squabbles over the selection of employees to get the new hardware.

Businesses also have to integrate the new hardware and software into their organization. New hardware can be adopted quickly; new software takes longer. The support teams must learn the software and the methods for resolving problems. The new software must be configured to conform to existing standards for security, disaster recovery, and data retention. New versions of apps must be acquired and rolled out -- but only to folks with the new equipment.

The fate of developers is hard to predict. The new user interfaces have proven themselves in the consumer space. I suspect that they can work in the business space. I'm unsure of their soundness for developers. Our programming environments, tools, and even languages are designed for keyboards, not swipes on a screen. What kinds of computers will programmers use?

One possibility is that developers will use traditional-style PCs, with traditional keyboards and traditional operating systems. But this will put PCs into a niche market (developers only) and drive prices up. Developers have long been riding the wave of popular (low-priced, commodity) equipment; I'm not sure how they will adapt to a premium market.

Another possibility, albeit a small one, is that developers will develop new languages that fit into the new user experience. This is not unprecedented: BASIC was designed to fit into timesharing systems, and Visual Basic was designed for programming in Windows.

Either way, it will be interesting.