Sunday, March 31, 2013

Windows fans see the glass half-empty

Microsoft has introduced Windows 8 and its companion Windows RT. The reaction from a number of Windows fans has been less than positive. Consider these articles:

From Infoworld: Forget about Windows 'Blue' -- stay focused on Windows 7

From InformationWeek: Tell Me Again: Why Rush Into Windows 8?

From Byte (a property of InformationWeek): Windows Blue's Got Me Down and Windows Blue/9: No Desktop? No Way!

A few observations:

Some reviews are fair criticisms, others are nostalgic almost-rants: Windows 8 is not a perfect product, nor is Windows RT, nor is the Surface tablet. Some reviews point out their weak areas: battery life, responsiveness to touch (or the lack of it), and consistency of user experience. Other reviews focus on the features lost: the "Start" button and plug-ins for Internet Explorer (IE).

This opinion is not universal: These magazines are long-time fans of PC computing. In contrast, Dr. Dobb's is neutral about Windows 8 and PC Week has published several positive articles.

This negativity is new (for Windows): Previous releases of Windows have been met with approval from almost all Windows fans.

A reluctance to change: The disapproving users want Windows to remain the way it is. They want the "Start" button. They want to "boot to the desktop".

The Windows user base is not in agreement about the new Windows 8 offering. This is not a bad thing: A collection as large as the Windows user base will most likely contain diversity of opinions.

The negativity in the user base is, I believe, a new phenomenon in the Microsoft community. Previous changes in technology (Windows 95, Windows NT, the .NET platform, the C# programming language) were greeted with cheers. People immediately looked at the new capabilities in these technologies.

(OK, I will admit that Windows Vista was greeted with raspberries. But its problems were many and complaints were legitimate. Vista lacked drivers, demanded hardware, and offered few obvious improvements beyond a pretty desktop.)

The reluctance to change is, perhaps, the most significant of these observations. Microsoft supporters have long been willing to try new things, and often looked at new tech with envy. Microsoft may have built an empire, but the programmers were still in the Rebel Alliance -- scrappy, inventive, and risk-taking.

One can put forward a number of theories for this conservative shift in the fans. Most obvious is that the fans have built small kingdoms of their own, and the new tech threatens their standing in the empire.

A slightly uglier theory posits that Microsoft fans have aged, and that the older versions of themselves are "too old for this sort of thing". (Yet Obi-Wan Kenobi did a pretty good job, in spite of that claim.)

I have two pet theories:

Theory one: The Microsoft fans were surprised by the speed of the changes. They were not expecting the large change from desktop to mobile/cloud that is Windows RT and Azure. Being the emotional creatures that we humans are, they are reacting with fear and anger.

Theory two: The Microsoft fans are angry with the deprecation (or perceived deprecation) of Microsoft technologies such as Silverlight, IE plug-ins, direct access to hardware, and self-administration of systems. The loss of these (and other) technologies means that much hard-won knowledge is now worthless, and new knowledge must be gained.

I don't know which of these theories is correct. In a sense, it doesn't matter, because I have another theory.

The reason behind the negative thinking is not important. The negative thinking is the important thing. And I theorize that the people with the negative reviews of Windows 8, Windows RT, the Metro interface, and Azure will accomplish very little with those technologies. I theorize that the people asking for the "Start" button will stay with Windows 7 and its technologies.

I further predict that the people who point at Metro and say "hey, this is cool!" will be the folks who deliver interesting apps and services for Windows 8.

Of the two groups, I prefer to work with the "hey, this is cool!" people.

Thursday, March 28, 2013

Mobile apps are (mostly) games, and that changes the industry

If you look at the spending on mobile apps, the dollars are weighted towards games.

These figures are skewed, or at least distributed the way they are, due to the nature of users and the state of mobile apps. Mobile devices tend to be used by individuals, not corporations. Yes, corporations purchase lots of Blackberry devices, iPhones, and Android phones. But individuals purchase the larger number. Why wouldn't people purchase games for their own devices? And why would a person purchase something boring like a word processor?

This distribution may change in the future, but I'm thinking that games will retain the largest portion of mobile app revenue. If they do, it may have effects on the development industry.

The last big change in the focus of the software development industry may have come with the PC revolution. That revolution gave us computers small enough to be used by individuals, and those computers spurred the advance of development tools: editors, compilers, debuggers, and IDEs. The IDE was a significant advance over the previous approach of separate tools.

Before PCs, computers were expensive resources that had to be shared judiciously. Applications were designed for large projects that provided return on investment: ballistics calculations, statistical analysis, accounting, inventory, and payroll. The PC revolution also led to the rise of word processors and spreadsheets. Individuals became much more productive in the corporation. Networks and e-mail gave us additional productivity.

Throughout the PC revolution, the focus was on corporate applications. Word processors, spreadsheets, and databases are clearly in the commercial (and government) domain. Compilers and debuggers are tools for programmers, a specific industry (and a very small number of hobbyists). Yes, there were games and there were personal digital assistants. But the money was in the office.

With mobile, the money has shifted to games. One can argue that the money was shifting to games without mobile devices: Xbox, PS2, and Wii are all successful (and profitable) platforms. Perhaps the shift was occurring and mobile happened to come along at the right time.

Game applications are definitely not for corporations. Games are for individuals, or groups of game-playing individuals.

With the shift of money from office applications to games, how will the development industry change? When people are the primary client, and games the primary type of application, will the top developers focus their effort on game development? Will non-game corporations struggle to hire developers, or get only the second-tier programmers?

Will a games-driven software industry resemble Hollywood and the movie industry? Games and movies are both entertainment, not productivity tools. Some movies have tie-in games. (And some games now have tie-in movies.)

Tuesday, March 26, 2013

Linux is a parasite

Linux is, to put it harshly, a parasite.

I like Linux. I use Linux. I encourage people to use it.

But I have to admit that Linux is successful because of the success of the IBM PC, and successful because the IBM PC was an open system.

When IBM released the PC, it documented the device and made it accessible to others. In contrast to IBM's larger systems, which were documented less thoroughly, the PC came with documentation that described almost every aspect of the hardware.

(This documentation may have been the result of the consent decree, the result of anti-trust litigation brought by the US Justice Department. Regardless of the source, the documentation was there.)

The openness of the PC made it possible for others to build accessory cards and eventually even clone PCs. (Those clone PCs, made first by Compaq and later by many others, did require some litigation of their own.)

The operating system, PC-DOS, was not made open. But enough of the hardware was defined to allow different people to create operating systems for the PC. In addition to Microsoft's PC-DOS, Digital Research produced a version of CP/M-86 for the PC, SofTech produced the UCSD p-System, and I'm pretty sure that there was a version of Forth for the PC as well.

Which brings us to Linux.

Linux, like the operating systems before it, runs on the PC architecture.

I realize that the PC of today is quite different from the original PC. (So different that none of the original PC hardware will work with today's PCs!) Yet the architecture is close enough, and the designs are open enough, that a "foreign" operating system (one designed by someone other than the PC designer) can run on the PC.

Linux has succeeded in the world, running on many devices. Lots of these devices are PCs. ("PC" in the general sense, not the specific model made by IBM.) It is easy to install Linux on an older PC, as an experiment or as a way to eke additional use out of a device that cannot run the latest version of Windows.

A few manufacturers have attempted to bundle Linux with hardware, with little success. Just about every copy of Linux on a PC has been installed "after-market": after the PC was sold and used for some period of time. (Most of my Linux PCs had Windows installed by the manufacturer, and I had to install Linux.)

Linux is taking advantage of the openness of PC hardware. In that sense, it is a parasite. The phrase may be a bit harsh, but I think it is accurate. You may not agree.

Regardless of your feelings about the term "parasite", the changes in hardware threaten Linux. Newer hardware is less open than the IBM PC. The Surface tablets from Microsoft do not use the standard BIOS and will not boot Linux. (Well, not without some hacking.) New PCs use the UEFI loader, which guards against malware by checking signatures on boot images. The Apple iPhone and iPad devices use similar technology to boot only iOS.

Hardware is becoming closed. These new devices make it difficult, not easy, to load a "foreign" operating system.

I think Linux will continue to exist. I think that some number of open, old-style PCs will continue to be made (for several years). Linux will continue to exist in the form of Android.

But the new world of closed hardware is definitely a challenge for Linux. It may have to become something more than an "after-market" option for PCs.

Sunday, March 24, 2013

Your New New Thing will one day become The New Old Thing

In IT, we've had New Things for years. Decades. New Things are the cool products and technologies that the alpha geeks are using. They are the products that appear in the trade magazines, the products that get reviews.

But you cannot have New Things without Old Things. If New Things are the things used by the alpha geeks, Old Things are the things used by the rest of us. They are the products that support the legacy systems. They are the products that are getting the job done (however clumsily).

When I started in IT, microcomputers were the New Thing. Apple II microcomputers ("Apple ][" for the purists), CP/M, IBM PCs running PC-DOS, word processors, spreadsheets, databases (in the form of dBase II), and the programming languages BASIC, Pascal, and C. The Old Things were IBM mainframes, tape drives, batch jobs, and COBOL.

I wanted to work on the New Things. I most emphatically wanted to avoid the Old Things.

Of course, the Old Things from that time were, at some earlier time, New Things. The IBM 360/370 processors were New Things, compared to the earlier 704, 709, and 1401 processors. COBOL was a New Thing compared to assembly language.

The IT industry is, in part, devoted to building New Things and constantly demoting technology to Old Thing status.

Just as COBOL slid from New Thing to Old Thing, so did those early PC technologies. PC-DOS and its sibling MS-DOS became Old Things to the New Thing of Microsoft Windows. The Intel 8088 processor became an Old Thing compared to the Intel 80286, which in turn became Old to the 80386, which in its turn became Old to the New Thing of the Intel Pentium processor.

The slide from New Thing to Old Thing is not limited to hardware. It happens to programming languages, too. Sun made C++ an Old Thing by introducing Java. Microsoft tried to make Java an Old Thing by introducing C# and .NET. While Microsoft may have failed in that attempt, Python and Ruby have succeeded at making Java an Old Thing.

The problem with building systems on any technology is that the technology will one day become an Old Thing. That in itself is not a problem, since the system will continue to work. If all components of the system remain in place, it will work with the same performance and reliability as its first day of operation.

But systems rarely keep all components in place. Hardware is replaced. Operating systems are upgraded. Peripheral devices are exchanged for newer models. Compilers are updated to new standards. And the key "component" in a system is often changed: the people who write, maintain, and operate the system come and go.

These changes stress the system and can disrupt it. A faster processor can change the timing of certain sections of code, and these changes can break the interaction with devices and other systems. A new version of the operating system can provide additional checks and detect invalid operations; some programs rely on quirky behaviors of the processor or operating system and break when the quirks are fixed.

People are a big challenge. Programmers have free will, and can choose to work on a system or they can choose to work somewhere else. To get programmers to work on your system, you have to bribe them with wages and benefits. Programmers have various tastes for technology, and some prefer to work on New Things while others prefer to work on Old Things. I don't know that either is better, but I tend to believe that the programmers pursuing the New Things are the ones with more initiative. (Some may argue that the programmers pursuing New Things are unreliable and ready to leave for an even Newer Thing, and programmers who enjoy the Old Thing are more stable. It is not an unconvincing argument.)

Early in its life, your system is a New Thing, and therefore attractive to certain programmers. But it does not remain a New Thing forever. It eventually matures into an Old Thing, and when it does the set of programmers that it attracts also changes. The interested programmers are those who like the Old Thing; the programmers pursuing New Things are off, um, pursuing New Things.

In the long term, the system graduates into a Very Old Thing and is of interest to a small set of programmers who enjoy working on Esoteric Curiosities. But these programmers come with Very Expensive Expectations for pay.

We often start projects with the idea of using New Thing technologies. This is an acceptable practice. But we too often believe that our systems and their technologies remain New Things. This is a delusion. Our systems will always change from New Things to Old Things. We should plan for that change and manage our system (and our teams) with that expectation.

Saturday, March 23, 2013

Microsoft was nicer than Google

Google recently announced that they will be terminating their "Google Reader" service. The announcement drew a fair amount of attention.

The termination of Google Reader shows us that Microsoft was much better than Google. The reaction from "the rest of us" shows us that we have certain expectations of software vendors.

Really.

Let's start with Microsoft.

Microsoft has, over the years, offered many products. The list includes operating systems (MS-DOS and Windows), languages (BASIC, Visual Basic, FORTRAN, COBOL, C, C++, C#, F#, and even Pascal), office tools (Word, Multiplan, Excel, PowerPoint, Access, Project), games, databases, and more.

It's an impressive list. What's more impressive is the lifetime of most of those offerings.

Microsoft offered MS-DOS from 1981 until, um, some time in the 1990s when Windows 95 was released. It has offered Windows in one form or another from the mid-1980s until today (and it keeps offering it). Microsoft's BASIC has a longer history than MS-DOS, starting in the mid-1970s and continuing to today. These products have been continuously offered to customers.

Now, I recognize that the products changed over time. MS-DOS grew over time, adding features and capabilities. Windows also grew. BASIC had significant changes, especially in its "Visual Basic" stages.

Microsoft may have changed its products, but it (usually) provided a path forward. MS-DOS 2.0 was replaced by MS-DOS 3.1, which in turn was replaced by MS-DOS 3.3. Windows 3.1 was replaced by Windows 95. BASIC was replaced by Visual Basic (and there were several of those), and Visual Basic 6 was replaced by VB.NET.

Some replacements were easy, and some were difficult. But they were there.

Yes, I know that some products were withdrawn with no replacement. The IronPython and IronRuby projects were terminated. The Visual J# compiler has faded into oblivion. There was no successor for Microsoft "Bob". You can add your favorite discontinued product to this list.

All in all, Microsoft has been very good at providing successor products. Perhaps this is because of the revenue that licenses provide. When Microsoft discontinues a product, it wants people to pay for a new product. What better way to keep customers than to offer a new version?

Now let's look at Google.

Google's advertising-driven revenue provides different incentives. Revenue is not generated by users (for most products). Instead, revenue comes from advertising. That advertising revenue powers the development and support of products.

Some Google products are experiments, explorations of markets and possibly technology. (Google's App Engine comes to mind as an exploration of cloud computing.)

If a product is not performing (insufficient advertising revenue), then the logical decision is to replace it with a new platform for advertising. But that new platform does not have to offer anything close to the features of the prior product.

I suspect that the outrage at Google's decision to terminate Reader was caused in part by surprise. We, the users of software, expected Google to act like Microsoft. When they did not, when Google acted in a way that varied from our expectations, we became angry.

Which is ironic, as a lot of us always cheered Google for *not* acting like Microsoft.

Wednesday, March 20, 2013

Moving to mobile/cloud requires project management

A few years ago (perhaps less than ten), the term "SOA" appeared in the tech scene. "Service Oriented Architecture" was discussed, examined, and eventually dropped from the world of tech. (Or at least the Serious People's world of tech.) People derided it with clever sayings like "SOA is DOA".

SOA defined software as a set of services, and a system as a collection of connected services. It was a reasonable approach to system design, but very different from the designs for desktop and web applications. For desktop PC applications it was overkill and for web applications it was nice but not necessary.

The change to SOA design entailed a large effort, one that was not easily justified. The rejection of SOA is quite understandable: there is no payback, no return on investment.

Until we get to mobile/cloud systems.

Mobile/cloud apps, in contrast to desktop and web apps, require SOA. The small UI app on a cell phone or tablet presents the data it receives from services running on servers. There is no other way to build mobile apps. From Twitter to Facebook, from Yahoo Mail to Yelp, mobile apps are designed with SOA.
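
To make that concrete, here is a minimal sketch of the split, using only the Python standard library: a tiny service that owns the data and returns it as JSON, and a "mobile" client that does nothing but fetch and present it. (The OrderService name, the /orders/recent path, and the sample data are invented for illustration.)

    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    class OrderService(BaseHTTPRequestHandler):
        """The back end: owns the data and serves it as JSON."""
        def do_GET(self):
            if self.path == "/orders/recent":
                body = json.dumps([{"id": 101, "status": "shipped"}]).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

        def log_message(self, *args):
            pass  # keep the demo quiet

    def show_recent_orders(base_url):
        """The 'mobile' side: fetch from the service and present it, nothing more."""
        with urlopen(base_url + "/orders/recent") as resp:
            for order in json.loads(resp.read()):
                print("Order", order["id"], "-", order["status"])

    if __name__ == "__main__":
        server = HTTPServer(("localhost", 0), OrderService)  # port 0: pick a free port
        threading.Thread(target=server.serve_forever, daemon=True).start()
        show_recent_orders("http://localhost:%d" % server.server_address[1])
        server.shutdown()

The important part is the boundary: the client knows only the service's URL and the shape of its responses, so either side can change independently.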

Building new apps for mobile/cloud with SOA designs was easy to justify. A new app has no legacy code, no existing design, no political turf that people or groups try to protect.

Converting existing applications is much harder. The change from non-SOA to SOA design is significant, and requires a lot of thought. Moreover, it can often require a reorganization of responsibilities, changing the political landscape. Which groups are providing which services? Who is negotiating and measuring the service level agreements (SLAs)? Who is coordinating the consolidation of redundant services?

These reorganizations, if not anticipated, can increase the management effort for the simplest of conversion projects. (Even anticipated reorganizations can increase the management effort.)

Changing the organization is not a technology problem. Moving from non-SOA (desktop PC apps) to SOA (mobile/cloud apps) is in part a technology problem and in part a management problem. You need to address both sides for a successful project.

Sunday, March 17, 2013

The next '2K' problem

It's been over thirteen years since the much-hyped 'Y2K' problem. We prepared for the great problem of two-digit dates and the rollover effect of the year 2000.

Now, another problem faces us. A "2K" problem, although different from the Y2K problem of thirteen years ago.

That problem is the "Windows 2000" problem.

While most folks have replaced their PCs and upgraded their operating systems, there are quite a few computers running Windows 2000 and Windows XP. I call these the "W2K" and "WXP" PCs. But these are not the typical office PCs -- these are hidden, or at least disguised.

Some of these "disguised" W2K and WXP PCs exist as:
  • Automatic teller machines
  • Ticket vending kiosks
  • Information kiosks
  • Restaurant management systems
  • Patient scheduling and billing systems for doctors and dentists

The problem is compounded by the owners of these systems. The owners (banks, transit agencies, restaurateurs, and doctors) do not think of these systems as PCs that need to be upgraded. They think of them as appliances, akin to a microwave oven or a refrigerator. Expensive appliances, perhaps, but appliances -- and appliances, in their view, do not need upgrades.

Many of these systems are sold as turnkey systems. Instead of purchasing the software and installing it on separately purchased PCs, the hardware and software are sold as a package. This reinforces the notion of appliance.

The owners of these systems have forgotten (if they ever knew) that these "appliance" systems are PCs running Windows. They will be in for a rude surprise when they find out.

When will they learn that their reliable, use-it-every-day system must be replaced?

For some, the notice will come early. Some vendors will actively approach their customers and recommend new versions (complete with a newer version of Windows and the hardware to support it). Such an upgrade is not cheap, however, and many customers may balk.

Other users will learn when their system fails. The failure may be caused by hardware (a drive stops working) or software (their new printer doesn't have a driver for Windows 2000) or by a virus (Microsoft will stop sending out security updates next year).

This W2K/WXP problem is unlike the Y2K problem in that failures will happen over time, not all at once. But it is worse than the Y2K problem because people are not expecting it, and are not prepared.

The lesson here is that businesses are built on many services, and a well-run business must be aware of them, and aware of their costs and reliability. We have stable and regulated services for electricity, water, telephone, and internet.

Automated systems are another form of service. Less regulated, and perhaps less stable. An upgrade to a new PC running Windows 7 may cost too much for the corner restaurant. That is an ugly way to lose a business.

Tuesday, March 5, 2013

Twin debugging

Are debuggers programmable?

From time to time, I run two instances of the debugger. I want to step through two instances of a program, feeding each program slightly different data. I want to identify the point at which the code execution differs between the two sets of input data.

Each instance of the debugger has a copy of a program under development. I feed the two instances of the debugged program the two sets of data, and slowly work my way through the code.

Because the debuggers are not programmable, I must perform every operation manually. I must step through the code in each debugger, often alternating between the two. Sometimes I can place breakpoints at strategic points in the code -- often guesses. The operation is mind-numbing. Sometimes it is so mind-numbing that I miss the important point of the execution: the moment when the code sequences differ.

What I want is a tandem debugger, kind of like a tandem bicycle. I want a mega-debugger that can load two instances of a program, allow me to start both instances and feed them data, and monitor the execution of both. When the code sequences differ, I want the mega-debugger to stop and inform me.

I call this "twin debugging", or "dual debugging". I find that I need this often, especially when debugging the data for programs or when exploring unknown code.

The problem is that the debugger is not, generally, a programmable thing. All too often, debuggers are enclosed by the IDE.

A quick search of the internet shows that there are some programmable debuggers, but no one seems to be using them to drive two instances of the same program. (Or two slightly different versions of the same program, which is a slightly trickier challenge.)
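
A rough approximation of the idea can be scripted without a conventional debugger at all. The sketch below, in Python, uses the sys.settrace hook to record the lines executed by two runs of the same code with different inputs, then reports the first point at which the traces diverge. (The buggy() function and its inputs are invented for illustration; a real twin debugger would compare variable state and handle two separate processes.)

    import sys

    def trace_lines(func, *args):
        """Run func(*args) and return the ordered list of (function, line) steps."""
        steps = []
        def tracer(frame, event, arg):
            if event == "line":
                steps.append((frame.f_code.co_name, frame.f_lineno))
            return tracer
        sys.settrace(tracer)
        try:
            func(*args)
        finally:
            sys.settrace(None)
        return steps

    def first_divergence(trace_a, trace_b):
        """Return (index, step_a, step_b) where the traces first differ, or None."""
        for i, (a, b) in enumerate(zip(trace_a, trace_b)):
            if a != b:
                return i, a, b
        return None  # identical, or one trace is a prefix of the other

    def buggy(n):                    # an invented program under test
        total = 0
        for i in range(n):
            if i == 3:               # this branch fires only when n > 3
                total -= i
            else:
                total += i
        return total

    if __name__ == "__main__":
        print(first_divergence(trace_lines(buggy, 3), trace_lines(buggy, 5)))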

Monday, March 4, 2013

A toaster for both sides

Companies have created procedures to minimize the cost of customer interaction, and in doing so they have minimized person-to-person interactions. From automated telephone response systems to web pages, companies have efficiently automated their interaction with customers.

The drive to reduce interaction has led to the automation of interactions. In a sense, companies now sell products but not service. Even services such as cell phone or cable TV are reduced to interaction-less transactions, with web sites to handle billing and configuration. Interaction is handled not person-to-person but person-to-computer.

It's not a bad arrangement. As a customer, I can log on to my bank's web site (or the cable TV web site, or the phone web site) and do just about everything I need to do.

In a sense, every service has become a toaster. I can walk into a store, buy a toaster, take it home, and use it, all without the assistance of a salesperson. The toaster manufacturer does not need to provide support, and I don't need it either.

So I was surprised when the New York Times sent me an e-mail in response to my review of their Android app, and asked me to contact them. (After installing the app, I had significant issues that prevented me from reading content. My review stated as much, and my evaluation was the lowest possible.)

My thought at this proposal (for interaction) was: "um, no".

The e-mail was sent from a 'noreply' e-mail address. The irony of such an e-mail asking for a reply is, at best, amusing. (To be honest, the e-mail provided a different e-mail address for responses. Which is in itself an odd arrangement: Using an e-mail address to ask for a response to another e-mail address. Why not simply use one address?)

But the larger issue is this: After years of training me to expect no interactions with a human, I no longer want them. Some of this is due, I believe, to my introvert nature. Yet some is due to the complete automation of interactions. I can sign up for a bank account without interacting with a person. I can pay bills. I can shop for goods. I can read the news and listen to podcasts. (Listening to a recorded program does not count as interaction.)

I have even stopped answering my land-line phone. I let the answering machine pick up and deal with the telemarketers. (My friends and family know to call my cell phone, which I answer when the caller is known to my address book.)

I have not only succumbed to the push for automated (efficient) interactions, I embrace them.

When a company goes "off program" and tries to interact with me, I am uncomfortable. I don't want the effort of talking with a stranger.

To some extent, the issue is one of power and ego. Who in the vendor/supplier relationship decides on the amount of interaction? How much say do I have?

It is also an issue of control. A voice phone call requires both parties to be available at the same time, and to block off other distractions. (Instant messaging imposes the "same time" requirements but eases the distractions mandate.) Shifting from automated interactions to real-time interactions places a demand on both parties.

In short, the New York Times support team wants to convert the toaster into a product that requires interaction. Which is not what I want. I want the toaster. I want the New York Times app to work, without the need to talk to a salesman or a support tech.

(To be fair, a new version of the app did resolve the problem. I am now using the app with no issues.)

Companies should think about the interaction for their products. Some might think that increasing the level of interaction will always be welcome, possibly on the theory that customers like attention. This may be true for some customers, but I know at least one customer who wants to keep interaction at the current level.

Saturday, March 2, 2013

Any keyboard you like

For a programmer, the most important aspect of a computer may be the keyboard. It is through the keyboard that we write code, that we control the text editor (or IDE), and issue most commands.

Being of a certain age, my first experience with keyboards was not with a computer but with a typewriter. It was my parents' portable, manual typewriter; I have forgotten the brand. It was hard to use and it smelled of ink and machine oil. Yet it was a fun introduction to the keyboard.

Typewriters were fun, and computers were more fun. The keyboards were more modern, and had more keys (some of which made little sense to me).

I have used several keyboards, and the most enjoyable were the DEC keyboards. DEC keyboards were sleek and sophisticated compared to the other keyboards (Teletype ASR-33, Lear-Siegler ADM-3A, and IBM 3270). I also enjoyed the Zenith Z-100 keyboard (sculpted like an IBM Selectric typewriter) and the IBM Model M keyboard.

Typing on a good keyboard is a joy. Typing on a mediocre keyboard is not.

Sadly, today's PCs cannot use these venerable keyboards. Desktop PCs want to talk to a keyboard through USB, and tablets want Bluetooth.

Yet all is not lost. Virtual keyboards may help.

Not the on-screen virtual keyboards of smart phones and tablets, but a different form of virtual keyboard. A keyboard that is drawn (usually with lasers) on a surface, with an accompanying scanner to detect "keypresses".

It strikes me that these keyboards can be used on a variety of surfaces. I'm hoping that some will be programmable (or at least configurable) so that I can create my own layout. (For example, I want the "Control" key on the ASDF home row.) I also have preferences for the arrow, HOME, and END keys. A virtual keyboard should allow for re-positioning of the keys.
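
As a thought experiment (nothing here corresponds to a real product or API), a configurable layout could be as simple as a table that maps projected key positions to key codes, with a helper to move a key. The position scheme, the reassign() helper, and the sample data are all invented:

    # A layout is just a table from projected key positions to key codes.
    DEFAULT_LAYOUT = {
        ("row2", 0): "CapsLock",   # the key to the left of the ASDF home row
        ("row2", 1): "A",
        ("row2", 2): "S",
        ("row2", 3): "D",
        ("row2", 4): "F",
    }

    def reassign(layout, position, key):
        """Return a copy of the layout with one position bound to a new key."""
        updated = dict(layout)
        updated[position] = key
        return updated

    # Put Control on the home row, as described above, with no change to hardware.
    my_layout = reassign(DEFAULT_LAYOUT, ("row2", 0), "Control")
    print(my_layout[("row2", 0)])    # -> Control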

Re-positioning the keys is nice, but it doesn't let me use the old keyboard. The surface is still a flat, unyielding surface with no feedback.

But do scanners really care about a flat surface? Can they be used on a lumpy surface? (I'm sure that some are quite fussy, and require a flat surface. But perhaps some are less fussy.)

If a virtual keyboard can be used on a flat surface, and I can re-program the key layouts... then perhaps I can configure the virtual keyboard to emulate an old-style keyboard (say, the DEC VT-52). And perhaps I can use the virtual keyboard on a lumpy surface... say, a DEC VT-52 keyboard.

Such an arrangement would let me use any keyboard I wanted with my computer. The virtual keyboard would do the work, and wouldn't care that I happened to rest my fingers on an old, outdated keyboard.

I would like that arrangement. It would give me the layout and feel of the keyboard of my choice. I wouldn't have to compromise with the current set of keyboards. All I need is a programmable virtual keyboard and a real keyboard that I enjoy.

Now, where is that old Zenith Z-100?