Wednesday, January 30, 2013

Overfishing your ecosystem

Vendors must execute their business carefully. Is the business providing a platform for others? A turn-key system? A multi-platform application? Doing one of these things well is difficult. Doing all of them well is much harder. It is easy to grow one sector at the expense of another.

Microsoft fell into this trap.

Early in its history, Microsoft was the provider of BASIC. The BASIC-80 interpreter was the best, and everyone considered it the standard. Computer manufacturers purchased the rights to put Microsoft's BASIC on their computers: Apple, Commodore, and even IBM.

With the introduction of the PC, Microsoft became not only a supplier of languages but a supplier of operating systems. Shortly thereafter, Microsoft offered application software: the first version of Word and a spreadsheet called Multiplan.

Microsoft was successful. Too successful, one might argue. MS-DOS became the dominant operating system, overpowering CP/M-86, the UCSD p-System, and even Microsoft's own Xenix. When PC hardware was capable of supporting graphical environments, Microsoft introduced Windows and outperformed Digital Research's GEM, DESQview, and even IBM's OS/2.

Microsoft's success was not limited to operating systems. The Windows versions of Word and Excel (the replacement for Multiplan) drove out the popular packages WordStar, WordPerfect, Lotus 1-2-3, and Quattro Pro.

By the year 2000, Microsoft held dominance in a number of markets:

Operating systems: The licensing arrangements with PC manufacturers ensured the dominance of PC-DOS and Windows. (The licenses demanded royalties for every PC shipped, with or without Windows.)

Networking software: Microsoft bundled network software into Windows, destroying the market for add-on products like Novell NetWare.

Office applications: Microsoft had Word, Excel, and PowerPoint. Microsoft products stored files in proprietary, hard-to-decode formats. Their products were good at importing documents from other vendors but not so good at exporting them.

Later, Microsoft introduced Outlook for e-mail and scheduling.

Language compilers and IDEs: Visual Studio was the dominant package. Competitors Borland and Symantec exited the business. Even smaller vendors such as NuMega and Rogue Wave left the field.

Microsoft's efforts did not end there.

Web browser: Microsoft fought off the threat from Netscape and made Internet Explorer the dominant browser.

Database: Microsoft introduced MS-Access as a low-end database (and purchased FoxPro), and later introduced SQL Server for the high end. (Competition at that level remains, with IBM's DB2 and Oracle's offerings.)

Project management: Microsoft introduced Project and took the market from Harvard Project Manager.

The problem for Microsoft, as I see it, is that Microsoft overfished their ecosystem. Their platform business was successful. Their application business was successful. But those successes came at the expense of the developers of large products. With Microsoft's ability to move into any market, few wanted to develop large applications or solutions. Why spend the time and effort when success would draw the attention of the big M?

I think that it is no coincidence that the new "big apps" of Facebook and Twitter took root in the web, away from Microsoft's empire of Windows.

I also think that the "overfishing of the Microsoft ecosystem" led to the rise of Apple. I see several factors:

People were angry with Microsoft. They were tired of losing the battle. Not just vendors, but users. We users were tired of committing to a non-Microsoft product, implementing it, learning it, and adapting it to our business, only to find that Microsoft would roll out a competing product and take over the market (forcing us to change to the Microsoft product). Enough forced migrations lead to resentment.

Microsoft became complacent. They became used to the idea of the Microsoft empire, and did not see a need to compete. Microsoft let Visual SourceSafe languish for years, and developed the successor product (TFS) only after a number of vendors started introducing new technologies and capturing market share.

Microsoft chose conservative, selfish visions of the future. They enhanced products with features but not with benefits for users. Windows Vista looked pretty but offered little in the way of direct business benefits. The Microsoft Office "ribbon" interface provided questionable benefit at a high cost. New versions of Visual Studio offered modest improvements. The "advances" offered by Microsoft were designed to benefit Microsoft more than the customer.

Now, Microsoft has the challenge of revitalizing their ecosystem. Can they woo back developers (and companies) for the new Windows Store? After years of abuse, will people want to play in the Microsoft space?

Oddly enough, I think that the answer is "yes". I think that a lot of developers are wary of Apple and unsure of Google and Android. I think Microsoft can be successful with its new platform.

But they will have to play nicely with others.


Wednesday, January 23, 2013

What you need for analytics

The next fad seems to be analytics. Some might call it "Big Data". Some might argue that the two are not the same. (They're right, the two are different.)

The use of analytics (or Big Data; the argument is the same) requires several things. You need more than just Hadoop and someone who knows the Map-Reduce pattern.
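For readers who have not seen it, the Map-Reduce pattern itself is small. Here is a minimal sketch in plain Python (no Hadoop involved; the records and the "category" field are invented for illustration):

from collections import defaultdict

def map_phase(records):
    # the "map" step: emit a (key, value) pair for each input record
    for record in records:
        yield record["category"], 1

def reduce_phase(pairs):
    # the "reduce" step: group the pairs by key and combine their values
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

records = [{"category": "returns"}, {"category": "sales"}, {"category": "sales"}]
print(reduce_phase(map_phase(records)))   # {'returns': 1, 'sales': 2}

The point of the list that follows is that this is the easy part.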

Here's a short list of things to consider:

Your data: You need a way to collect your data. You need people who know where your data resides, the format of your data, and the frequency at which it changes.

Your business: Most organizations have data that encodes information in specific forms. You need people who understand your business and who can interpret your data.

Your management style: Different organizations have different styles of management. You need people who can prepare the data in formats and on frequencies that make sense to your decision-makers. You also need the tools to present it. Those tools can be the analytics software or they can be separate tools and presentation mechanisms, like a printed report.

Resources: Analytics, despite the vendor promises, does not happen "by itself". You need people, computers, storage, networks, and software to make it happen.

An open mind: Once you have all of the above, you can start using analytics. Are you prepared to benefit from it?

Persistence and patience: Analytics systems must be tuned to provide the information you can use. The charts and graphs that contain the really useful results are often not the ones we pick at the start. It is only after we "play with the data" that we identify the pertinent analyses.

A while back, I worked on a project to analyze source code. It was not a typical analytics (or Big Data) project, but we had a lot of code and it was big-ish. We dedicated resources and started with simple analyses (lines of code, lines of non-comment source code). As we developed our knowledge, we changed the analysis. We shifted from simple lines-of-code to complexity, then to duplicate code, and finally to "churn": the rate of change in the code base.
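The first of those phases is easy to sketch. A simplified version in Python, assuming C-style source files and counting only "//" line comments (our real parsers were far more specific than this), might look like:

import os

def count_lines(path):
    # returns (total lines, non-comment lines) for one source file
    total = non_comment = 0
    with open(path, errors="ignore") as f:
        for line in f:
            total += 1
            stripped = line.strip()
            if stripped and not stripped.startswith("//"):
                non_comment += 1
    return total, non_comment

def scan(root, extensions=(".c", ".cpp", ".h")):
    # walk a source tree and yield per-file counts
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(extensions):
                full = os.path.join(dirpath, name)
                yield full, count_lines(full)

The later phases (complexity, duplication, churn) each needed far more machinery than this.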

Each of these phases took resources. We needed processing power, storage, and network access. We needed people to code some very specific parsers for our data (the source code) and some custom data storage techniques. (NoSQL was not available at the time.)

Each of these phases took time. We were feeling our way in the dark, which is frequently the case with analytics projects. (You don't know what you're looking for until you find it.) In our case, the useful bit of information was lines of code changed, which turned out to be a consistent predictor for defect counts. (We found a steady pace of 5 defects per 1000 lines of code changed.)
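As a worked example of that predictor (the 5-per-1000 figure is the rate we observed; your code base will have its own):

def expected_defects(lines_changed, defects_per_kloc=5):
    # churn-based predictor: defects expected per 1000 changed lines
    return defects_per_kloc * lines_changed / 1000.0

print(expected_defects(12000))   # 60.0 expected defects for 12,000 changed lines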

This predictor was useful to determine the effort for testing. (We were, uh, using a technique other than Agile Development.)

But the point is not about development methods. The point is that successful analytics require resources and time. They are often experimental.

Of all the items you need, an open mind is the most important. Once you have the analysis, once you have the result, how do you choose to use it? Do you believe the surprising result? Are you willing to change your processes in light of the analysis? Or do you look for reports that confirm your preconceived beliefs, and keep your current processes in place?

Monday, January 21, 2013

What is a PC?

It's a simple question -- "what is a PC?" -- yet the answer is complicated.

If we use Mr. Peabody's Wayback machine to travel to September 1981, the answer is simple. A "PC" (that is, a personal computer) is an IBM model 5150 with its gray cover, detached keyboard (with 83 keys), and either an IBM Color Display (5153) or an IBM Monochrome Display (5151). It has an Intel 8088 processor, probably one or two floppy disk units, and a video adapter card.

At that time, that was a PC. Any other equipment was not. The PC name was strongly associated with IBM.

Over time, the concept of "PC" expanded. IBM introduced the IBM PC XT (model 5160), which meant that there were *two* models of IBM PC.

IBM introduced adapters for memory and ports. Other vendors did also. Compaq introduced their portable PC, fighting (and eventually winning) the battle for a compatible BIOS. Hercules made a video adapter that displayed graphics on monochrome displays (the IBM monochrome display adapter displayed only text).

In 1984 IBM introduced the IBM PC AT which used the Intel 80286 processor. Now there were three types of PCs from IBM, some with different processors, and bunches from other vendors. Some had more memory, some had different adapters. IBM introduced the Enhanced Graphics Adapter (EGA) with the IBM PC AT.

Through all of these changes, the two constants for PCs were this: they ran PC-DOS (or MS-DOS), and they ran Lotus 1-2-3. The operating system and that one application defined "PC". If the device ran PC-DOS and Lotus 1-2-3, it was a PC. If it did not, it was not. (And even this definition was not quite true, since several computers ran MS-DOS and special versions of Lotus 1-2-3, but were never considered to be "PC"s. The Zenith Z-100, for example.)

Moving forward to the early 1990s, our definition of PCs changed. It was no longer sufficient to run PC-DOS and Lotus 1-2-3. Instead, the criteria changed to Windows and Microsoft Office. Those were the defining characteristics of a PC. (Even in the late 1990s, when Compaq and Microsoft built the "Pocket PC", the device was considered a PC.)

Today, when we use the term "PC", we think of a set of devices. These include desktop computers, laptop computers, virtual computers running on servers, and now, with the Microsoft Surface, tablets. The operating system has expanded to include Linux (but not Mac OSX), and there is no definitive application. We use the phrases "Windows PC" and "Linux PC". Windows PCs must run Microsoft Windows and Microsoft Office, but a Linux PC needs only a version of Linux.

We have the puzzle of an Apple MacBook running Linux -- do we call this a PC? I am tending to think not. Apple's advertising and branding have been strong.

The one common characteristic is that all of these devices require the user to act as an administrator. The user must install new software, ensure updates are installed, and diagnose problems. This requirement separates a PC from a tablet. Tablets do not require the user to "install" software, beyond selecting the software from a menu. Tablets do not require the user to be an administrator. Updates are applied automatically, or perhaps after a prompt. Network adapters do not need to be configured.

Let's take administration as the dividing line between PCs and tablets. Some might call it "ease of use".

Yet even this definition is less than clear. Apple's OSX is better at installing applications: just drag the install package to the "Applications" folder. Linux has made improvements too, with Ubuntu's "Software Center" that lets one pick an application and install it. Microsoft's Windows RT is quite close to Apple's iOS for iPhones and iPads, which are clearly not PCs.

Despite the lack of a bright line in devices and implementations, I believe that we will look back and consider PCs to require administration, and non-PCs (tablets, smartphones, etc.) to allow use without the administrator role.

So that's my answer: If you need an administrator, it's a PC. If you don't, then it isn't.

Maybe the answer isn't so complicated.


Thursday, January 17, 2013

Self-service does not mean customers are servants

The drive for efficiency leads companies to eliminate employees. Sometimes they do this under the guise of "self-service" machines or kiosks.

For example, banks eliminated tellers by using ATMs.

Now grocery stores are eliminating checkout clerks with self-service checkout kiosks. These are small, unmanned stations at which a customer can scan and bag their purchases without the assistance of a store employee. It is a way for grocery stores to reduce work -- at least from their point of view. In effect, the store has not reduced work but transferred it to the customer. The work is "off the books" as far as the corporate accountants are concerned -- but not to the customers.

Perhaps it is beneficial to review the actions that occur at a grocery store:

  • A customer enters the store
  • The customer travels through the store and selects items from shelves, placing them in a cart
  • The customer brings the items to the checkout counter
  • The customer (or sometimes the checkout clerk) removes the items from the cart
  • The checkout clerk reviews each item and records the price
  • The checkout clerk informs the customer of the total
  • The customer pays for the items
  • The customer (or sometimes the checkout clerk, or the bagger) puts the items back into the cart
  • The customer leaves with the purchased items

There is a lot of "adding to the cart" and "taking out of the cart". More than is necessary, perhaps.

At the manned counter, especially before the days of bar codes and on-line inventory systems, a customer had to take all of their items and hand them to the checkout clerk, one at a time. This was necessary for the clerk to calculate the proper payment amount.

Over time, the checkout process has become more efficient. Instead of manually tabulating the result, a register was used to perform the calculations. Instead of handing items to the clerk one at a time, a moving belt was introduced, allowing one customer to put items in place while the previous customer paid. Bar codes eliminated the need for clerks to hunt for and read a price tag, and eliminated the need for the clerk to key-punch the amount.

But despite all the improvements, the general process of "items from shelf to cart", "from cart to clerk", and "from clerk to cart (again)" remains. The self-service checkout kiosks keep these steps; the change is that the customer also acts as the checkout clerk.

Perhaps there is a simpler process. Why should the customer remove the items from the cart only to put them back into the cart (albeit this time in bags)?

The answer to that question is, of course, so that the register can tally the cost of the items. Implied in that answer is the assumption that the checkout kiosk must be the device that tallies the items.

But must the kiosk be the tally-er?

Suppose we let the cart perform the tallying. As a customer adds an item to a cart, the cart could record the event, look up the price, and display a running tally for the customer. The cart could even have a slot for payment cards or NFC equipment for cell phone payments.

With such an arrangement, the customer could simply walk around the store, select items, and when finished swipe a credit card and leave the store. There would be no un-pack/re-pack step.
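A minimal sketch of that running-tally logic, in Python. The class, the price_lookup callback (standing in for the wi-fi call to the store's point-of-sale system), and the SKU handling are all hypothetical; the point is only the tally:

class SmartCart:
    def __init__(self, price_lookup):
        self.price_lookup = price_lookup   # callback into the store's POS system
        self.items = []

    def add_item(self, sku, weight=None):
        # triggered when the cart detects an item placed inside it
        price = self.price_lookup(sku, weight)   # weighed items pass their weight
        self.items.append((sku, price))
        return self.total()                      # running tally for the cart's display

    def remove_item(self, sku):
        # triggered when the cart detects an item removed
        for i, (item_sku, _) in enumerate(self.items):
            if item_sku == sku:
                del self.items[i]
                break
        return self.total()

    def total(self):
        return sum(price for _, price in self.items)

The hard part is reliably detecting what went into (and came out of) the cart, which is what the list below is about.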

This idea requires work:

  • We need some way for carts to identify that items have been placed inside them. (And not simply nearby, perhaps in an adjacent cart.)
  • Carts will also need a way to identify when an item has been removed
  • Carts must have some connection (perhaps wi-fi) to the store's point-of-sale control system, to look up prices
  • Some items are sold by weight (deli items and some fruits and vegetables) and the cart would have to weigh the item

This idea is not perfect:

  • All of these features make carts more expensive
  • By constantly displaying the total, "smart carts" may discourage purchases

Yet there are advantages:

  • Stores gain floor space, as they do not need checkout kiosks
  • There are no lines for checkout (not even for the self-service kiosks, which we got rid of in the previous item)
  • As a customer selects an item, smart carts can recommend other items (or perhaps offer coupons) in real time

So maybe we can look at self-service in a new light. Instead of pushing the work onto the customer, perhaps we can eliminate the work completely. Banks eliminated tellers by using ATMs, but they eliminated many more positions by using a different technology: credit cards.

Credit cards made it possible for people to purchase without checks or cash. After their introduction, banks saw fewer people withdrawing cash for purchases (fewer people at teller windows) and fewer checks for purchases (fewer checks to be processed).

And the ability to purchase items with a credit card was part of what made the internet and the web useful to many people. It drove more business, too.

I think that the successful technologies will be the ones that:

  • Eliminate tasks, or greatly reduce their effort
  • Enable people to do new things

The technologies that turn customers into servants? Those, I think, will have a short life.

Thursday, January 10, 2013

If tablets don't replace PCs, what will they do?

In my previous post I argued that tablets would not replace PCs. That position leads to an obvious question: If we keep PCs, then what will we do with tablets?

In the office, I can see tablets replacing laptops for some tasks. Instead of dragging a laptop to a meeting and fighting for power and network connections, people can easily carry tablets. The computational tasks at meetings are presentations, light note-taking, reading e-mail, and coordinating calendars. These tasks can be easily handled with tablets.

Some folks have floated the idea of eliminating the projector and displaying the presentation on attendees' tablets. (That could also help folks attending the meeting at remote locations.)

In the home, I can see tablets taking some of the tasks of PCs and laptops. The chief task: movies. I suspect music will be handled by phones, but movies need the larger screens of tablets.

Also in the home, games (at least the low-end games like "Angry Birds") will probably migrate to tablets, again because of the screen size. Books, magazines, and newspapers will be on tablets. Casual web browsing: shopping, news sites, photo management (but not heavy photo manipulation like Photoshop or GIMP).

Tablets will be less of a computing device and more of a personal assistant.

So maybe tablets do replace something. Maybe tablets replace not the PC but the PDA (the Personal Digital Assistant). Maybe tablets replace... the Palm Pilot.

Wednesday, January 9, 2013

Tablets will not replace PCs -- at least not as you think they might

Tablets have seen quite a bit of popularity, and some people now ask: When will tablets replace PCs?

To which I answer: They won't, at least not in the way most folks think of replacing PCs.

The issue of replacement is subtle.

To examine the subtleties, let's review a previous revolution that saw one form of technology replaced by another.

Did PCs replace mainframes? In one sense, they have, yet in another they have not. PCs are the most common form of computing, used in businesses, schools, and homes. Yet mainframes remain, processing millions of transactions each day.

When microcomputers were first introduced, one could argue that PCs had replaced mainframes as the dominant form of computing. Even before the IBM PC, in the days of the Apple II and the Radio Shack TRS-80, there were more microcomputers than mainframes. (I don't have numbers, but I am confident that several tens of thousands of microcomputers were sold. I'm also confident that there were fewer than several tens of thousands of mainframes at the time.)

But sheer numbers are not an accurate measure. For one thing, a mainframe serves multiple users and a PC serves one. So maybe PCs replaced mainframes when the number of PC users exceeded the number of mainframe users.

The measure of users is not particularly clear, either. Many users of microcomputers were hobbyists, not serious day-in-day-out users. And certainly mainframes processed more business transactions than the pre-internet PCs.

Maybe we should measure not the number of devices or the number of users, but instead the conversations about the devices. When did people talk about PCs more than mainframes? We cannot count tweets or blog posts (neither had been invented) but we can look at magazines and publications. Prior to the IBM PC, most technology magazines discussed mainframes (or minicomputers) and ignored the microcomputer systems. There were some microcomputer-specific magazines, the most popular one being BYTE.

The introduction of the IBM PC generated a number of magazines. At this point the magazine content shifted towards PCs.

There are other measures: number of want ads, number of conferences and expos, number of attendees at conferences and expos. We could argue for quite some time about the appropriate metric.

But I think that the notion of one generation of computing technology replacing another is a slippery one, and perhaps a false one. I'm not sure that one technology really replaces an earlier one.

Consider: We have advanced from the mainframe era to the PC era, and then to the networked PC era, and then to the internet era, and now we are venturing into the mobile/cloud era. Each successive "wave" of technology has not replaced the previous "wave", but instead expanded the IT sphere. PCs did not replace mainframes; they replaced typewriters and dedicated word-processing systems and expanded the computational realm with spreadsheets. The web did not replace PCs; it expanded the computational realm with on-line transactions -- many of which are handled with back-end systems running on... mainframes!

A new technology expands our horizons.

Tablets will expand our computing realm. They will enable us to do new things. Some tasks of today that are handled by PCs will be handled by tablets, but not all of them.

So I don't see tablets replacing PCs in the coming year. Or the next year. Or the year after.

But if you insist on a metric, some measurement to clearly define the shift from PC to tablet, I suggest that we look not at numbers or users or apps but at conversations. The number of people who talk about tablets, compared to the number of people who talk about PCs. I would include blogs, tweets, web articles, advertisements, podcasts, and books as part of the conversation. You may want to include a few more.

Whatever you pick, look at them, take some measurements, and mark when the lines cross.
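If you wanted to automate that measurement, a crude sketch might look like this (the corpus, the keywords, and the month labels are all yours to choose; matching on a bare substring like "pc" is obviously rough):

from collections import Counter

def monthly_mentions(posts, keywords=("tablet", "pc")):
    # posts: iterable of (month, text) pairs, e.g. ("2013-01", "...")
    counts = {kw: Counter() for kw in keywords}
    for month, text in posts:
        lowered = text.lower()
        for kw in keywords:
            if kw in lowered:
                counts[kw][month] += 1
    return counts

def crossover(counts, rising="tablet", falling="pc"):
    # the first month in which tablet mentions exceed PC mentions
    for month in sorted(set(counts[rising]) | set(counts[falling])):
        if counts[rising][month] > counts[falling][month]:
            return month
    return None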

And then blog or tweet about it.

Sunday, January 6, 2013

The revenge of Prolog

I picked up a copy of Clocksin and Mellish, the "Programming in Prolog" text from 1981. (Actually, it's the second edition from 1984.)

This tome gave me a great deal of difficulty back in the late 1980s. For some reason, I just could not "get" the idea of the Prolog language.

Reading it now, however, is another story. Now I "get" the idea of Prolog. A few years of experience can have a large effect on one's comprehension of "new" technology.

What strikes me about Prolog is that it is not really a programming language. At least, not in the procedural sense. (And it isn't. Clocksin and Mellish state this up front, in the preface to the first edition.)

If Prolog isn't a programming language, then what is it? As I see it, Prolog is a language for interrogating databases. Not SQL databases with tables and rows and columns, but databases that hold bits of information that Clocksin and Mellish call "facts".

For example, one can define a set of facts:

male(albert).
male(edward).
female(alice).
female(victoria).
parents(edward, victoria, albert).
parents(alice, victoria, albert).

Then define a relationship:

sister_of(X,Y) :- female(X), parents(X, M, F), parents(Y, M, F).
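% read: X is a sister of Y when X is female and X and Y share the same mother M and father F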

And then one can run a query:

sister_of(alice, edward).

Prolog will answer "yes", working out that Alice is the sister of Edward.

Today, we would call this database of facts a NoSQL database. And that is what the Prolog database is. It is a NoSQL database, that is, a collection of data that does not conform (necessarily) to the strict schema of a relational database. The Prolog database is less capable than modern NoSQL databases: it is limited to text items, numeric items, and aggregations of items. Modern NoSQL databases can hold these and more: pictures, audio files, and all sorts of things.

Finding an early version of the NoSQL database concept in the Prolog language is a pleasant surprise. For me, it validates the notion of NoSQL databases. And it validates the idea of Prolog. Great minds think alike, and similar solutions to problems, separated by time, confirm the value of the solution.

Friday, January 4, 2013

Pick a cloud... any cloud

Lots of folks have advice for cloud projects. Some folks have offered a traditional bit of advice: "Pick a reliable vendor who will stay in the industry. You want a vendor for a long-term relationship."

Cloud offerings, even in early 2013, are diverse and unsettled. Cloud services and infrastructures are growing and being refined. New vendors are entering the market. (I just read about GE entering the cloud services market.)

Cloud technology, if I may use the phrase, is up in the air. It is changing, as I write this. There is no clear leader in the field. Different vendors offer different types of cloud services, some compatible and some not. The offerings of cloud services remind me of the days before the IBM PC, when many vendors offered different microcomputer systems (Apple, Radio Shack, Commodore, North Star, Sol, etc.).

The problem with picking a vendor that will stay in the field is that no one knows which vendors will stay, or which vendors will keep their technology. One can argue that Amazon.com is the market leader, and one might pick them in the belief that they will keep their lead. Or one could pick Microsoft and their Azure platform, in the belief that Microsoft will endure as a company. These are good guesses, but in the end they are guesses. There is no guarantee that these companies will remain in the cloud services market, or that they will keep their current set of offerings.


Here's my advice:

Become familiar with cloud technologies. You don't have to train all of your staff or convert all of your systems, but develop enough knowledge to evaluate the different platforms and their suitability to your business.

For an established company, develop some small projects with cloud technology. Instead of converting your major systems to cloud, convert some minor ones. (Or develop some new products.) Assign a portion of your staff to these pilot projects with the technology. (Or better yet, a few different technologies.)

For a start-up, the plan is different. Start-ups have no legacy systems, and no risk of offending existing customers. On the other hand, they have very few products (possibly only one). If you're developing your one and only application in the cloud, you want to get it right. My recommendation is to try a few cloud systems, commit to one, and develop with an eye to migrating to another system. Avoid getting locked in to a single vendor.

Actually, avoiding vendor lock-in is a good idea for anyone using cloud services, start-up or established. Building systems this way is more difficult and more expensive. It is a hedge against the vendor going out of business (or out of the cloud business). As a hedge, it is an expense that must be evaluated carefully by the managers of the company.
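One common way to build that hedge is to keep a thin interface of your own between the application and the vendor's service, and confine each vendor's API to an adapter. A sketch, in Python (the interface and the in-memory stand-in are hypothetical, not any vendor's SDK):

from abc import ABC, abstractmethod

class BlobStore(ABC):
    # the application talks only to this small interface of our own
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    # a stand-in backend for development and tests
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

A second adapter, wrapping whichever vendor you commit to, implements the same two methods; switching vendors then means writing one new adapter rather than rewriting the application. That extra layer is the cost mentioned above.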

But my big advice is to avoid picking the "winning" cloud services provider. The market is immature and too volatile. No standard has emerged, and any vendor can fail.

Cloud services use cheap, easily configured servers (usually virtualized, but that's not important here) that can be quickly replaced in real-time should any fail at any time. A little bit of that thinking applied to vendors (any can fail at any time) may be a good guide when designing a cloud-based system.

Wednesday, January 2, 2013

Windows is the de facto standard, and that's a bad thing

A simple phrase can conjure up interesting memories, and such is the case with the phrase "de facto standard".

From its 3.1 release, Windows has been the standard. It was a de facto standard -- it was not adopted by a standards body or codified in law -- but no one called it that; they simply said "Windows".

The previous tech product to carry the label "de facto standard" was the predecessor to PC-DOS: a small operating system known as CP/M. In the late 1970s and early 1980s, prior to the introduction of the IBM PC and PC-DOS, CP/M was the most popular operating system. It did not have the near-universal acceptance of Windows; operating systems like Radio Shack's TRS-DOS and Apple's DOS were major contenders, and there were a bunch of minor competitors. CP/M had a majority of the market but not an overwhelming majority, and people called CP/M "the de facto standard". *

The "de facto" label was, for CP/M, no guarantee of success. With the introduction of the IBM PC, CP/M was quickly replaced by PC-DOS. It became The Way Everyone Uses Computers. No one used the phrase "de facto standard". They simply called it "DOS", and there was no discussion or consideration of alternatives (except for a few eccentric Macintosh users.)

Later, PC-DOS was replaced by Windows. With release 3.1, Windows became The Way Everyone Uses Computers. No one used the term "de facto standard" for Windows, either. (The same group of eccentric Macintosh users were present, and folks mostly ignored them.)

This past year has seen alternate operating systems rise to challenge the dominance of Windows. Mac OSX has made inroads for desktops and laptops. Linux has taken some of the server market. For phones, iOS and Android far surpass Windows.


Now, people are calling Windows the de facto standard. I think this is a bad thing for Windows. It is an admission of competition, an acknowledgement of fallibility. The presence of the term means that people consider alternate operating systems a viable threat to Windows. The pervasive group-think has shifted from "Windows as only operating system" to "Windows is our operating system and we want to keep it that way". Windows is no longer The Way Everyone Uses Computers; now it is The Way We Use Computers.

Perhaps I am reading too much into the phrase "de facto standard". Perhaps the memory of IBM and Microsoft taking away our cherished microcomputer independence still stings. Perhaps nothing has changed in the mindset of programmers, consumers, and businesses.

Or perhaps people are ready to move to a new computing platform.


* The popularity of different operating systems in the pre-PC age is difficult to measure, and arguments can be made that specific operating systems were the most popular. Operating systems were sometimes sold separately and sometimes bundled with hardware, and the fans of Commodore C64 computers have a good case for their Microsoft BASIC as the most popular operating system. I have seen the term "de facto" applied only to CP/M and not to any competitors.