Tuesday, May 16, 2017

Law comes to computing's Wild West

I see Windows 10 S as the future of Windows. The model of "software only through a store" works for phones and tablets, provides better security, and reduces administrative work. It is "good enough" for corporate users and consumers, and those two groups drive the market. ("Good enough" if the right applications are available in the store, that is.)

But.

The introduction of Windows 10 S is a step in the closing of the frontier we fondly think of as "personal computing".

This "closing of the frontier" has been happening for some time.

The IBM PC was open to tinkerers, in both hardware and, to some extent, software. On the hardware side, the IBM PC was designed for adapter cards, and designed to allow individuals to open the case and insert them. IBM released technical specifications which allowed other manufacturers to create their own cards. It was a smart move by IBM, and helped ensure the success of the PC.

On the software side, there were three operating systems available for the IBM PC: DOS, CP/M-86, and UCSD p-System. These were less restrictive than today's operating systems, with no notion of "user" or "administrator", no notion of "user privileges" or "user account". The operating system (such as it was) managed files on disk and loaded programs into memory when requested.

It was a time akin to the "wild west" with no controls on users. Any user could attach any device or install any program. (Getting everything to work was not always easy, and not always possible, but users could try.)

How has the PC realm become closed?

First, let me say that it is not totally closed. Users still have a great deal of freedom, especially on PCs they purchase for themselves (as opposed to corporate-issued PCs).

But the freedom to do anything meant that users could break things, easily, and lose data and disable programs. It also meant that ill-meaning individuals could write virus programs and cause problems. Over time, we (as an industry and group of users) decided that restrictions were necessary.

One of the first things corporate support groups did, when preparing a new PC, was to remove the 'FORMAT' program. (Or rename it.) It was considered too "dangerous" for a non-technical user.

The next set of restrictions came with Windows NT. It provided the notion of 'user accounts' and logins and passwords -- and enforced them. Windows NT also provided the notion of 'user privileges' which meant that some users could adjust settings for the operating system and others could not. Some users could install software, and others could not. Some users could... you get the idea.

Restrictions have not been limited to software.

UEFI replaced the BIOS, and with Secure Boot it restricts what can be loaded at startup -- limits that the old, freely flashable BIOS never imposed.

Smaller computers (laptops and tablets) are generally not openable. The IBM PC provided access to memory, adapter cards, DIP switches (remember those?), and the power supply. Today, most laptops allow access to memory chips... and little else. (DIP switches have disappeared from PCs entirely, and no one misses them.)

Which brings us to Windows 10 S.

Windows 10 S is a move to close the environment a little more. It makes a PC more like a phone, with an official "store" where one must buy software. You cannot install just any software. You cannot write your own software and install it.

The trend has been one of a gradual increase in "law" in our wild west. As in history, the introduction of these "laws" has meant the curtailment of individuals' freedoms. You cannot re-format your phone, at least not accidentally, and not to a blank disk. (Yes, you can reset your phone, which isn't quite the same thing.)

Another way to look at the situation is a change in the technology. We have shifted from the original PCs that required hardware and software configuration to meet the needs of the user (an individual or a larger entity). Instead of the early (incomplete) computers, we have well-defined and fully-functional computers that provide limited configuration capabilities. This is accepted because the changes that we want to make are within the "locked down" configuration of the PC. The vast majority of users don't need to set parameters for the COM port, or add memory, or install new versions of Lotus 1-2-3. In corporate settings, users run the assigned software and choose a photo for the desktop background; at home we install Microsoft Office and let it run as it comes "out of the box".

The only folks who want to make changes are either corporate sysadmins or individual tinkerers. And there are very few tinkerers, compared to the other users.

For the tinkerers and organizations that need "plain old Windows", it is still available. Windows 10-without-S works as it has before. You can install anything. You can adjust anything. Provided you have the privileges to do so.

I see Windows 10 S as an experiment, testing the acceptance of such a change in the market. I expect a lot of noise from protesters, but the interesting aspect will be behavior. Will the price of Windows 10 S affect acceptance? Possibly. Windows 10 S is not sold separately -- only preloaded onto computers. So look for the purchasing behavior of low-cost Windows 10 S devices.

In the long term, I expect Windows 10 S or a derivative to become the popular version of Windows. Corporations and governments will install it for employees, and keep the non-S version of Windows for those applications that cannot run under Windows 10 S. Those instances of Windows (the non-S instances) will most likely be run on virtual machines in data centers, not on individuals' desks.

But those instances of "non-S Windows" will become few, and eventually fade into history, along with PC-DOS and Windows 95. And while a few die-hard enthusiasts will keep them running, the world will switch to a more protected, a more secure, and a less wild-west version of Windows.

Monday, May 8, 2017

Eventual Consistency

The NoSQL database technology has introduced a new term: eventual consistency.

Unlike traditional relational databases, which promise atomicity, consistency, isolation, and durability (the ACID guarantees), NoSQL databases promise that updates will be consistent at some point in the (near) future. Just not right now.

For some folks, this is bad. "Eventual" consistency is not as good as "right now" consistency. Worse, it means that for some amount of time, the system is in an inconsistent state, and inconsistencies make managers nervous.

But we've had systems with internal inconsistencies for some time. They exist today.

One example is from a large payroll-processing company. They have been in business for decades and have a good reputation. Surely they wouldn't risk their (lucrative) business on something like inconsistency? Yet they do.

Their system consists of two subsystems: a large, legacy mainframe application and a web front-end. The mainframe system processes transactions, which includes sending information to the ACH network. (The ACH network feeds information to individual banks, which is how your paycheck is handled with direct deposit.)

Their web system interfaces to the processing system. It allows employees to sign in and view their paychecks, present and past. It is a separate system, mostly due to the differences in technologies.

Both systems run on schedules, with certain jobs running every night and some running during the day.

Inconsistencies arise when the payroll job runs on Friday. The payroll-processing system runs and sends money to the ACH network, but the web system doesn't get the update until Monday morning. Money appears in the employee's bank account, but the web system knows nothing about the transaction. That's the inconsistency, at least over the weekend and until Monday morning. Once the web system is updated, both systems are "in sync" and consistent.

This example shows us some things about inconsistencies.
  • Inconsistencies occur between systems. Each subsystem is consistent to itself.
  • Inconsistencies are the result of design choices. Our example payroll system could be modified to update the web subsystem every time a payroll job runs. The system designers chose a different solution, and they may have had good reasons.
  • Inconsistencies exist only when one is in a position to view them. We see the inconsistency in the payroll-processing system because we are outside of it, and we get data from both the core processing system and the web subsystem.
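The weekend gap in the payroll example can be sketched in a few lines of Python. The names here are hypothetical; the point is only that writes land in the core store immediately, while a separate store catches up when a scheduled replication job runs:

```python
# Sketch of eventual consistency between two subsystems (hypothetical names).
# The "processing" store is updated on every transaction; the "web" store
# catches up only when the scheduled replication job runs -- the Monday batch.

class EventuallyConsistentPair:
    def __init__(self):
        self.processing = {}   # core system: updated immediately
        self.web = {}          # front end: updated only by the batch job
        self.pending = []      # transactions not yet replicated

    def post_payroll(self, employee, amount):
        self.processing[employee] = amount
        self.pending.append(employee)      # web store is now stale

    def run_replication_job(self):
        for employee in self.pending:
            self.web[employee] = self.processing[employee]
        self.pending.clear()               # stores are consistent again

pair = EventuallyConsistentPair()
pair.post_payroll("alice", 1200)
print(pair.web.get("alice"))       # None -- inconsistent over the "weekend"
pair.run_replication_job()
print(pair.web.get("alice"))       # 1200 -- eventually consistent
```

Each store is internally consistent at every moment; the inconsistency exists only for an observer who reads both between the write and the replication job.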

Eventual consistency is also a design choice. It too exists between subsystems (or between instances of a subsystem in a cloud system).

Eventual consistency is not necessarily a bad thing. It's not necessarily a good thing. It is an aspect of a system, a design choice between trade-offs. And we've had it for quite some time.

Monday, May 1, 2017

That old clunky system -- the smart phone

Mainframes probably have first claim on the title of "that old large hard-to-use system".

Minicomputers were smaller, easier to use, less expensive, and less fussy. Instead of an entire room, they could fit in the corner of an office. Instead of special power lines, they could use standard AC power.

Of course, it was the minicomputer users who thought that mainframes were old, big, and clunky. Why would anyone want that old, large, clunky thing when they could have a new, small, cool minicomputer?

We saw the same effect with microcomputers. PCs were smaller, easier to use, less expensive, and less fussy than minicomputers.

And of course, it was the PC users who thought that minicomputers (and mainframes) were old, big, and clunky. Why would anyone want that old, large, clunky thing when they could have a new, small, cool PC?

Here's the pattern: A technology gets established and adopted by a large number of people. The people who run the hardware devote time and energy to learning how to operate it. They read the manuals (or web pages), they try things, they talk with other administrators. They become experts, or at least comfortable with it.

The second phase of the pattern is this: A new technology comes along, one that does similar (although often not identical) work as the previous technology. Many times, the new technology does a few old things and lots of new things. Minicomputers could handle data-oriented applications like accounting, but were better at data input and reporting. PCs could handle input and reporting, but were really good at word processing and spreadsheets.

The people who adopt the later technology look back, often in disdain, at the older technology that doesn't do all of the cool new things. (And too often, the new-tech folks look down on the old-tech folks.)

Let's move forward in time. From mainframes to minicomputers, from minicomputers to desktop PCs, from desktop PCs to laptop PCs, from classic laptop PCs to MacBook Air-like laptops. Each transition has the opportunity to look back and ask "why would anyone want that?", with "that" being the previous cool new thing.

Of course, such effects are not limited to computers. There were similar feelings with the automobile, typewriters (and then electric typewriters), slide rules and pocket calculators, and lots more.

We can imagine that one day our current tech will be considered "that old thing". Not just ultralight laptops, but smartphones and tablets too. But what will the cool new thing be?

I'm not sure.

I suspect that it won't be a watch. We've had smartwatches for a while now, and they remain a novelty.

Ditto for smart glasses and virtual reality displays.

Augmented reality displays, such as Microsoft's HoloLens, show promise, but also remain a diversion.

What the next big thing needs is a killer app. For desktop PCs, the killer app was the spreadsheet. For smartphones, the killer app was GPS and maps (and possibly Facebook and games). It wasn't the PC or the phone that people wanted, it was the spreadsheet and the ability to drive without a paper map.

Maybe we've been going about this search for the next big thing in the wrong way. Instead of searching for the device, we should search for the killer app. Find the popular use first, and then you will find the device.

Sunday, April 23, 2017

Two successes from Microsoft

One success is the Surface tablet. Recent articles state that Microsoft is losing, because other manufacturers are producing devices that surpass Microsoft's Surface tablet.

I have a different view. I consider the Surface tablet a success. It's a success because it keeps Microsoft (and Windows) in the market. Microsoft introduced the Surface as a response to Apple's iPad tablet. Without the Surface, Microsoft would have offerings for desktop PCs, laptop PCs, and phones, but nothing for tablets. The Surface keeps Microsoft in the market, and keeps customers loyal to Microsoft.

The second success is the CloudBook. Last week saw a leaked document that outlined specifications for a device called a "CloudBook". This appears to be a response to Google's Chromebook devices, which are lightweight laptops that run Chrome OS and the Chrome browser.

Calling the CloudBook a success is a bit premature. The official CloudBook devices have yet to be released, so we don't know how they will perform and how customers will receive them. (Acer has a laptop that they call a "CloudBook", which is probably a close approximation of the future CloudBooks.)

Yet I believe that CloudBooks will be a success for Microsoft. They keep Microsoft in the market. I think that many businesses will use CloudBooks. They are less expensive than typical laptops, they are easier to administer, and, being browser-focused, their apps store data in the cloud, not locally. Storing data in the cloud is more secure and eliminates the loss of data due to the loss of a laptop.

Tuesday, April 18, 2017

Microsoft and programming languages

Should Microsoft develop programming languages? Or interpreters and compilers? Should they continue to develop C# (and F#)? Should they continue to develop the C# compiler?

A world in which Microsoft does not develop programming languages would indeed be different. Microsoft's history is full of programming languages and implementations. They started with an interpreter for BASIC. They quickly followed that with a macro assembler, a FORTRAN compiler, a COBOL compiler, and even a BASIC compiler (to compete with Digital Research's CBASIC compiler). When the C programming language became popular, Microsoft acquired a C compiler and, after much rework over the years, expanded it into the Visual Studio we know today. (Some of Microsoft's offerings were products purchased from other sources, but once in the Microsoft fold they received a lot of changes.)

The compilers, interpreters, editors, and debuggers have all served Microsoft well. But Microsoft treated them as tools of its empire, supporting them and enhancing them when such support and enhancements grew Microsoft, and discarding them when they did not aid Microsoft. Examples of discontinued languages include their Pascal compiler, Visual Basic, and the short-lived Visual J#.

Today, Microsoft supports C#, F#, and VB.NET.

I've been thinking about these languages. Microsoft created C# during their "empire" phase, when Microsoft tried to provide everything for everyone. They had to compete with Java, and C# was their entry. VB.NET was necessary to offer a path from Visual Basic into .NET. F# is the most recent addition, an expedition into functional programming.

All of these languages provided a path that led into (but not out of) the Microsoft world. To use Visual Basic, you had to run Windows. To program in C#, you had to run Windows.

Today, Microsoft is agnostic about operating systems and languages. Azure supports Windows and Linux. Visual Studio works with PHP, JavaScript, Python, and Ruby, among others. Microsoft has opened the C# compiler and .NET framework to non-Microsoft platforms.

Microsoft is no longer using programming languages as a means to drive people to Windows.

That is a significant change. A consequence of that change is a reduction in the importance of programming languages. It may make sense for Microsoft to let other people develop programming languages. Perhaps Microsoft's best strategy is to provide a superior environment for programming and the development of languages.

Microsoft is not the first company to make this transition. IBM did the same with its languages. FORTRAN, PL/I, APL, SQL, and RPG were all invented by IBM and proprietary, usable only on IBM equipment. Today, IBM provides services and doesn't need private programming languages to sell hardware.

Microsoft cannot simply drop C#. What would make sense would be a gradual, planned, transfer to another organization. Look for actions that continue in the direction of open source for C# and .NET.

Thursday, April 13, 2017

Slack, efficiency, and choice

Slack is the opposite of efficiency. Slack is excess capacity, unused space, extra money. Slack is waste and therefore considered bad.

Yet things are not that simple. Yes, slack is excess capacity and unused space. And yes, slack can be viewed as waste. But slack is not entirely bad. Slack has value.

Consider the recent (infamous) overbooking event on United Airlines. One passenger was forcibly removed from a flight to make room for a United crew member (a crew member traveling to work a different flight, not one working the flight of the incident). United had a fully-booked flight from Chicago to Louisville and needed to move a flight crew from Chicago to Louisville. They asked for volunteers to take other flights; three people took them up on the offer, leaving one seat to be freed by "involuntary re-accommodation".

I won't go into the legal and moral issues of this incident. Instead, I will look at slack.

- The flight had no slack passenger capacity. It was fully booked. (That's usually a good thing for the airline, as it means maximum revenue.)

- The crew had to move from Chicago to Louisville, to start their next assigned flight. It had to be that crew; there was no slack (no extra crew) in Louisville. I assume that there was no other crew in the region that could fill in for the assigned crew. (Keep in mind that crews are regulated as to how much time they can spend working, by union contracts and federal law. This limits the ability of an airline to swap crews like sacks of flour.)

In a perfectly predictable world, we can design, build, and operate systems with no slack. But the world is not perfectly predictable. The world surprises us, and slack helps us cope with those surprises. Extra processing capacity is useful when demand spikes. Extra money is useful for many events, from car crashes to broken water heaters to layoffs.

Slack has value. It buffers us from harsh consequences.

United ran their system with little slack, was subjected to demands greater than expected, and suffered consequences. But this is not really about United or airlines or booking systems. This is about project management, system design, budgeting, and just about any other human activity.

I'm not recommending that you build slack into your systems. I'm not insisting that airlines always leave a few empty seats on each flight.

I'm recommending that you consider slack, and that you make a conscious choice about it. Slack has a cost. It also has benefits. Which has the greater value for you depends on your situation. But don't strive to eliminate slack without thought.

Examine. Evaluate. Think. And then decide.

Sunday, April 9, 2017

Looking inwards and outwards

It's easy to categorize languages. Compiled versus interpreted. Static typing versus dynamic. Strongly typed versus weakly typed. Strict syntax versus liberal. Procedural. Object-oriented. Functional. Languages we like; languages we dislike.

One categorization I have not seen is by the mechanism for assuring quality. Its obscurity is not a surprise -- the mechanisms are more a function of the community than of the language itself.

Quality assurance tends to fall into two categories: syntax checking and unit tests. Both aim to verify that programs perform as expected. The former relies on features of the language, the latter relies on tests that are external to the language (or at least external to the compiler or interpreter).

Interestingly, there is a correlation between execution type (compiled or interpreted) and assurance type (language features or tests). Compiled languages (C, C++, C#) tend to rely on features of the language to ensure correctness. Interpreted languages (Perl, Python, Ruby) tend to rely on external tests.

That interpreted languages rely on external tests is not a surprise. The languages are designed for flexibility and do not have the concepts needed to verify the correctness of code. Ruby especially supports the ability to modify objects and classes at runtime, which means that static code analysis must be either extremely limited or extremely sophisticated.
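Python shares this property with Ruby: classes can be rewritten while the program runs, so a static analyzer cannot know from the source text alone what a given call will do. A minimal illustration (names invented here):

```python
# Runtime modification ("monkey patching"): what Greeter.greet does cannot
# be determined from the class definition alone, which is why static
# analysis of such languages is either very limited or very sophisticated.

class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
print(g.greet())                  # "hello"

# Replace the method at runtime -- legal in Python, as in Ruby.
Greeter.greet = lambda self: "goodbye"
print(g.greet())                  # "goodbye" -- same object, new behavior
```

External tests, which exercise the program as it actually runs, are the natural way to verify code like this.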

That compiled languages (and the languages I mentioned are strongly and statically typed) rely on features of the language is also not a surprise. IDEs such as Visual Studio can leverage the typing of the language and analyze the code relatively easily.

We could use tests to verify the behavior of compiled code. Some projects do. But many do not, and I surmise from the behavior of most projects that it is easier to analyze the code than it is to build and run tests. That matches my experience. On some projects, I have refactored code (renaming classes or member variables) and checked in changes after recompiling and without running tests. In these cases, the syntax checking of the compiler is sufficient to ensure quality.

But I think that tests will win out in the end. My reasoning is: language features such as strong typing and static analysis are inward-looking. They verify that the code meets certain syntactic requirements.

Tests, when done right, look not at the code but at the requirements. Good tests are built on requirements, not code syntax. As such, tests are more aligned with the user's needs, and not the techniques used to build the code. Tests are more "in touch" with the actual needs of the system.
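As a toy illustration (the function and the requirement are invented here), a requirement-driven test states what the user needs, and says nothing about how the code is typed or structured:

```python
# Hypothetical requirement: an invoice total applies a 10% discount on
# orders of $100 or more. The test encodes the requirement, not the code.

def invoice_total(subtotal):
    return subtotal * 0.9 if subtotal >= 100 else subtotal

def test_discount_applies_at_threshold():
    assert invoice_total(100) == 90.0   # requirement: discount at $100
    assert invoice_total(99) == 99      # requirement: no discount below

test_discount_applies_at_threshold()
print("requirement tests passed")
```

The compiler (or type checker) could confirm that `invoice_total` takes a number and returns a number; only the test can confirm that the discount kicks in at the right threshold.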

The syntax requirements of languages are inward looking. They verify that the code conforms to a set of rules. (This isn't bad, and at times I want C and C++ compilers to require indentation much like Python does.) But conforming to rules, while nice (and possibly necessary) is not sufficient.

Quality software requires looking inward and outward. Good code is easy to read (and easy to change). Good code also performs the necessary tasks, and it is tests -- and only tests -- that can verify that.