Sunday, May 21, 2017

Parallel processing on the horizon

Parallel processing has been with us for years. Or at least attempts at parallel processing.

Parallel processing has failed due to the numerous challenges it faces. It requires special (usually expensive) hardware. Parallel processing on conventional CPUs is simply processing items serially, because conventional CPUs can process only serially. (Multi-core processors address this problem to a small degree.) Parallel processing requires support in compilers and run-time libraries, and often new data structures. Most importantly, parallel processing requires tasks that are partitionable. The classic example of "nine women producing a baby in one month" highlights a task that is not partitionable, not divisible into smaller tasks.
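A partitionable task is one that splits into independent pieces whose results can be combined afterward. Here is a minimal sketch in Python (the chunk count and data are arbitrary choices for illustration): summing a large list is partitionable, because each chunk can be summed with no knowledge of the others.

```python
# A minimal sketch of a partitionable task: summing a large list.
# Each chunk is independent, so extra processors genuinely help --
# unlike the "nine women" example, where the steps cannot be split.
from multiprocessing import Pool

def chunk_sum(chunk):
    """Sum one independent slice of the data."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Partition the data into four independent chunks.
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        partials = pool.map(chunk_sum, chunks)
    # Combining the partial results is cheap, serial work.
    total = sum(partials)
    print(total == sum(data))  # True
```

The expensive work happens in the chunks; only the final combination is serial, which is what makes the task a good fit for multiple processors.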

Cloud computing offers a new twist on parallel processing.

First, it offers multiple processors. Not just multiple cores, but true multiple processors -- as many as you would like.

Second, it offers these processors cheaply.

Cloud computing is a platform that can handle parallel processing -- in some areas. It has its problems.

First, creating new cloud processing systems is expensive in terms of time. A virtual machine must be instantiated, started, and given software to handle the task. Then, data must be shipped to the server. After processing, the result must be sent back, or forward to another processor. The time for all of these tasks is significant.

Second, we still have the problems of partitioning tasks and representing the data and operations in a program.

There is one area of development that I believe is ready to leverage parallel processing. That area is testing.

The typical testing effort for a project can have multiple levels: unit tests, component tests, system tests, end-to-end tests, you name it. But each level of testing follows the same general pattern:

  • Get a collection of tests, complete with input data and expected results
  • For each test:
    1) Set up a test environment (program and data)
    2) Run the test
    3) Compare output to expected output
    4) Record the results
  • Summarize the results and report
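The steps above can be sketched in a few lines of Python. The tests and the run_test() helper here are hypothetical stand-ins; a real system would dispatch each test to its own cloud instance rather than a local thread pool, but the shape of the work is the same.

```python
# Each call to run_test() is independent, so the tests can all run at once.
from concurrent.futures import ThreadPoolExecutor

def run_test(test):
    """Steps 1 through 4, for a single test."""
    name, func, args, expected = test
    actual = func(*args)            # 1) set up and 2) run the test
    passed = (actual == expected)   # 3) compare output to expected output
    return (name, passed)           # 4) record the result

tests = [
    ("addition",  lambda a, b: a + b, (2, 3), 5),
    ("uppercase", str.upper,          ("hi",), "HI"),
]

# Run every test concurrently, then summarize and report.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_test, tests))

failed = [name for name, ok in results if not ok]
print("all passed" if not failed else f"failed: {failed}")
```

Nothing in run_test() depends on any other test, which is exactly the partitionability that parallel processing demands.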

In this process, the sequence of steps I've labelled 1 through 4 is repeated for each test. Traditional testing puts all of these tests on one computer, performing each test in sequence. Parallel testing can put each test on its own cloud-based processor, effectively running all tests at once.

Testing has a series of well-defined and partitionable tasks. Modern testing methods use automated tests, so a test can run locally or remotely (as long as it has access to everything it needs). Testing can be a drain on resources and time, requiring lots of requests to servers and lots of time to complete all tests.

Testing in the cloud, and in parallel, addresses these issues. It reduces the time for tests and improves the feedback to developers. Cloud processing is cheap -- at least cheaper than paying developers to wait for tests to run.

I think one of the next "process improvements" for software development will be the use of cloud processing to run tests. Look for new services and changes to testing frameworks to support this new mode of testing.

Thursday, May 18, 2017

An echo of Wordstar

In 1979, Wordstar was the popular word processor of the time. It boasted "what you see is what you get" (WYSIWYG) because it would reformat text on the screen as you typed.

Wordstar had a problem on some computers. It would, under the right conditions, miss characters as you were typing. The problem was documented in an issue of "Personal Computing", comparing Wordstar to another program called "Electric Pencil". The cause was the automatic reformatting of text. (The reformatting took time, and that's when characters were lost. Wordstar was busy redrawing text and not paying attention to the keyboard.)

At the time, computers were primitive. They ran CP/M on an 8080 or Z-80 processor with at most 64K RAM. Some systems used interrupts to detect keystrokes but others simply polled the keyboard from time to time, and it was easy to miss a typed character.

So here we are in 2017. Modern PCs are better than the early microcomputers, or so we like to think. We have more memory. We have larger disks for storage. We have faster processors. They cost less. Better in every way.

So we like to think.

From a blog in 2017:
Of course, I always need to switch over to Emacs to get real work done.  IntelliJ doesn't like it when you type fast.  Its completions can't keep up and you wind up with half-identifiers everywhere. 
I'm flabbergasted.

(I must also note that I am not a user of IntelliJ, so I have not seen this behavior myself. I trust the report from the blogger.)

But getting back to being flabbergasted...

We have, in 2017, an application that cannot keep up with human typing?

We may have made less progress than we thought.

Tuesday, May 16, 2017

Law comes to computing's Wild West

I see Windows 10 S as the future of Windows. The model of "software only through a store" works for phones and tablets, provides better security, and reduces administrative work. It is "good enough" for corporate users and consumers, and those two groups drive the market. ("Good enough" if the right applications are available in the store, that is.)

But.

The introduction of Windows 10 S is a step in the closing of the frontier we fondly think of as "personal computing".

This "closing of the frontier" has been happening for some time.

The IBM PC was open to tinkerers, in both hardware and, to some extent, software. On the hardware side, the IBM PC was designed for adapter cards, and designed to allow individuals to open the case and insert them. IBM released technical specifications which allowed other manufacturers to create their own cards. It was a smart move by IBM, and helped ensure the success of the PC.

On the software side, there were three operating systems available for the IBM PC: DOS, CP/M-86, and UCSD p-System. These were less restrictive than today's operating systems, with no notion of "user" or "administrator", no notion of "user privileges" or "user account". The operating system (such as it was) managed files on disk and loaded programs into memory when requested.

It was a time akin to the "wild west" with no controls on users. Any user could attach any device or install any program. (Getting everything to work was not always easy, and not always possible, but users could try.)

How has the PC realm become closed?

First, let me say that it is not totally closed. Users still have a great deal of freedom, especially on PCs they purchase for themselves (as opposed to corporate-issued PCs).

But the freedom to do anything meant that users could break things, easily, and lose data and disable programs. It also meant that ill-meaning individuals could write virus programs and cause problems. Over time, we (as an industry and group of users) decided that restrictions were necessary.

One of the first things corporate support groups did, when preparing a new PC, was to remove the 'FORMAT' program. (Or rename it.) It was considered too "dangerous" for a non-technical user.

The next set of restrictions came with Windows NT. It provided the notion of 'user accounts' and logins and passwords -- and enforced them. Windows NT also provided the notion of 'user privileges' which meant that some users could adjust settings for the operating system and others could not. Some users could install software, and others could not. Some users could... you get the idea.

Restrictions have not been limited to software.

UEFI replaced the BIOS, and was not "flashable" as many BIOSes had been.

Smaller computers (laptops and tablets) are generally not openable. The IBM PC provided access to memory, adapter cards, DIP switches (remember those?), and the power supply. Today, most laptops allow access to memory chips... and little else. (DIP switches have disappeared from PCs entirely, and no one misses them.)

Which brings us to Windows 10 S.

Windows 10 S is a move to close the environment a little more. It makes a PC more like a phone, with an official "store" where one must buy software. You cannot install just any software. You cannot write your own software and install it.

The trend has been one of a gradual increase in "law" in our wild west. As in history, the introduction of these "laws" has meant the curtailment of individuals' freedoms. You cannot re-format your phone, at least not accidentally, and not to a blank disk. (Yes, you can reset your phone, which isn't quite the same thing.)

Another way to look at the situation is as a change in the technology. We have shifted away from the original PCs, which required hardware and software configuration to meet the needs of the user (an individual or a larger entity). Instead of those early (incomplete) computers, we have well-defined and fully-functional computers that provide limited configuration capabilities. This is accepted because the changes we want to make fall within the "locked down" configuration of the PC. The vast majority of users don't need to set parameters for the COM port, or add memory, or install new versions of Lotus 1-2-3. In corporate settings, users run the assigned software and choose a photo for their desktop background; at home we install Microsoft Office and let it run as it comes "out of the box".

The only folks who want to make changes are either corporate sysadmins or individual tinkerers. And there are very few tinkerers, compared to the other users.

For the tinkerers and organizations that need "plain old Windows", it is still available. Windows 10-without-S works as it has before. You can install anything. You can adjust anything. Provided you have the privileges to do so.

I see Windows 10 S as an experiment, testing the acceptance of such a change in the market. I expect a lot of noise from protesters, but the interesting aspect will be behavior. Will the price of Windows 10 S affect acceptance? Possibly. Windows 10 S is not sold separately -- only preloaded onto computers. So look for the purchasing behavior of low-cost Windows 10 S devices.

In the long term, I expect Windows 10 S or a derivative to become the popular version of Windows. Corporations and governments will install it for employees, and keep the non-S version of Windows for those applications that cannot run under Windows 10 S. Those instances of Windows (the non-S instances) will most likely be run on virtual machines in data centers, not on individuals' desks.

But those instances of "non-S Windows" will become few, and eventually fade into history, along with PC-DOS and Windows 95. And while a few die-hard enthusiasts will keep them running, the world will switch to a more protected, a more secure, and a less wild-west version of Windows.

Monday, May 8, 2017

Eventual Consistency

NoSQL database technology has introduced a new term: eventual consistency.

Unlike traditional relational databases, which promise atomicity, consistency, isolation, and durability, NoSQL databases promise that updates will be consistent at some point in the (near) future. Just not right now.

For some folks, this is bad. "Eventual" consistency is not as good as "right now" consistency. Worse, it means that for some amount of time, the system is in an inconsistent state, and inconsistencies make managers nervous.

But we've had systems with internal inconsistencies for some time. They exist today.

One example is from a large payroll-processing company.  They have been in business for decades and have a good reputation. Surely they wouldn't risk their (lucrative) business on something like inconsistency? Yet they do.

Their system consists of two subsystems: a large, legacy mainframe application and a web front-end. The mainframe system processes transactions, which includes sending information to the ACH network. (The ACH network feeds information to individual banks, which is how your paycheck is handled with direct deposit.)

Their web system interfaces to the processing system. It allows employees to sign in and view their paychecks, present and past. It is a separate system, mostly due to the differences in technologies.

Both systems run on schedules, with certain jobs running every night and some running during the day.

Inconsistencies arise when the payroll job runs on Friday. The payroll-processing system runs and sends money to the ACH network, but the web system doesn't get the update until Monday morning. Money appears in the employee's bank account, but the web system knows nothing about the transaction. That's the inconsistency, at least over the weekend and until Monday morning. Once the web system is updated, both systems are "in sync" and consistent.

This example shows us some things about inconsistencies.
  • Inconsistencies occur between systems. Each subsystem is consistent to itself.
  • Inconsistencies are designed. Our example payroll system could be modified to update the web subsystem every time a payroll job is run. The system designers chose to use a different solution, and they may have had good reasons.
  • Inconsistencies exist only when one is in a position to view them. We see the inconsistency in the payroll-processing system because we are outside of it, and we get data from both the core processing system and the web subsystem.
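The payroll example can be reduced to a toy sketch. All the names here are illustrative: the core system commits a transaction immediately, while the web view only learns about it when a batch sync job (the "Monday morning" update) runs.

```python
# A toy model of two subsystems that are each self-consistent,
# but inconsistent with each other until a sync job runs.
processing_ledger = []   # the core processing system's own records
web_view = []            # what the employee-facing web system shows
pending_sync = []        # updates queued for the next batch job

def process_payroll(txn):
    """The core system is immediately consistent with itself."""
    processing_ledger.append(txn)
    pending_sync.append(txn)   # the web view hears about it later

def run_sync_job():
    """The batch job that brings the two subsystems back in sync."""
    web_view.extend(pending_sync)
    pending_sync.clear()

process_payroll("pay J. Smith $1000")
print(web_view == processing_ledger)   # False: the "weekend" inconsistency
run_sync_job()
print(web_view == processing_ledger)   # True: eventually consistent
```

Each list is always internally correct; the inconsistency exists only for an observer who can see both at once, which is exactly the situation of the employee checking the web site over the weekend.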

Eventual consistency is also a design. It also exists between subsystems (or between instances of a subsystem in a cloud system).

Eventual consistency is not necessarily a bad thing. It's not necessarily a good thing. It is an aspect of a system, a design choice between trade-offs. And we've had it for quite some time.

Monday, May 1, 2017

That old clunky system -- the smart phone

Mainframes probably have first claim on the title of "that old large hard-to-use system".

Minicomputers were smaller, easier to use, less expensive, and less fussy. Instead of an entire room, they could fit in the corner of an office. Instead of special power lines, they could use standard AC power.

Of course, it was the minicomputer users who thought that mainframes were old, big, and clunky. Why would anyone want that old, large, clunky thing when they could have a new, small, cool minicomputer?

We saw the same effect with microcomputers. PCs were smaller, easier to use, less expensive, and less fussy than minicomputers.

And of course, it was the PC users who thought that minicomputers (and mainframes) were old, big, and clunky. Why would anyone want that old, large, clunky thing when they could have a new, small, cool PC?

Here's the pattern: A technology gets established and adopted by a large number of people. The people who run the hardware devote time and energy to learning how to operate it. They read the manuals (or web pages), they try things, they talk with other administrators. They become experts, or at least comfortable with it.

The second phase of the pattern is this: A new technology comes along, one that does similar (although often not identical) work as the previous technology. Many times, the new technology does a few old things and lots of new things. Minicomputers could handle data-oriented applications like accounting, but were better at data input and reporting. PCs could handle input and reporting, but were really good at word processing and spreadsheets.

The people who adopt the later technology look back, often in disdain, at the older technology that doesn't do all of the cool new things. (And too often, the new-tech folks look down on the old-tech folks.)

Let's move forward in time. From mainframes to minicomputers, from minicomputers to desktop PCs, from desktop PCs to laptop PCs, from classic laptop PCs to MacBook Air-like laptops. Each transition has the opportunity to look back and ask "why would anyone want that?", with "that" being the previous cool new thing.

Of course, such effects are not limited to computers. There were similar feelings with the automobile, typewriters (and then electric typewriters), slide rules and pocket calculators, and lots more.

We can imagine that one day our current tech will be considered "that old thing". Not just ultralight laptops, but smartphones and tablets too. But what will the cool new thing be?

I'm not sure.

I suspect that it won't be a watch. We've had smartwatches for a while now, and they remain a novelty.

Ditto for smart glasses and virtual reality displays.

Augmented reality displays, such as Microsoft's HoloLens, show promise, but also remain a diversion.

What the next big thing needs is a killer app. For desktop PCs, the killer app was the spreadsheet. For smartphones, the killer app was GPS and maps (and possibly Facebook and games). It wasn't the PC or the phone that people wanted, it was the spreadsheet and the ability to drive without a paper map.

Maybe we've been going about this search for the next big thing in the wrong way. Instead of searching for the device, we should search for the killer app. Find the popular use first, and then you will find the device.

Sunday, April 23, 2017

Two successes from Microsoft

One success is the Surface tablet. Recent articles state that Microsoft is losing, because other manufacturers are producing devices that surpass Microsoft's Surface tablet.

I have a different view. I consider the Surface tablet a success. It's a success because it keeps Microsoft (and Windows) in the market. Microsoft introduced the Surface as a response to Apple's iPad tablet. Without the Surface, Microsoft would have offerings for desktop PCs, laptop PCs, and phones -- but nothing for tablets. The Surface keeps Microsoft in the market, and keeps customers loyal to Microsoft.

The second success is the CloudBook. Last week saw a leaked document that outlined specifications for a device called a "CloudBook". This appears to be a response to Google's ChromeBook devices, which are lightweight laptops that run ChromeOS and the Chrome browser.

Calling the CloudBook a success is a bit premature. The official CloudBook devices have yet to be released, so we don't know how they will perform and how customers will receive them. (Acer has a laptop that they call a "CloudBook", which is probably a close approximation of the future CloudBooks.)

Yet I believe that CloudBooks will be a success for Microsoft. They keep Microsoft in the market. I think that many businesses will use CloudBooks. They are less expensive than typical laptops, they are easier to administer, and being browser-focused their apps store data in the cloud, not locally. Storing data in the cloud is more secure and eliminates the loss of data due to the loss of a laptop.

Tuesday, April 18, 2017

Microsoft and programming languages

Should Microsoft develop programming languages? Or interpreters and compilers? Should they continue to develop C# (and F#)? Should they continue to develop the C# compiler?

A world in which Microsoft does not develop programming languages would indeed be different. Microsoft's history is full of programming languages and implementations. They started with an interpreter for BASIC. They quickly followed that with a macro assembler, a FORTRAN compiler, a COBOL compiler, and even a BASIC compiler (to compete with Digital Research's CBASIC compiler). When the C programming language became popular, Microsoft acquired a C compiler and, after much rework over the years, expanded it into the Visual Studio we know today. (Some of Microsoft's offerings were products purchased from other sources, but once in the Microsoft fold they received a lot of changes.)

The compilers, interpreters, editors, and debuggers have all served Microsoft well. But Microsoft treated them as tools of its empire, supporting them and enhancing them when such support and enhancements grew Microsoft, and discarding them when they did not aid Microsoft. Examples of discontinued languages include their Pascal compiler, Visual Basic, and the short-lived Visual J#.

Today, Microsoft supports C#, F#, and VB.NET.

I've been thinking about these languages. Microsoft created C# during their "empire" phase, when Microsoft tried to provide everything for everyone. They had to compete with Java, and C# was their entry. VB.NET was necessary to offer a path from Visual Basic into .NET. F# is the most recent addition, an expedition into functional programming.

All of these languages provided a path that led into (but not out of) the Microsoft world. To use Visual Basic, you had to run Windows. To program in C#, you had to run Windows.

Today, Microsoft is agnostic about operating systems, and languages. Azure supports Windows and Linux. Visual Studio works with PHP, JavaScript, Python, and Ruby, among others. Microsoft has opened the C# compiler and .NET framework to non-Microsoft platforms.

Microsoft is no longer using programming languages as a means to drive people to Windows.

That is a significant change. A consequence of that change is a reduction in the importance of programming languages. It may make sense for Microsoft to let other people develop programming languages. Perhaps Microsoft's best strategy is to provide a superior environment for programming and the development of languages.

Microsoft is not the first company to make this transition. IBM did the same with its languages. FORTRAN, PL/I, APL, SQL, and RPG were all invented by IBM and proprietary, usable only on IBM equipment. Today, IBM provides services and doesn't need private programming languages to sell hardware.

Microsoft cannot simply drop C#. What would make sense would be a gradual, planned transfer to another organization. Look for actions that continue in the direction of open source for C# and .NET.