Sunday, June 11, 2017

Apple's Files App is an admission of imperfection

When Apple introduced the iPhone, they introduced not just a smartphone but a new approach to computing. The iPhone offered a new, simpler experience for the user. The iPhone (and iOS) did away with much of the administrative work of PCs. It eliminated the notion of user accounts and administrator accounts. Updates were automatic and painless. Apps knew how to get their data. The phone "just worked".

The need for a Files app is an admission that the iPad experience does not meet those expectations. It raises the hood and lets the user meddle with some of the innards of the device. One explanation for its existence is that apps cannot always find the files they need, and the Files app lets you (the user) find those files.

Does anyone see the irony in making the user do the work that the computer should do? Especially a computer from Apple?

To be fair, Android has had File Manager apps for years, so the Android experience does not meet those expectations either. Microsoft's Surface tablets, starting with the first one, have had Windows Explorer built in, so they are failing to provide the new, simpler experience too.

A curmudgeon might declare that the introduction of the Files App shows that even Apple cannot provide the desired user experience, and if Apple can't do it then no one can.

I'm not willing to go that far.

I will say that the original vision of a simple, easy-to-use, reliable computing device still holds. It may be that the major players have not delivered on that vision, but that doesn't mean the vision is unobtainable.

It may be that the iPhone (and Android) are steps in a larger process, one starting with the build-it-yourself microcomputers of the mid 1970s, passing through IBM PCs with DOS and later PC-compatibles with Windows, and currently arriving at iPhones and tablets. Perhaps we will see a new concept in personal computing, one that improves upon the iPhone experience. It may be as different from iOS and Android as those operating systems are from Windows and MacOS. It may be part of the "internet of things" and expand personal computing to household appliances.

I'm looking forward to it.

Monday, June 5, 2017

Better programming languages let us do more -- and less

We tend to think that better programming languages let us programmers do more. Which is true, but it is not the complete picture.

Better languages also let us do less. They remove capabilities. In doing so, they remove the possibility for errors.

PL/I was better than COBOL and FORTRAN because it let us write free-form source code. In COBOL and FORTRAN, the column in which code appeared was significant. Those restrictions came from the technology of the time (punch cards), but once in the languages they were difficult to remove.

BASIC was better than FORTRAN because it eliminated FORMAT specifications. FORMAT specifications were necessary to parse input data and format output data. They were precise, opaque, and easy to get wrong. BASIC, with no such specifications, removed the possibility of errors from them. BASIC also fixed the DO loops of FORTRAN and removed restrictions on subscript form. (In FORTRAN, a subscript could not be an arbitrary expression; it had to have the form A*B+C. Any component could be omitted, so A+C was allowed, as was A*B. But you could not use A+B+C or A/2.)

Pascal was better than BASIC because it limited the use of GOTO statements. In BASIC, you could use a GOTO to transfer control to any other part of the program, including in and out of loops or subroutines. It made for "spaghetti code" which was difficult to understand and debug. Pascal put an end to that, with a constrained form of GOTO.

Java eliminated the need for explicit 'delete' or 'free' operations on allocated memory. You cannot forget the 'delete' operation -- you can't write one at all! The internal garbage collector recycles memory. In Java, it is much harder to create memory leaks than in C or C++.
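Python, which comes up next, manages memory the same way. A minimal sketch (the Buffer class here is invented for illustration) shows an object being reclaimed with no explicit delete anywhere:

```python
import weakref

class Buffer:
    """Stands in for any heap-allocated object."""
    def __init__(self, size):
        self.data = bytearray(size)

buf = Buffer(1024)
probe = weakref.ref(buf)   # watch the object without keeping it alive

assert probe() is buf      # the object is still reachable
buf = None                 # drop the only strong reference -- no 'delete' needed
assert probe() is None     # CPython reclaims it at once via reference counting
```

(The immediate reclamation is a CPython detail; other implementations may collect the object later, but the point stands: the programmer never writes a 'free'.)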

Python forces us to consider indentation as part of the code. In C, C++, Java, and C#, you can write:

initialize();

if (some_condition)
    do_something();       /* runs only when the condition is true */
    do_another_thing();   /* always runs; the indentation is misleading */

complete_the_work();

But the code acts in a way you may not expect. Python's use of indentation to specify code organization makes the code clearer. The Python code:

initialize()

if some_condition:
    do_something()
    do_another_thing()

complete_the_work()

does what you expect.

New programming languages do provide new capabilities. (Often, they are refinements to constructs and concepts that were implemented roughly in earlier programming languages.) A new programming language is a combination of new things we can do and old things we no longer need to do.

When considering a new language (or reviewing the current language for a project), keep in mind not only the things that a new language lets you do, but also the things that it won't let you do.

The Demise of Apple

Future historians will look back at Apple, point to a specific moment, and say "Here, at this point, is when Apple started its decline. This event started Apple's fall." That point will be the construction of their new spaceship-inspired headquarters.

Why do I blame their new building? I don't, actually. I think others -- those future historians -- will. They will get the time correct, but point to the wrong event.

First things first. What do I have against Apple's shiny new headquarters?

It's round.

Apple's new building is large, elegant, expensive, and ... the wrong shape. It is a giant circle, or wheel, or doughnut, and it works poorly with human psychology and perception. The human mind works better with a grid than a circle.

Not that humans can't handle circular objects. We can, when they are small or distant. We have no problem with the moon being round, for example. We're okay with clocks and watches, and old-style speedometers in cars.

We're good when we can see the entire circle. Watches and clocks are smaller than us, so we can view the entire circle and process it. (Clocks in towers, such as London's "Big Ben" or the clock in the center of town, are also okay, since we view them from a distance and they appear small.)

The problems occur when we are inside the circle, navigating along its circumference. We're not good at keeping track of gradual changes in direction. (This is possibly why so many people get lost in the desert: they travel in a circle without realizing it.)

Apple's building looks nice, from above. I suspect the experience of working inside the building will be one of modest confusion and discomfort, possibly at such a minor level that people do not realize that something is wrong. But the discomfort will accumulate, and eventually people will rebel.

It's ironic that Apple, the company that designs and builds products with the emphasis on "easy to use", got the design of their building wrong.

So it may be that historians, looking at Apple's (future) history, blame the design of the new headquarters for Apple's (future) failures. They will (rightly) associate the low-level confusion and additional brain processing required for navigation of such a building as draining Apple's creativity and effectiveness.

I think that they (the historians) will be wrong.

The building is a problem, no doubt. But it won't cause Apple's demise. The true cause will be overlooked.

That true cause? It is Apple's fixation on computing devices.

Apple builds (and sells) computers. They are the sole company that has survived from the 1970s microcomputer age. (Radio Shack, Commodore, Cromemco, Sol, Northstar, and the others left the market decades ago.) In that age, microcomputers were stand-alone devices -- there was no internet, no ethernet, no communication aside from floppy disks and a few on-line bulletin board systems (BBS) that required acoustic coupler modems. Microcomputers were "centers of computing" and they had to do everything.

Today, computing is changing. The combination of fast and reliable networks, cheap servers, and easy virtual machines allows the construction of cloud computing, where processing is split across multiple processors. Google is taking advantage of this with its Chromebooks, which are low-end laptops that run a browser and little else. The "real" processing is performed not on the Chromebook but on web servers, often hosted in the cloud. (I'm typing this essay on a Chromebook.)

All of the major companies are moving to cloud technology. Google, obviously, with Chromebooks and App Engine and Android devices. Microsoft has its Azure services and versions of Word and Excel that run entirely in the cloud, and they are working on a low-end laptop that runs a browser and little else. It's called the "Cloudbook" -- at least for now.

Amazon.com has its cloud services and its Kindle and Fire tablets. IBM, Oracle, Dell, HP, and others are moving tasks to the cloud.

Except Apple. Apple has no equivalent of the Chromebook, and I don't think it can provide one. Apple's business model is to sell hardware at a premium, providing a superior user experience to justify that premium. That superior user experience is possible with local processing and excellent integration of hardware and software. Apps run on Macs, MacBooks, and iPhones. They don't run on servers.

A browser-only Apple laptop (a "Safaribook"?) would offer little value. The Apple experience does not translate to web sites.

When Apple does use cloud technology, it uses it as an accessory to the PC. The processing for Siri is done in a big datacenter, but it's all for Siri and the user experience. Apple's iCloud lets users store data and synchronize it across devices, but it is simply a big, shared disk. Siri and iCloud make the PC a better PC; they don't transform the PC.

This is the problem that Apple faces. It is stuck in the 1970s, when individual computers did everything. Apple has made the experience pleasant, but it has not changed the paradigm.

Computing is changing. Apple is not. That is what will cause Apple's downfall.

Wednesday, May 31, 2017

How many computers?

Part of the lore of computing discusses the mistakes people make in predictions. Thomas J. Watson (president of IBM) predicted the need for five computers -- worldwide. Ken Olson, founder and president of DEC, thought that no one would want a computer in their home.

I suspect that we repeat these stories for the glee that they bring. What could be more fun than seeing important, well-known people make predictions that turn out to be wrong?

Microsoft's goal, in contrast to the above predictions, was a computer in every home and on every desk, and each of them running Microsoft software. A comforting goal for those who fought in the PC clone wars against the mainframe empire.

But I'm not sure that T. J. Watson was wrong.

Now, before you point out that millions (billions?) of PCs have been sold, and that millions (billions?) of smartphones have been sold, and that those smartphones are really computers, hear me out.

Computers are not quite what we think they are. We tend to think of them as small, stand-alone, general-purpose devices. PCs, laptops, smartphones, tablets... they are all computers, right?

Computers today are computing devices, but the border around them is not so clear. Computers are useful when they are part of a network and connected to the internet. A computer that is not connected to the internet is not very useful. (Try an experiment: take any computer, smartphone, or tablet and disconnect it from the network. Now use it. How long before you become bored?)

Without e-mail, instant messages, and web pages, computers are not that interesting -- or useful.

The boxes we think of as computers are really only parts of a larger construct. That larger construct is built from processors and network cards and communication equipment and disks and server rooms and software and protocols. That larger "thing" is the computer.

In that light, we could say that the entire world is running on one "computer" which happens to have lots of processors and multiple operating systems and many keyboards and displays. Parts of this "computer" are powered at different times, and sometimes entire segments "go dark" and then return. Sometimes individual components fail and are discarded, like dead skin cells. (New components are added, too.)

So maybe Mr. Watson was right, in the long run. Maybe we have only one computer.

Monday, May 29, 2017

Microsoft's GVFS for git makes git a different thing

Microsoft is rather proud of their GVFS filesystem for git, but I think they don't understand quite what it is that they have done.

GVFS, in short, changes git into a different thing. The plain git is a distributed version control system. When combined with GVFS, git becomes... well, let's back up a bit.

A traditional, non-distributed version control system consists of a central repository which holds files, typically source code. Users "check out" files, make changes, and "check in" the revised files. While users have copies of the files on their computers, the central repository is the only place that holds all of the files and all of the revisions to the files. It is the one place with all information, and is a single point of failure.

A distributed version control system, in contrast, stores a complete set of files and revisions on each user's computer. Each user has a complete repository. A new user clones a repository from an existing team member and has a complete set of files and revisions, ready to go. The repositories are related through parent-child links; the new user in our example has a repository that is a child of the cloned repository. Each repository is a clone, except for the very first instance, which could be considered the 'root' repository. These copies provide redundancy and guard against the single point of failure in a traditional version control system.

Now let's look at GVFS and how it changes git.

GVFS replaces the local copy of a repository with a set of virtual files. The files in a repository are stored in a central location and downloaded only when needed. When checked in, the files are uploaded to the central location, not the local repository (which doesn't exist). From the developer's perspective, the changes made by GVFS are transparent. Git behaves just as it did before. (Although with GVFS, large repositories perform better than with regular git.)
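The virtualize-and-download-on-demand idea can be sketched in a few lines. This is a conceptual illustration only, not Microsoft's actual GVFS code; the VirtualFile class, and the dict standing in for the central server, are invented:

```python
class VirtualFile:
    """A file placeholder whose contents are fetched from a central
    server only on first read -- the core idea behind GVFS."""
    def __init__(self, path, fetch):
        self.path = path
        self._fetch = fetch      # callable that retrieves content from the server
        self._content = None     # nothing downloaded yet

    def read(self):
        if self._content is None:            # first access: download on demand
            self._content = self._fetch(self.path)
        return self._content

# A stand-in for the central repository (a dict instead of a real server).
central_store = {"src/main.c": b"int main(void) { return 0; }\n"}

f = VirtualFile("src/main.c", central_store.__getitem__)
assert f.read() == central_store["src/main.c"]   # fetched on first read
```

A repository full of such placeholders costs almost nothing until a file is actually opened, which is why large repositories perform better this way.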

Microsoft's GVFS changes the storage of repositories. It does not eliminate the multiple copies of the repository; each user retains their own copy. It does move those copies to the central server. (Or servers. The blog entry does not specify.)

I suppose you could achieve (almost) the same effect with regular git by changing the location of the .git directory. Instead of a local drive, you could use a directory on an off-premise server. If everyone did this, if everyone stored their git repository on the same server (say, a corporate server), you would have something similar to git with GVFS. (It is not exactly the same, as GVFS does other things to improve performance.)

Moving the git repositories off of individual, distributed computers and onto a single, central server changes the idea of a distributed version control system. The new configuration is something in between the traditional version control system and a distributed version control system.

Microsoft had good reason to make this change. The performance of standard git was not acceptable for a very large team. I don't fault them for it. And I think it can be a good change.

Yet it does make git a different creature. I think Microsoft and the rest of the industry should recognize that.

Sunday, May 21, 2017

Parallel processing on the horizon

Parallel processing has been with us for years. Or at least attempts at parallel processing.

Parallel processing has failed due to the numerous challenges it faces. It requires special (usually expensive) hardware. Parallel processing on conventional CPUs is simply processing items serially, because conventional CPUs can process only serially. (Multi-core processors address this problem to a small degree.) Parallel processing requires support in compilers and run-time libraries, and often new data structures. Most importantly, parallel processing requires tasks that are partitionable. The classic example of "nine women producing a baby in one month" highlights a task that is not partitionable, not divisible into smaller tasks.

Cloud computing offers a new twist on parallel processing.

First, it offers multiple processors. Not just multiple cores, but true multiple processors -- as many as you would like.

Second, it offers these processors cheaply.

Cloud computing is a platform that can handle parallel processing -- in some areas. It has its problems.

First, creating new cloud processing systems is expensive in terms of time. A virtual machine must be instantiated, started, and given software to handle the task. Then, data must be shipped to the server. After processing, the result must be sent back, or forward to another processor. The time for all of these tasks is significant.

Second, we still have the problems of partitioning tasks and representing the data and operations in a program.

There is one area of development that I believe is ready to leverage parallel processing. That area is testing.

The typical testing effort for a project can have multiple levels: unit tests, component tests, system tests, end-to-end tests, you name it. But each level of testing follows the same general pattern:

  • Get a collection of tests, complete with input data and expected results
  • For each test:
    1) Set up a test environment (program and data)
    2) Run the test
    3) Compare output to expected output
    4) Record the results
  • Summarize the results and report

In this process, the sequence of steps I've labelled 1 through 4 is repeated for each test. Traditional testing puts all of these tests on one computer, performing each test in sequence. Parallel testing can put each test on its own cloud-based processor, effectively running all tests at once.
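The loop above is exactly what cloud-based testing parallelizes: steps 1 through 4 are independent for each test, so they can all run at the same time. Here is a local sketch of the idea in Python, with a thread pool standing in for the fleet of cloud machines (the function under test and the test cases are invented for the example):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def run_test(case):
    """Steps 1 through 4 for one test: set up, run, compare, record."""
    name, func, arg, expected = case
    actual = func(arg)                     # run the test
    return (name, actual == expected)      # compare output to expected output

# A collection of tests: (name, function under test, input, expected result).
tests = [
    ("square of 3",  square,  3, 9),
    ("square of -2", square, -2, 4),
    ("square of 0",  square,  0, 1),   # a deliberately failing expectation
]

# Each test could run on its own cloud processor; here, the pool runs
# them concurrently on one machine.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_test, tests))

passed = sum(ok for _, ok in results)
print(f"{passed} of {len(results)} tests passed")   # summarize and report
```

Swap the local pool for a service that spins up one cloud instance per test and the structure of the code barely changes -- which is why testing is such a natural fit for this kind of parallelism.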

Testing has a series of well-defined and partitionable tasks. Modern testing methods use automated tests, so a test can run locally or remotely (as long as it has access to everything it needs). Testing can be a drain on resources and time, requiring lots of requests to servers and lots of time to complete all tests.

Testing in the cloud, and in parallel, addresses these issues. It reduces the time for tests and improves the feedback to developers. Cloud processing is cheap -- at least cheaper than paying developers to wait for tests to run.

I think one of the next "process improvements" for software development will be the use of cloud processing to run tests. Look for new services and changes to testing frameworks to support this new mode of testing.

Thursday, May 18, 2017

An echo of Wordstar

In 1979, Wordstar was the popular word processor of the time. It boasted "what you see is what you get" (WYSIWYG) because it would reformat text on the screen as you typed.

Wordstar had a problem on some computers. It would, under the right conditions, miss characters as you were typing. The problem was documented in an issue of "Personal Computing", comparing Wordstar to another program called "Electric Pencil". The cause was the automatic reformatting of text. (The reformatting took time, and that's when characters were lost. Wordstar was busy redrawing text and not paying attention to the keyboard.)

At the time, computers were primitive. They ran CP/M on an 8080 or Z-80 processor with at most 64K RAM. Some systems used interrupts to detect keystrokes but others simply polled the keyboard from time to time, and it was easy to miss a typed character.

So here we are in 2017. Modern PCs are better than the early microcomputers, or so we like to think. We have more memory. We have larger disks for storage. We have faster processors. They cost less. Better in every way.

So we like to think.

From a blog in 2017:
Of course, I always need to switch over to Emacs to get real work done.  IntelliJ doesn't like it when you type fast.  Its completions can't keep up and you wind up with half-identifiers everywhere. 
I'm flabbergasted.

(I must also note that I am not a user of IntelliJ, so I have not seen this behavior myself. I trust the report from the blogger.)

But getting back to being flabbergasted...

We have, in 2017, an application that cannot keep up with human typing?

We may have made less progress than we thought.