Thursday, April 30, 2015

Files are static, requests are dynamic

The transition from desktop or web applications to mobile/cloud systems is more than the re-organization of programs. It is a change in data sources: desktop and web applications often store data in files, while mobile/cloud systems store data via web services.

Files are static things. They perform no actions by themselves. A program can read the contents of a file and take action on those contents, but it must consume the contents as they exist. The file may contain just the data that a program needs, or it may contain more, or less. For example, a file containing a Microsoft Word document actually contains the text of the document, revisions to the text, information about fonts and formatting, and meta-information about the author.

A program reading the contents of that file must read all of that information; it has no choice. If the task is to extract the text -- and only the text -- the program must read the entire file, revisions and fonts and meta-information included. If we want only the meta-information, the program must read the entire file, text and revisions... you get the idea.

(The more recent DOCX format does isolate the different sets of information, and makes reading a subset of the file easier. The older DOC format required reading and interpreting the entire file to obtain any part of the file.)
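
As a small illustration of that difference, here is a sketch in Python (with "example.docx" as a placeholder file name) that opens a DOCX file as the ZIP package it is and reads only the meta-information part, leaving the text and revisions untouched:

    import zipfile

    # A DOCX file is a ZIP package: the document text lives in word/document.xml
    # and the meta-information (author, title, dates) lives in docProps/core.xml.
    # We can read just the part we want and ignore the rest.
    with zipfile.ZipFile("example.docx") as package:
        core_properties = package.read("docProps/core.xml")   # meta-information only

    print(core_properties.decode("utf-8"))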

Web services, used by mobile/cloud systems, are not static but dynamic. (At least they have the possibility of being dynamic. You can build web services that mimic files, but you probably want the dynamic versions.)

A web service can be dynamic because there is another program processing the request and creating the response. A web service to read a document can do more than simply return the bytes in the document file. It can perform some processing on our behalf. It can accept instructions, such as "give me the text" or "only the meta-information, please". We can delegate those chores to the web service, and our job becomes easier.
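
A minimal sketch of the idea, in Python with Flask (the route, the "part" parameter, and the in-memory DOCUMENTS table are invented for illustration, not any particular product's API):

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # A stand-in for real document storage; a real service would read the stored
    # document here (from files, a database, or anything else).
    DOCUMENTS = {
        "42": {"text": "Hello, world.",
               "meta": {"author": "A. Writer"},
               "revisions": []},
    }

    @app.route("/documents/<doc_id>")
    def get_document(doc_id):
        # The caller says which part it wants; the service does the reading
        # and the picking-out on the caller's behalf.
        part = request.args.get("part", "text")
        document = DOCUMENTS[doc_id]
        if part == "meta":
            return jsonify(document["meta"])
        if part == "revisions":
            return jsonify(document["revisions"])
        return jsonify({"text": document["text"]})

A client asking for "only the meta-information, please" requests /documents/42?part=meta; a client asking for the text requests /documents/42?part=text (or omits the parameter).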

Astute readers will observe that my arrangement of dynamic web services does not reduce the work involved; it merely shifts work to different parts of the system. (The web service must still read the entire document, pick out the bits of interest, and send those to us.) That is true. Yet it is also true that once in place, the web service provides an interface for reading (and writing) documents, and we may then choose to change the implementation of that storage.

With document web services in place, our applications are completely ignorant of the storage format, and the web services may change that format to suit their needs. A new version of the web services may store documents not in the original file format but in databases, or as JSON, or in any other appropriate form. I'm pretty sure that Google Docs uses this approach, and I suspect Microsoft's Office 365, if not using it now, will use it soon.

Moving from desktop and web to mobile/cloud lets us do new things, and it lets us do many of the things we do today, but differently. Look at the possibilities, and look at the savings in effort and cost.

Monday, April 27, 2015

The smallest possible cloud language

How small can a language be? Specifically, how small can we make a language for cloud computing?

Programs running in the cloud need not do a number of things that traditional programs must do. A program running in the cloud does not interact with a user, for example. (There may be a user directing a mobile app which in turn directs a cloud app, but that is an indirect interaction.)

Cloud programs do not read or write disk files, either. Nor do they access databases (directly).

Here's my list of minimal functions for a cloud program:

  • Accept an incoming web request (possibly with data in JSON)
  • Process the request (that is, operate on the JSON data)
  • Generate web requests to other servers
  • Receive responses from other servers
  • Send a response to the original web request (possibly with data in JSON)

That is the list for the smallest, simplest language for cloud computing. There is a little complexity hidden in the "operate on the JSON data"; the language must handle the values and data structures of JSON. Therefore it must handle numeric values, text values, boolean values, "null", lists, and dictionaries.
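
A minimal sketch of such a program, in Python with Flask and the requests library (the /totals route, the rates.internal URL, and the field names are made up for illustration):

    from flask import Flask, request, jsonify
    import requests

    app = Flask(__name__)

    @app.route("/totals", methods=["POST"])
    def totals():
        incoming = request.get_json()              # accept the request, with JSON data
        amounts = incoming.get("amounts", [])      # operate on the JSON data
        # Generate a request to another server and receive its response.
        reply = requests.post("http://rates.internal/convert",
                              json={"amounts": amounts})
        converted = reply.json().get("converted", [])
        # Send a response to the original request, with data in JSON.
        return jsonify({"total": sum(converted)})

Every operation in the sketch falls under one of the five items above; nothing touches a screen, a disk file, or a database.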

But that is it. That is the complete set of operations. The language (and its supporting libraries) does not have to handle dialogs, screen resolutions, responsive design, disk files, databases (those are handled by specialized database servers), or printing. We can remove functions that support all of those operations.

If we're clever, we can avoid the use of "generic" loops ("for i = 0; i < x; i++") and use the looping constructs of recent languages such as Ruby ("x.times()" or "x.each()") and Python (iterating directly over a collection).
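
In Python terms, the contrast looks something like this:

    values = [3, 5, 8]

    # The "generic" index-driven loop:
    total = 0
    for i in range(len(values)):
        total += values[i]

    # The same work using the language's own constructs:
    total = sum(values)        # or: for value in values: total += value

    # Python's analogue of Ruby's "3.times":
    for _ in range(3):
        print("hello")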

So the smallest -- not necessarily ideal, just the smallest -- language for cloud computing may be a reduced version of Python. Or Ruby.

Or perhaps a variant of a functional programming language like Haskell or Erlang. Possibly F#, but with reductions to eliminate unused functions.

Or -- and this is a stretch -- perhaps an extended version of Forth. Forth has little overhead to start, so there is little to remove. It operates on 16-bit numeric values; we would need support for larger numeric values and for text values. Yet it could be done.

Our list of candidates for cloud computing is:

  • A reduced version of Python
  • A reduced version of Ruby
  • Haskell
  • Erlang
  • A reduced F#
  • An enhanced Forth

Look for them in future cloud platforms.

Thursday, April 23, 2015

Small programs need small languages

The history of programming languages has been one of expansion. Programming languages start small (think BASIC, Pascal, and C) and expand to provide more capabilities to the programmer (think Visual Basic, Object Pascal, and C++). Programming languages expand because the programs we write expand.

Computer programs have expanded over the years. Books from the early years of programming (the 1970s) classify programs by size, with small programs consisting of hundreds of lines of code, large programs consisting of tens of thousands of lines, and "humongous" programs consisting of hundreds of thousands of lines of code. Today, we may still classify programs by lines of code, but most commercial applications range in size from hundreds of thousands of lines to tens of millions.

The expansionist effect on programs is tied to their single-computer nature. When a single computer must perform the calculations, then the program it runs must do everything.

Cloud computing breaks that paradigm. With cloud computing, the system may be large, but it consists of many computers providing granular services. That design allows for, and encourages, small programs. (Side note: If you're building a cloud system with large programs, you're doing it wrong.)

Cloud computing uses collections of small programs to assemble systems. Since the programs are small, the programming languages can be -- and should be -- small. That means that our strategy of language development, in which we have (mostly) striven to broaden the capabilities of programming languages, is no longer valid. Our new strategy must be to simplify, and probably specialize, our programming languages.

Monday, April 20, 2015

How long do programming languages stay popular?

Programming languages come and go. Some are more popular than others. Some are popular longer than others.

So just how long do programming languages stay popular?

I mean this in a very specific sense, one that is different from the general popularity of programming languages. The folks at www.tiobe.com have studied popularity in great detail, but they are considering all uses of programming languages.

There are many languages. Some, like COBOL and Fortran, have been with us for ages. One can argue that their popularity has extended from the 1950s to today, or over sixty years. But much of the demand for those languages comes from existing systems. Very few people today (I believe) start new endeavors with COBOL.

If we look at the languages used for start-ups and new projects, we see a pattern for the popularity of languages. That pattern is:

1950s: IBM mainframe assembly
1960s: IBM System/360 Assembly, COBOL, Fortran
1970s: COBOL, Fortran, RPG
1980s: PC Assembly, BASIC, C
1990s: C++, Visual Basic, Java, Perl
2000s: Java, C#
2010s: Python, Ruby, JavaScript

This list is not comprehensive. I have kept the list small, to focus on the big languages. There were lots of other languages (DIBOL, FOCAL, PL/I, Ada, Awk, Forth, etc.) that were used for projects, but they were never mainstream. Apologies if I have omitted your favorite language.

Looking at my selected "winners" of languages, we see a pattern: Languages are popular (for new projects) for a period of perhaps ten to fifteen years. Early languages such as IBM System/360 Assembler and COBOL have longer lives.

BASIC is an interesting case. It had popularity in the 1980s and then a second life as Visual Basic in the 1990s. Its first incarnation was as a general-purpose language for programming the original microcomputers (Apple II, Commodore PET and C-64, Radio Shack TRS-80, etc.). Its second life was as a general-purpose language for Windows.

All of these languages had a heyday of a little more than a decade. COBOL and Fortran fell to BASIC. BASIC mutated into Visual Basic and eventually fell to Java and C#. Perl emerged, had success, and has been replaced by Python.

Which leads us to an interesting point. We're now at Java's twentieth anniversary, and C#'s fifteenth. According to my theory, these languages should be on the wane -- for new projects. Certainly there are lots of existing Java systems and lots of existing C# applications. But what languages are people using for new systems?

Informal conversations with fellow developers indicate that new projects are using neither Java nor C#. Instead, new projects (or start-up companies) are using Python, Ruby, and JavaScript.

It may be, in the near future, that we will see conversion projects from Java and C# to newer languages.

Tuesday, April 14, 2015

Sans keyboard

Will the next big trend be devices without keyboards?

Not simply devices with virtual keyboards, such as those on phones and tablets, but devices without keyboards at all? Are such devices possible?

Well, of course a keyboard-less device is possible. Not only is it possible, we have had several popular devices that were used without keyboards. They include:

  • The pocket-watch: arguably the first wearable computing device.
  • The wrist-watch: early, pre-digital watches kept track of time. Some had indicators for the day of the month; a few for the day of the week. Some included stop-watches.
  • Stop-watches, since we mentioned them.
  • Pocket transistor radios: controlled with one dial for volume and another for tuning, they kept us connected to news and music.
  • The Apple iPod.
  • Film cameras: from the Instamatic cameras that used 126 or 110 type film, to Polaroid cameras that yielded instant photos, to SLR cameras (with dials and knobs for focus, aperture, and shutter speed).
  • Digital cameras: knobs and buttons, but no keyboard.
  • Some e-book readers (or e-readers): my Kobo e-reader lets me read books and it has no keyboard. Early Amazon.com Kindle e-readers had no keyboard.

So yes, we can have devices without keyboards.

Keyboard-less devices tend to be used to consume content. All of the devices listed above, except for the cameras, are for the consumption of content.

But can we replace our current tablets with a keyboard-less version? Is it possible to design a new type of tablet, one that does not connect to a bluetooth keyboard or provide a virtual keyboard? I think the answer is a cautious 'yes', although there are several challenges.

We need keyboards on our current tablets to provide information for identity and configuration: a user name, or authentication with a server, as with the initial set-up of an Android tablet or iPad. One also needs a keyboard to authenticate apps with their servers (Twitter, Facebook, Yahoo mail, etc.).

But authentication could be handled through other mechanisms. Assuming that we "solve" the authentication "problem", where else do we need keyboards?

A number of places, as it turns out. Tagging friends on Facebook. Generating content for Facebook and Twitter status updates. Recommending people on LinkedIn. Specifying transaction amounts in banking apps. (Dates for transactions, however, can be handled with calendars, which are merely custom-shaped keyboards.)

Not to mention the challenge of changing people's expectations. Moving from keyboard to keyboard-less is no small change, and many will (probably) resist.

So I think keyboards will be with us for a while.

But not necessarily forever.

Sunday, April 12, 2015

Open source takes advantage of abandoned hardware

Of the popular desktop operating systems (Windows, Mac OS, and Linux), Windows has a commanding lead. Mac OS is a distant second, and Linux -- on the desktop -- is a far distant third. Yet Linux has one advantage over Windows and Mac OS.

This advantage was made real to me when I dusted off an old 2006-era MacBook. It is in good condition and, due to Apple's design, still serviceable. Yet it ran Mac OS X 10.4 "Tiger", an operating system that Apple abandoned several years ago. Not only has Apple abandoned the operating system, they have abandoned the hardware. (The good folks at Apple would much prefer that one purchase a new device with a new operating system. That makes sense, as Apple is in the business of selling hardware.)

Microsoft is not in the business of selling hardware (keyboards and Surface tablets are a very small part of their business), yet they also abandon operating systems and hardware. An old Dell desktop PC, sitting in the corner, runs Windows XP and cannot be upgraded to a later system. Microsoft has determined that its combination of processor, memory, and disk is not worthy of a later version.

So I have an Apple MacBook that I cannot upgrade to a later version of Mac OS X, and a desktop PC that I cannot upgrade to a later version of Windows. The MacBook could run some later versions of Mac OS X, but those versions are no longer available, and the current version ("Yosemite") won't run on it. The desktop PC is "maxed out" at Windows XP.

Here is where open source has a foothold. Apple will not supply an operating system for the MacBook -- not even the original version anymore. Microsoft will not supply an operating system for the old Dell desktop PC -- not even the original version. Yet Linux can run on both machines.

I have, in fact, replaced Mac OS X on the MacBook with Ubuntu Linux. I'm using the new combination to type this post, and it works.

Many people may casually discard old computers. Other folks, though, may revive those computers with open source software. Over time, the market share of Linux will grow, if only because Apple and Microsoft have walked away from the old hardware.

Thursday, April 9, 2015

UI stability

The years from 1990 to 2010 were the years of Windows dominance, with a stable platform for computing. Yet this platform was not immune to changes.

Microsoft made several changes to the interface of Windows. The change from Windows 3 to Windows 95 (or Windows NT) was significant. Microsoft introduced better fonts, "3-D" controls, and the "start" menu. Microsoft made more changes in Windows XP, especially in the "home" edition. Windows Vista saw more changes (to compete with Apple) and Windows 8 expanded the "start" menu to full screen with active tiles.

These changes in Windows required users to re-learn the user interface.

Microsoft is not alone. Apple, too, has made changes to the user interfaces for Mac OS and iOS. Ask a long-time user of Mac OS about scrolling, and you may get an earful about a change that reversed the direction for scroll operations.

Contrast these changes to the command line. For Windows, the command line is provided by the "Command Window" or the CMD shell. It is based heavily on the MS-DOS command line, which in turn was based on the CP/M command line, which was based on DEC operating systems. While Microsoft has added a few features over time, the basic command line remains constant. Anyone familiar with MS-DOS 2.0 would be comfortable in the latest Windows command prompt. (Not the Powershell; that is a different beast.)

For Unix or Linux, the command line depends on the shell program in use. There are several shell programs: the C Shell (csh), the Bourne Shell (sh), and the Bourne Again Shell (bash), to name a few. They all do the same thing, each with its own habits, yet each has been consistent over the years.

System administrators and developers often favor the command line. Perhaps because it is powerful. Perhaps because it is terse. Perhaps because it requires little network bandwidth and allows for effective use over low-speed connections. But also perhaps because it has been consistent and stable over the years, and requires little or no ongoing learning.

This may be a lesson for application developers (and mobile designers).

Tuesday, April 7, 2015

Listening to your customers

Conventional wisdom is that a business should listen to its customers and deliver products or services that meet the needs of those customers. It sounds good.

Many companies follow this wisdom. Microsoft has done so. One of the biggest messages that customers tell Microsoft is that they want their existing systems to keep working. They want their e-mail and document systems (Outlook and Word) to keep working. They want their databases to keep working. They want their custom-built software to keep working. And Microsoft complies, going to great lengths to keep new versions of operating systems and applications compatible with previous versions.

Apple, in contrast, does not keep their systems compatible. Apple constantly revises the design of their hardware and software, breaking backwards compatibility. The latest MacBooks use a single connector (a USB type C connector) and drop all of the older connectors (network, power, USB type A, Thunderbolt, etc.). Apple has revised software to drop old features. (Mac OS X "Lion" removed support for 32-bit processors.) Is Apple not listening to its customers?

I believe Apple *is* listening. Apple gets a different message from customers: They want their systems to "just work" and they want their systems to be "cool". Those elements mean that the system design must change (especially to be "cool"). Apple offers new, cool systems; Microsoft offers stability.

Monday, April 6, 2015

Web services use the Unix way

Web services embody a very different mindset than the typical Windows application. Windows applications are large, all-encompassing systems that contain everything they need.

Web services, in contrast, are small fractions of a system. A complete system can be composed of web services, but a web service is not a complete system.

We've seen this pattern before. Not in Windows, nor in PC-DOS or MS-DOS, but in Unix (and now Linux).

"The Art of Unix Programming" by E.S. Raymond covers the topic well. The author describes the philosophy of small, connectable programs. Each program does one thing well, and systems are built from these programs. Anyone who has worked with the Unix (or Linux) command line knows how to connect multiple programs into a larger system.

We don't need to discover the properties of good web services. We don't need to re-invent the techniques for designing good web services. The properties and techniques already exist, in the command-line philosophy of Unix.
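
As a rough analogy in Python (the functions and the pipeline are invented for illustration): each piece does one small thing, and the system is their composition, in the spirit of a Unix pipeline.

    from collections import Counter

    # Each function does one small thing, like a single command in a pipeline.
    def split_words(text):
        return text.lower().split()

    def drop_short(words):
        return [word for word in words if len(word) > 3]

    def most_common(words, n=3):
        return Counter(words).most_common(n)

    # The "system" is the composition of the pieces, analogous to piping
    # the output of one small program into the next.
    text = "small programs that do one thing well and work together"
    print(most_common(drop_short(split_words(text))))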

Thursday, April 2, 2015

Mobile operating systems break the illusion of control

The mobile operating systems iOS and Android are qualitatively different from previous operating systems like Windows, Mac OS, and Linux. They break the illusion that an application has control; this illusion has been with us since the first operating systems. To understand the illusion, and how mobile operating systems are different, we must understand the origins of operating systems.

At the dawn of computing, the hardware emerged from electro-mechanical relays, but there was no software. The first computers were programmed with wires connecting components to other components. A computer was a custom-purpose device, designed and built to perform one calculation. Shortly after, the "programming wires" were moved to removable boards, which allowed a "program" to be removed from the computer and stored for later use.

The first programs (in the sense we know them) were sequences of numerical values that could be loaded into a computer's memory. They were a softer variant of the wired plug-boards in the earlier computers. Building the sequence of numerical values was tedious; one had to understand not only the problem to be solved but also the processor's instruction set. These sequences are now called "machine language".

Programmers, being what they are, developed programs to ease the chore of creating these sequences of machine-language values. These programs were the first assemblers; they converted symbols into executable sequences. A programmer could work with the much easier-to-understand symbolic code and convert the symbols to a program when his changes were done.

Up to this point, the operation of the computer was a simple one. Create a program, insert it into the computer, and let it run. The program instructed the computer, and the computer performed the calculations.

There were no operating systems. It was the computer and the program, alone together in the small universe of computing hardware. The program was in charge and the computer obeyed its instructions. (Blindly and literally, which meant that the programmer had to be precise and complete in his description of the operations. That aspect of programming remains with us today.)

The first operating systems were little more than loaders for programs. Programmers found that the task of loading an executable program was a chore, and programmers, being what they are, created programs to ease that task. A loader could start with a collection of programs (usually stored in a deck of punch cards), load the first one, let it run, and then load and run the next program.

Of course, the loader remained in memory, and the loaded programs had to avoid using that memory and overwriting the loader; if they did overwrite it, the loader would be unable to continue. I imagine that the very earliest arrangements worked by agreement: the loader would use one block of addresses, and the loaded programs would use other memory but not the block dedicated to the loader. The running program was still "in charge" of the computer, but it had to honor the "loader agreement".

This notion of "being in charge" is important. It is a notion that has been maintained by operating systems -- up to mobile operating systems. More on that later.

Operating systems grew out of the concept of the loader. They became more powerful, allowing more sharing of the expensive computer. They assumed the following functions:

  • Allocation and protection of memory (you can use only what you are assigned)
  • Control of physical devices (you must request operations through the operating system)
  • Allocation of CPU (time slices)

These are the typical functions we associate with operating systems.

Over the years, we have extended operating systems and continued to use them. Yet in all of that time, from IBM's System/360 to DEC's VMS to Microsoft's Windows, the understanding has been that our program (our application), once loaded, is "in control" until it exits. This is an illusion, as our application can do very little on its own. It must request all resources from the operating system, including memory. It must request all actions through the operating system, including operations on devices (displaying a window on a screen, sending text to a printer, saving a file to disk).

This illusion persists, I believe, due to the education of programmers (and system operators, and computer designers), not merely through formal training but also through informal channels. Our phrases indicate this: "Microsoft Word runs and prints a document" or "Visual Studio builds the executable" or "IIS serves the HTML document". Our conversations reinforce the belief that the running program is in control.

And here is where the mobile operating systems come in. Android and iOS have very clear roles for application programs, and those roles are subservient to the operating system. A program does not run in the usual sense, making requests of the operating system. Instead, the app (and I will use the term "app" to indicate that the program is different from an application) is activated by the operating system when needed. That activation sees the operating system instruct the app to perform a task and then return control to the operating system.

Mobile operating systems turn the paradigm of the "program in control" inside-out. Instead of the program making requests of the operating system, the operating system makes requests of the program. The operating system is in control, not the program.
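
A toy sketch of that inversion, in Python (not a real mobile API; the class and method names are invented): the operating system owns the loop and calls into the app when it decides to.

    # A toy "operating system" that stays in control and makes requests of the
    # app, rather than the other way around. (Names are invented; this is not
    # the Android or iOS API.)
    class App:
        def on_create(self):
            print("app: set up")

        def on_request(self, task):
            print("app: performing", task)

        def on_stop(self):
            print("app: cleaned up")

    class OperatingSystem:
        def run(self, app):
            app.on_create()
            for task in ["show screen", "handle tap", "sync data"]:
                app.on_request(task)    # the OS asks the app to perform a task
            app.on_stop()               # the OS, not the app, decides when to stop

    OperatingSystem().run(App())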

This view is very different from our traditional view, yet it is an accurate one. Apps are not in control. Applications are not in control -- and have not been for many years.