Sunday, October 4, 2015

Amazon drops Chromecast and Apple TV

Amazon announced that it would stop selling Chromecast and Apple TV products, a move that has raised a few eyebrows. Some have talked about anti-trust actions.

I'm not surprised by Amazon's move, and yet I am surprised.

Perhaps an explanation is in order.

The old order saw Microsoft as the technology leader, setting the direction for the use of computers at home and in the office. That changed with Apple's introduction of the iPhone and later iPads and enhanced MacBooks. It also changed with Amazon's introduction of cloud computing services. Google's rise in search technologies and its introduction of Android phones and tablets was also part of the change.

The new order sees multiple technology companies and no clear leader. As each player moves to improve its position, it, from time to time, blocks other players from working with its technologies. Apple MacBooks don't run Windows applications, and Apple iPhones don't run Android apps. Android tablets don't run iOS apps. Movies purchased through Apple's iTunes won't play on Kindles (and you cannot even move them).

The big players are building walled gardens, locking in user data (documents, music, movies, etc.).

So it's no surprise that Amazon would look to block devices that don't serve its purposes, and in fact serve other walled gardens.

What's surprising to me is the clumsiness of Amazon's announcement. The move is a bold one, obvious in its purpose. Microsoft, Google, and Apple have been more subtle in their moves.

What's also surprising is Amazon's attitude. My reading of the press and blog entries is one of perceived arrogance. Amazon is a successful company. They are well-respected for their sales platform (web site and inventory) and for their web services offerings. But they have little in the way of loyalty, especially on the sales side. Arrogance is something they cannot afford.

Come to think of it, their sales organization has taken a few hits of late, mostly with employee relations. This latest incident will do nothing to win them new friends -- or customers.

It may not cost them customers, at least in the short term. But it is a strategy that I would recommend they reconsider.

Thursday, October 1, 2015

The birth of multiprogramming

Early computers ran one program at a time. They were also slow. This wasn't a problem. At first.

Early computers ran in "batch mode" - a non-interactive mode that took its input from punch cards or magnetic tape, rather than from people typing at terminals (much less working at their own computers, as we do today).

Companies had programs for each task: a program to update inventory, a program to update sales information, a program to print personnel reports, etc. Each program was a "job" with its program, input data, and output data.

The advantage of batch mode processing is that the job runs as an independent unit and it can be scheduled. Your collection of programs could be planned, as each used specific data, generated specific data, and ran for a (usually) predictable amount of time. Companies would run their daily jobs every day, their weekly jobs perhaps every Saturday, their monthly jobs at the end of the month (or more often during the first days of the next month), and their annual jobs at the end of the year.

If your programs all ran successfully, and within their usual timeframes, you were happy. The problem for companies was that they tended to grow, increasing the number of customers they supported and the number of sales they created. Those increases meant an increase in the size of data for processing, and that meant increased processing time to run their computer jobs.

If you have spare computing time, you simply run jobs longer. But what happens when you don't have spare processing time? What happens when your daily jobs take more than twenty-four hours to run?

In today's world of personal computers and cloud processing, we simply order up some additional computers. That was not possible in the early days of computing: computers were expensive.

Instead of buying (or leasing) more computers, we looked for ways to make computers more efficient. One of the first methods was called "multiprogramming" and it allowed multiple programs to run at the same time.

Successful multiprogramming had a number of challenges: loading multiple programs into memory (at different locations), preventing one program from writing to memory allocated to another, and sharing the processor among the simultaneously executing programs. While these are all tasks that any modern operating system handles, in its day it was a significant change.

It was also successful. It took the time spent waiting for input/output operations and re-allocated it to processing. The result was an increase in available processing time, which meant that a company could run more programs without buying a larger (and more expensive) computer.

Multiprogramming shared the processor by using what we call "cooperative multitasking". A program ran until it requested an input/output operation, at which point the operating system initiated the operation and switched the processor to a different program. The input/output operation was handled by a separate device (a card reader or tape reader, or maybe a disk drive) so it could continue without the main processor. This freed the main processor to do some other work in the form of another program.
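
The scheme is easy to sketch. The toy Python below is my own illustration (not any real operating system's scheduler): each job is a generator that runs until it "requests I/O" by yielding, and a simple round-robin loop switches to the next job.

    from collections import deque

    def job(name, steps):
        """A toy 'program': compute for a while, then request I/O by yielding."""
        for step in range(steps):
            print(f"{name}: computing step {step}")
            # Requesting an input/output operation hands control back to the scheduler.
            yield f"{name} waiting on I/O"
        print(f"{name}: finished")

    def run(jobs):
        """A cooperative scheduler: switch jobs only when one yields for I/O."""
        ready = deque(jobs)
        while ready:
            current = ready.popleft()
            try:
                reason = next(current)      # run until the job requests I/O
                print(f"  scheduler: {reason}; switching")
                ready.append(current)       # requeue it; the 'device' works meanwhile
            except StopIteration:
                pass                        # the job has ended; drop it

    run([job("payroll", 3), job("inventory", 2)])

Notice that a job which never yields starves all the others -- exactly the weakness that pre-emptive task switching, described below, was designed to remove.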

Windows 3.x used a similar technique to switch between programs (and Windows 95 kept it for older 16-bit applications).

Later operating systems used "pre-emptive task switching", giving programs small amounts of processing time and then suspending one program and activating another. This was the big change for Windows NT.

Multiprogramming was driven by cost reduction (or cost avoidance) and focussed on internal operations. It made computing more efficient in the sense that one got "more computer" for the same amount of hardware. (The operating system had to be more sophisticated, of course.) But it did nothing for the user; it made no changes to the user experience. One still had to schedule jobs to run programs with specific input and output data.

Cost avoidance is one driver for IT. Its focus on internal operations is appropriate. But there are other drivers, and they require other techniques.

Sunday, September 27, 2015

The (eventual) rise of style-checkers

The IT world changes technology and changes practices. We have changed platforms from mainframes to personal computers, from desktop PCs to web servers, and now we're changing from web servers to cloud computing. We've changed programming languages from assembly to Cobol and Fortran, to Basic, to Pascal and C, to C++, to Java and C#. We're now changing to JavaScript and Python.

We change our coding styles, from "unstructured" to structured and from structured to object-oriented.

We also change our development processes. We started with separate tools (editors, compilers, debuggers) and then shifted to the 'IDE' - the integrated development environment. We added version control systems.

The addition of version control is an interesting study. We had the tools for some time, yet we (as an industry) did not use them immediately. I speculate that it was management that spurred the use of version control, and not individual programmers. Version control offers little benefit to the individual; it offers more to managers.

The use of version control systems allows for larger teams and larger projects. Without version control, team members must coordinate their changes very carefully. Files are kept in a common directory, and every update must leave a consistent set of source code. It is easy for two programmers to start working on different changes at the same time. Both begin by copying the common code to their private workspaces. They make changes, and the first one done copies his changes into the common workspace. When the second programmer finishes, he copies his changes to the common area, overwriting the changes from the first programmer. Thus, the work of the first programmer "disappears".

These problems can be avoided by careful checking prior to copying to the common area. For a small team (fewer than fifteen people) this requires effort and discipline, and is possible. For a larger team, the coordination effort is daunting.

An additional benefit of version control systems: the illusion of accountability. With controlled access and logged activity, project managers could see who made which changes.

Project managers, not individual developers, changed the process to use version control. That's as it should be: project managers are the people to define the process. (They can use input from developers, however.)

We have style-checking programs for many languages. A quick search shows style checkers for C, C++, C#, Java, Perl, Python, Ruby, Ada, Cobol, and even Fortran. Some checkers are simple affairs, checking nothing more than indentation and line length. Others are comprehensive, identifying overly complex modules.

Yet style checkers are not used in the typical development project. The question is: when will style-check programs become part of the normal tool set?

Which is another way of asking: when (if ever) will a majority of development teams use style-check programs as part of their process?

I think that the answer is: yes, eventually, but not soon.

The same logic that applied to version control systems applies to style checkers. It won't be individuals who bring style checkers into the process; it will be project managers.

Which means that the project managers will have to perceive some benefit from style checkers. There is a cost to using them. It is a change to the process, and some (many) people are resistant to change. Style checkers enforce a certain style on the code, and some (many) developers prefer their own style. Style checkers require time to run, time to analyze the output, and time to change the code to conform, all of which take time away from the main task of writing code.

The benefits of style checkers are somewhat hazy, and less clear than version control systems. Version control systems fixed a clear, repeating problem ("hey, where did my changes go?"). Style checkers do no such thing. At best, they make code style consistent across the project, which means that the code is easier to read, changes are easier to make, and developers can move from one section of the code to another. Style checkers invest effort now for a payback in the future.

I suspect that the adoption of style checkers will be slow, and while led by project managers, it will be encouraged by developers. Or some developers. I suspect that the better developers will be more comfortable with style checkers and will want to work on projects that use them. It may be that development teams will form two groups: one with style checkers and one without. The teams that use style checkers will tend to hire other developers who use style checkers, and the teams that don't will tend to hire developers who don't. Why? Because a developer in the "wrong" environment will be uncomfortable with the process - and the code. (The same thing happened with version control systems.)

For style checkers to be adopted, I see the following preconditions:
  • The style checkers have to exist (at least for the team's platform and language)
  • The style checkers have to be easy to use
  • The recommendations from style checkers have to be "tuneable" - you can't be swamped with too many messages at once
  • The team has to be willing to improve the code
  • The project management has to be willing to invest time now for payment later
We have the tools, and they are easy to use (well, some of them). I think teams are willing to improve the code. What we need now is a demonstration that well-maintained code is easier to work with in the long term.
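
To give a sense of how little machinery the "simple affair" kind of checker needs, here is a minimal Python sketch that flags only overlong lines and trailing whitespace. The limit and the messages are my own illustration, not any particular tool; making the limit a parameter is the sort of "tuning" mentioned above.

    import sys

    def check_style(path, max_length=80):
        """Report overlong lines and trailing whitespace -- nothing more."""
        problems = []
        with open(path) as source:
            for number, line in enumerate(source, start=1):
                line = line.rstrip("\n")
                if len(line) > max_length:
                    problems.append(f"{path}:{number}: line longer than {max_length} characters")
                if line != line.rstrip():
                    problems.append(f"{path}:{number}: trailing whitespace")
        return problems

    if __name__ == "__main__":
        for filename in sys.argv[1:]:
            for problem in check_style(filename):
                print(problem)

A comprehensive checker adds many more rules, but the shape is the same: read the code, apply the rules, report the violations.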

Thursday, September 24, 2015

An imaginary Windows version of the Chromebook

Acer and HP have cloudbooks - laptop computers outfitted with Windows and a browser - but they are not really the equivalent of a Chromebook.

A Chromebook is a lightweight laptop computer (in both physical weight and computing power) equipped with a browser and just enough of an operating system to run the browser. (And some configuration screens. And the ssh program.) As such, they have minimal administrative overhead.

Cloudbooks - the Acer and HP versions - are lightweight laptops equipped with the full Windows operating system. Since they have the entire Windows operating system, they have the entire Windows administrative "load".

Chromebooks have been selling well (possibly due to their low prices). Cloudbooks have been selling... well, I don't know. There are only a few models from Acer and a few from HP; far fewer than the plethora of Chromebooks from numerous manufacturers. My guess is that they are selling in only modest quantities.

Would a true "Windows Chromebook" sell? Possibly. Let's imagine one.

It would have to use a different configuration than the current cloudbooks. It would have to be a lightweight laptop with just enough of an operating system to run the browser. A Windows cloudbook would need a browser (let's pick the new Edge browser) and a stripped-down version of Windows that is just enough to run it.

I suspect that the components of Windows are cross-dependent and one cannot easily build a stripped-down version. Creating such a version of Windows would require the re-engineering of Windows. But since this is an imaginary device, let's imagine a smaller, simpler version of Windows.

This Windows cloudbook would have to match the price of the Chromebooks. That should be possible for hardware; the licensing fees for Windows may push the price upwards.

Instead of locally-installed software, everything would run in the browser. To compete with Google Docs, our cloudbook would have Microsoft Office 365.

But then: Who would buy it?

I can see five possible markets: enterprises, individual professionals, home users, students, and developers.

Enterprises could purchase cloudbooks and issue them to employees. This would reduce the expenditures for PC equipment but might require different licenses for professional software. Typical office jobs that require Word and Excel could shift to the web-based versions of those products. Custom software may have to run in virtual desktops accessed through the company's intranet. Such a configuration may make it easier for a more mobile workforce, as applications would run from servers and data would be stored on servers, not local PCs.

Individual professionals might prefer a cloudbook to a full Windows laptop. Then again, they might not. (I suspect most independent professionals using Windows are using laptops and not desktops.) I'm not sure what value the professional receives by switching from laptop to cloudbook. (Except, maybe, a lighter and less expensive laptop.)

Home users with computers will probably keep using them, and purchase a cloudbook only when they need it. (Such as when their old computer dies.)

Students could use cloudbooks as easily as they use Chromebooks.

Developers might use cloudbooks, but they would need development tools available in the browser. Microsoft has development tools that run in the browser, and so do other companies.

But for any of these users, I see them using a Chromebook just as easily as using a Windows cloudbook. Microsoft Office 365 runs in the Chrome and Firefox browsers on Mac OSX and on Linux. (There are apps for iOS and Android, although limited in capabilities.)

There is no advantage to using a Windows cloudbook -- even our imaginary cloudbook -- over a Chromebook.

Perhaps Microsoft is working on such an advantage.

Their Windows RT operating system was an attempt at a reduced-complexity configuration suitable for running a tablet (the ill-fated Surface RT). But Microsoft departed from our imagined configuration in a number of ways. The Surface RT:

- was released before Office 365 was available
- used special versions of Word and Excel
- had a complex version of Windows, reduced in size but still requiring administration

People recognized the Surface RT for what it was: a low-powered device that could run Word and Excel and little else. It had a browser, and it had the ability to run apps from the Microsoft store, but the store was lacking. And while limited in use, it still required administration.

A revised cloudbook may get a warmer reception than the Surface RT. But it needs to focus on the browser, not locally-installed apps. It has to have a simpler version of Windows. And it has to have something special to appeal to at least one of the groups above -- probably the enterprise group.

If we see a Windows cloudbook, look for that special something. That extra feature will make cloudbooks successful.

Sunday, September 20, 2015

Derivatives of programming languages

Programmers are, apparently, unhappy with their tools. Specifically, their programming languages.

Why do I say that? Because programmers frequently revise or reinvent programming languages.

If we start with symbolic assembly language as the first programming language, we can trace the development of other languages. FORTRAN, in its beginning, was a very powerful macro assembler (or something like it). Algol was a new vision of a programming language, in part influenced by FORTRAN. Pascal was developed as a simpler language, as compared to Algol.

Changes to languages come in two forms. One is an enhancement, a new version of the same language. For example, Visual Basic had multiple versions, yet it was essentially the same language. FORTRAN changed, from FORTRAN IV to FORTRAN 66 to Fortran 77 (and later versions).

The other form is a new, separate language. C# was based on Java, yet was clearly not Java. Modula and Ada were based on Pascal, yet quite different. C++ was a form of C with object-oriented programming added.

Programmers are just not happy with their languages. Over the half-century of programming, we have had hundreds of languages. Only a small fraction have gained popularity, yet we keep tuning them, adjusting them, and deriving new languages from them. And programmers are not unwilling to revamp an existing language to meet the needs of the day.

There are two languages that are significant exceptions: COBOL and SQL. Neither of these has been used (to my knowledge) to develop other languages. At least not popular ones. Each has had new versions (COBOL-61, COBOL-74, COBOL-85, SQL-86, SQL-89, SQL-92, and so on) but none have spawned new, different languages.

There have been many languages that have had a small following and never been used to create a new language. It's one thing for a small language to live and die in obscurity. But COBOL and SQL are anything but obscure. They drive most business transactions in the world. They are used in all organizations of any size. One cannot work in the IT world without being aware of them.

So why is it that they have not been used to create new languages?

I have a few ideas.

First, COBOL and SQL are popular, capable, and well-understood. Both have been developed for decades, they work, and they can handle the work. There is little need for a "better COBOL" or a "better SQL".

Second, COBOL and SQL have active development communities. When a new version of COBOL is needed, the COBOL community responds. When a new version of SQL is needed, the SQL community responds.

Third, the primary users of COBOL and SQL (businesses and governments) tend to be large and conservative. They want to avoid risk. They don't need to take a chance on a new idea for database access. They know that new versions of COBOL and SQL will be available, and they can wait for a new version.

Fourth, COBOL and SQL are domain-specific languages, not general-purpose. They are suited to financial transactions. Hobbyists and tinkerers have little need for COBOL or a COBOL-like language. When they experiment, they use languages like C or Python ... or maybe Forth.

The desire to create a new language (whether brand new or based on an existing language) is a creative one. Each person is driven by his own needs, and each new language probably has different root causes. Early languages like COBOL and FORTRAN were created to let people be more productive. The urge to help people be more productive may still be there, but I think there is a bit of fun involved. People create languages because it is fun.

Wednesday, September 16, 2015

Collaboration is an experiment

The latest wave in technology is collaboration. Microsoft, Google, and even Apple have announced products to let multiple people work on documents and spreadsheets at the same time. For them, collaboration is The Next Big Thing.

I think we should pause and think before rushing into collaboration. I don't say that it is bad. I don't say we should avoid it. But I will say that it is a different way to work, and we may want to move with caution.

Office work on PCs (composing and editing documents, creating spreadsheets, preparing presentations) has been, due to technology, solitary work. The first PCs had no networking capabilities, so work had to be individual. Even after networking hardware and basic network support arrived in operating systems, applications were designed for single users.

Yet it was not technology alone that made work solitary. The work was solitary prior to PCs, with secretaries typing at separate desks. Offices and assignments were designed for independent tasks, possibly out of a desire for efficiency (or efficiency as perceived by managers).

Collaboration (on-line, real-time, multiple-person collaboration as envisioned in this new wave of tools) is a different way of working. For starters, multiple people have to work on the same task at the same time. That implies that people agree on the order in which they perform their tasks, and the time they devote to them (or at least the order and time for some tasks).

Collaboration also means the sharing of information. Not just the sharing of documents and files, but the sharing of thoughts and ideas during the composition of documents.

We can learn about collaboration from our experiences with pair programming, in which two programmers sit at one computer and develop a program. The key lessons I have learned are:

  • Two people can share information effectively; three or more are less effective
  • Pair program for a portion of the day, not the entire day
  • Programmers share with multiple techniques: by talking, pointing at the screen, and writing on whiteboards
  • Some pairs of people are more effective than others
  • People need time to transition from solitary-only to pair-programming

I think the same lessons will apply to most office workers.

Collaboration tools may be effective with two people, but more people working on a single task may be, in the end, less effective. Some people may be "drowned out" by "the crowd".

People will need ways to share their thoughts, beyond simply typing on the screen. Programmers working together can talk; people working in a shared word processor will need some other communication channel, such as a phone conversation or a chat window.

Don't expect people to collaborate for the entire day. It may be that some individuals are better at working collaboratively than others, due to their psychological make-up. But those individuals will have been "selected out" of the workforce long ago, due to the solitary nature of office work.

Allow for transition time to the new technique of collaborative editing. Workers have honed their skills at solitary composition over the years. Changing to a new method requires time -- and may lead to a temporary loss of productivity. (Just as transitioning from typewriters to word processors had a temporary loss of productivity.)

Collaboration is a new way of working. There are many unknowns, including its eventual effect on productivity. Don't avoid it, but don't assume that your team can adopt it overnight. Approach it with open eyes and gradually, and learn as you go.

Sunday, September 13, 2015

We program to the interface, thanks to Microsoft

Today's markets for PCs, for smart phones, and for tablets show a healthy choice of devices. PCs, phones, and tablets are all available from a variety of manufacturers, in a variety of configurations. And all run just about everything written for the platform (Android tablets run all Android applications, Windows PCs run all Windows applications, etc.).

We don't worry about compatibility.

It wasn't always this way.

When IBM delivered the first PC, it provided three levels of interfaces: hardware, BIOS, and DOS. Each level was capable of some operations, but the hardware level was the fastest (and some might argue the most versatile).

Early third-party applications for the IBM PC were programmed against the hardware level. This was an acceptable practice, as the IBM PC was considered the standard for computing, against which all other computers were measured. Computers from other vendors used different hardware and different configurations and were thus not compatible. The result was that the third-party applications would run on IBM PCs and IBM PCs only, not on systems from other vendors.

Those early applications encountered difficulties as IBM introduced new models. The IBM PC XT was very close to the original PC, and just about everything ran -- except for a few programs that made assumptions about the amount of memory in the PC. The IBM PC AT used a different keyboard and a different floppy disk drive, and some software (especially those that used copy-protection schemes) would not run or sometimes even install. The EGA graphics adapter was different from the original CGA graphics adapter, and some programs failed to work with it.

The common factor in the failures of these programs was their design: they all communicated directly with the hardware and made assumptions about it. When the hardware changed, their assumptions were no longer valid.

We (the IT industry) eventually learned to write to the API, the high-level interface, and not address hardware directly. This effort was due to Microsoft, not IBM.

It was Microsoft that introduced Windows and won the hearts of programmers and business managers. IBM, with its PS/2 line of computers and its OS/2 operating system, struggled to maintain control of the enterprise market, but failed. I tend to think that IBM's dual role in supplying hardware and software contributed to that failure.

Microsoft supplied only software, and it sold almost no hardware. (It did provide things such as the Microsoft Mouse and the Microsoft Keyboard, but these saw modest popularity and never became standards.) With its focus on software, Microsoft made its operating system run on various hardware platforms (including processors such as DEC's Alpha and Intel's Itanium) and Microsoft focussed on drivers to provide functionality. Indeed, one of the advantages of Windows was that application programmers could avoid the effort of supporting multiple printers and multiple graphics cards. Programs would communicate with Windows and Windows would handle the low-level work of communicating with hardware. Application programmers could focus on the business problem.
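
That driver model is the same idea we now call "programming to the interface." Here is a toy Python sketch of the principle (the names are mine, not any Windows API): the application talks only to an abstract Printer interface, and each driver supplies the device-specific part, so the application never changes when the hardware does.

    from abc import ABC, abstractmethod

    class Printer(ABC):
        """The 'API': all the application ever sees."""
        @abstractmethod
        def print_text(self, text: str) -> None: ...

    class DotMatrixDriver(Printer):
        """Device-specific code, supplied (in the Windows model) by the hardware vendor."""
        def print_text(self, text: str) -> None:
            print(f"[dot matrix] {text}")

    class LaserDriver(Printer):
        def print_text(self, text: str) -> None:
            print(f"[laser] {text}")

    def print_invoice(printer: Printer) -> None:
        """Application code: solves the business problem, knows nothing of the hardware."""
        printer.print_text("Invoice #1001 -- total due: $42.00")

    # Swapping devices means swapping drivers, not rewriting the application.
    print_invoice(DotMatrixDriver())
    print_invoice(LaserDriver())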

The initial concept of Windows was the first step in moving from hardware to an API.

The second step was building a robust API, one that could perform the work necessary. Many applications on PCs and DOS did not use the DOS interface because it was limited, compared to the BIOS and hardware interfaces. Microsoft provided capable interfaces in Windows.

The third step was the evolution of Windows. Windows 3 evolved into Windows 3.1 and Windows for Workgroups (which included networking), Windows 95 (which included a new visual interface), and Windows 98 (which included support for USB devices). Microsoft also developed Windows NT (which provided pre-emptive multitasking) and later Windows 2000 and Windows XP.

With each generation of Windows, less and less of the hardware (and DOS) was available to the application program. Programs had to move to the Windows API (or a Microsoft-supplied framework such as MFC or .NET) to keep functioning.

Through all of these changes, Microsoft provided specifications to hardware vendors who used those specifications to build driver programs for their devices. This ensured a large market of hardware, ranging from computers to disk drives to printers and more.

We in the programming world (well, the Microsoft programming world) think nothing of "writing to the interface". We don't look to the hardware. When faced with a new device, our first reaction is to search for device drivers. This behavior works well for us. The early marketing materials for Windows were correct: application programmers are better suited to solving business problems than working with low-level hardware details. (We *can* deal with low-level hardware, mind you. It's not about ability. It's about efficiency.)

Years from now, historians of IT may recognize that it was Microsoft's efforts that led programmers away from hardware and toward interfaces.