Thursday, October 29, 2015

Tablet sales

A recent article on ZDNet.com blamed lackluster iPad sales on... earlier iPads. This seems wrong.

The author posed this premise: Apple's iOS 9 runs on just about every iPad (it won't run on the very first iPad model, but it runs on all the others), and therefore iPad owners have little incentive to upgrade. iPad owners behave differently from iPhone owners: they hold on to their tablets longer than people hang on to their phones.

The latter part of that premise may be true. I suspect that tablet owners do upgrade less frequently than phone owners, in both the Apple and Android camps. While tablets are typically less expensive than phones, iPads are pricey, and iPad owners may wish to delay an expensive purchase. My belief is that people replace phones more readily than tablets because of the relative sizes of phones and tablets. Tablets, being larger, are viewed as more valuable, and that psychology drives us to replace phones faster than tablets. But that's a pet theory.

Getting back to the ZDNet article: There is a hidden assumption in the author's argument. He assumes that the only people buying iPads are previous iPad owners. In other words, everyone who is going to buy an iPad has already purchased one, and the only sales of iPads will be upgrades, as a customer replaces an old iPad with a new one. (Okay, perhaps not "only". Perhaps "majority". Perhaps it's "most people buying iPads are iPad owners".)

This is a problem for Apple. It means that they have, rather quickly, reached market saturation. It also means that they are not converting people from Android tablets to Apple tablets.

I don't know the numbers for iPad sales and new sales versus upgrades. I don't know the corresponding numbers for Android tablets either.

But if the author's assumption is correct, and the tablet market has become saturated, it could make things difficult for Apple, Google (Alphabet?), and ... Microsoft. Microsoft is trying to get into the tablet market (in hardware and in software). A saturated market would mean little interest in Windows tablets.

Or maybe it means that Microsoft will be forced to offer something new, some service that compels one to look seriously at a Windows tablet.

Sunday, October 25, 2015

Refactoring is almost accepted as necessary

The Agile Development process brought several new ideas to the practice of software development. The most interesting, I think, is the notion of refactoring as an expected activity.

Refactoring is the re-organization of code, making it more readable and eliminating redundancies. It is an activity that serves the development team; it does not directly contribute to the finished product.
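To make the idea concrete, here is a small sketch in Python (the function names and the discount rule are invented for illustration): the duplicated pricing logic is pulled into one well-named function, the code becomes easier to read, and the observable behavior does not change.

    # Before refactoring: the same discount logic appears in two places.
    def invoice_total(items):
        total = 0.0
        for item in items:
            price = item["price"] * item["quantity"]
            if item["quantity"] >= 10:
                price = price * 0.9   # bulk discount
            total += price
        return total

    def order_preview(item):
        price = item["price"] * item["quantity"]
        if item["quantity"] >= 10:
            price = price * 0.9       # the same bulk discount, duplicated
        return price

    # After refactoring: the redundancy is removed and the intent is named.
    # Both functions still produce exactly the same results as before.
    def line_price(item):
        """Price for one line item, applying the bulk discount when it qualifies."""
        price = item["price"] * item["quantity"]
        if item["quantity"] >= 10:
            price *= 0.9
        return price

    def invoice_total(items):
        return sum(line_price(item) for item in items)

    def order_preview(item):
        return line_price(item)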

Earlier methods of software development did not list refactoring as an activity. They made the assumption that once written, the software was of sufficient quality to deliver. (Except for defects which would be detected and corrected in a "test" or "acceptance" phase.)

Agile Development, in accepting refactoring, allows for (and encourages) improvements to the code without changing its behavior (that is, refactoring). It is a humbler approach, one that assumes that members of the development team will learn about the code as they write it and will identify improvements.

This is a powerful concept, and, I believe, a necessary one. Too many projects suffer from poor code quality -- the "technical backlog" or "technical debt" that many developers will mention. The poor code organization slows development, as programmers must cope with fragile and opaque code. Refactoring improves code resilience and improves visibility of important concepts. Refactored code is easier to understand and easier to change, which reduces the development time for future projects.

I suspect that all future development methods will include refactoring as a task. Agile Development, as good as it is, is not the perfect method for all projects. It is suited to projects that are exploratory in nature, projects that do not have a specific delivery date for a specific set of features.

Our next big software development method may be a derivative of Waterfall.

Agile Development (and its predecessor, Extreme Programming) was, in many ways, a rebellion against the bureaucracy and inflexibility of Waterfall. Small teams, especially in start-up companies, adopted it and were rewarded. Now, the larger, established, bureaucratic organizations envy that success. They think that adopting Agile methods will help, but I have yet to see a large organization successfully merge Agile practices into their existing processes. (And large, established, bureaucratic organizations will not convert from Waterfall to pure Agile -- at least not for any length of time.)

Instead of replacing Waterfall with Agile, large organizations will replace Waterfall with an improved version of Waterfall, one that keeps the promise of a specific deliverable on a specific date yet adds other features (one of which will be refactoring). I'm not sure who will develop it; the new process (let's give it a name and call it "Alert Waterfall") may come from IBM or Microsoft or Amazon.com, or some other technical leader.

Yet it will include the notion of refactoring, and with it the implicit declaration that code quality is important -- that it has value. And that will be a victory for developers and managers everywhere.

Thursday, October 22, 2015

Windows 10 means a different future for PCs

Since the beginning, PCs have always been growing. The very first IBM PCs used 16K RAM chips (for a maximum of 64K on the CPU board); these were quickly replaced by PCs with 64K RAM chips (which allowed 256K on the CPU board).

We in the PC world are accustomed to new releases of bigger and better hardware.

It may have started with that simple memory upgrade, but it continued with hard drives (the IBM PC XT); enhanced graphics, higher-capacity floppy disks, and a more capable processor (the IBM PC AT); and an enhanced bus, even better graphics, and even better processors (the IBM PS/2 series).

Improvements were not limited to IBM. Compaq and other manufacturers revised their systems and offered larger hard drives, better processors, and more memory. Every year saw improvements.

When Microsoft became the market leader, it played an active role in the specification of hardware. Microsoft also designed new operating systems for specific minimum platforms: you needed certain hardware to run Windows NT, certain (more capable) hardware for Windows XP, and even more capable hardware for Windows Vista.

Windows 10 may change all of that.

Microsoft's approach to Windows 10 is different from previous versions of Windows. The changes are twofold. First, Windows 10 will see a constant stream of updates instead of the intermittent service packs of previous versions. Second, Windows 10 is "it" for Windows -- there will be no later release, no "Windows 11".

With no Windows 11, people running Windows 10 on their current hardware should be able to keep running it. Windows Vista forced a lot of people to purchase new hardware (which was one of the objections to Windows Vista); Windows 11 won't force that because it won't exist.

Also consider: Microsoft made it possible for just about every computer running Windows 8 or Windows 7 (or possibly Windows Vista) to upgrade to Windows 10. Thus, Windows 10 requires no more hardware than those earlier versions.

What may be happening is that Microsoft has determined that Windows is as big as it is going to be.

This makes sense for desktop PCs and for servers running Windows.

Most servers running Windows will be in the cloud. (They may not be now, but they will be soon.) Cloud-based servers don't need to be big. With the ability to "spin up" new instances of a server, an overworked server can be given another instance to handle the load. A system can provide more capacity with more servers. It is not necessary to make the server bigger.

Desktop PCs, either in the office or at home, run a lot of applications, and these applications (in Microsoft's plan) are moving to the cloud. You won't need a faster machine to run the new version of Microsoft Word -- it runs in the cloud and all you need is a browser.

It may be that Microsoft thinks that PCs have gotten as powerful as they need to get. This is perhaps not an unreasonable assumption. PCs are powerful and can handle every task we ask of them.

As we shift our computing from PCs and discrete servers to the cloud, we eliminate the need for improvements to PCs and discrete servers. The long line of PC growth stops. Instead, growth will occur in the cloud.

Which doesn't mean that PCs will be "frozen in time", forever unchanging. It means that PC *growth* will stop, or at least slow to a glacial pace. This has already happened with CPU clock frequencies and bus widths. Today's CPUs are about as fast (in terms of clock speed) as CPUs from 2009. Today's CPUs use a 64-bit data path, which hasn't changed since 2009. PCs will still change, but slowly: desktop PCs will become physically smaller, laptops will become thinner and lighter, and battery life will increase.

PCs, as we know them today, will stay as we know them today.

Sunday, October 18, 2015

More virtual, less machine

A virtual machine, in the end, is really an elaborate game of "let's pretend". The host system (often called a hypervisor) persuades a guest operating system that a physical machine exists, and the operating system works "as normal", driving video cards that do not really exist and responding to timer interrupts created by the hypervisor.

Our initial use of virtual machines was to duplicate our physical machines. Yet in the past decade, we have learned about the advantages of virtual machines, including the ability to create (and destroy) virtual machines on demand. These abilities have changed our ideas about computers.

Physical computers (that is, the real computers one can touch) often serve multiple purposes. A desktop PC provides e-mail, word processing, spreadsheets, photo editing, and a bunch of other services.

Virtual computers tend to be specialized. We often build virtual machines as single-purpose servers: web servers, database servers, message queue servers, ... you get the idea.

Our operating systems and system configurations have been designed around the desktop computer, the one serving multiple purposes. Thus, the operating system has to provide all possible services, including those that might never be used.

But with specialized virtual servers, perhaps we can benefit from a different approach. Perhaps we can use a specialized operating system, one that includes only the features we need for our application. For example, a web server needs an operating system and the web server software, and possibly some custom scripts or programs to assist the web server -- but that's it. It doesn't need to worry about video cards or printing. It doesn't need to worry about programmers and their IDEs, and it doesn't need to have a special debug mode for the processor.
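As a rough sketch of that idea -- with entirely hypothetical component names, not any real packaging tool -- a single-purpose web server image might be described by listing only what the server needs, and nothing else:

    # A hypothetical description of a single-purpose web server image.
    # The component names are invented for illustration; the point is what
    # is left out (video drivers, printing, IDE support, debug modes).
    WEB_SERVER_IMAGE = {
        "kernel": "minimal-kernel",        # just enough to boot and schedule
        "network": ["tcp-ip", "tls"],      # the server must talk to clients
        "services": ["web-server"],        # the one purpose of this machine
        "scripts": ["deploy-hooks"],       # the custom glue mentioned above
        # deliberately absent: "video", "printing", "ide-support", "debug-mode"
    }

    def build_image(spec):
        """Pretend-build: list what would be installed, and nothing more."""
        return [spec["kernel"], *spec["network"], *spec["services"], *spec["scripts"]]

    if __name__ == "__main__":
        print(build_image(WEB_SERVER_IMAGE))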

Message queue servers are also specialized, and if they keep everything in memory they need little in the way of file systems or file input/output. (They may need just enough to bootstrap the operating system.)

All of our specialized servers -- and maybe some specialized desktop or laptop PCs -- could get along with a specialized operating system, one that uses the components of a "real" operating system, and just enough of those components to get the job done.

We could change policy management on servers. Our current arrangement sees each server as a little stand-alone unit that must receive policies and updates to those policies. That means that the operating system must be able to receive the policy updates. But we could change that. We could, upon instantiation of the virtual server, build in the policies that we desire. If the policies change, instead of sending an update, we create a new virtual instance of our server with the new policies. Think of it as "server management meets immutable objects".
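A minimal sketch of that "immutable server" idea, in Python with invented names: the policies are fixed when the server is instantiated, and a policy change produces a replacement server rather than an update to a running one.

    from dataclasses import dataclass, replace

    # "Server management meets immutable objects" (names and policies invented).
    @dataclass(frozen=True)
    class VirtualServer:
        image: str
        policies: tuple   # e.g. ("tls-1.2-only", "password-rotation-90d")

    def instantiate(image, policies):
        # Policies are baked in at creation time.
        return VirtualServer(image=image, policies=tuple(policies))

    def apply_policy_change(old_server, new_policies):
        # Do not push an update to the old server; stand up a replacement.
        return replace(old_server, policies=tuple(new_policies))

    web_v1 = instantiate("web-server-2015-10", ["tls-1.2-only"])
    web_v2 = apply_policy_change(web_v1, ["tls-1.2-only", "password-rotation-90d"])
    # web_v1 is now a candidate for disposal; web_v2 takes the new traffic.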

The beauty of virtual servers is not that they are cheaper to run, it is that we can throw them away and create new ones on demand.

Languages become legacy languages because of their applications

Programming languages have brief lifespans and a limited set of destinies.

COBOL, invented in 1959, was considered passé in the microcomputer age (1977 to 1981, prior to the IBM PC).

Fortran, from the same era as COBOL, saw grudging acceptance and escaped the revulsion given COBOL, possibly because COBOL was specific to accounting and business applications and Fortran was useful for scientific and mathematical applications.

BASIC, created in the mid-1960s, was popular through the microcomputer and PC-DOS ages but did not transition to Microsoft Windows. Its eponymous successor, Visual Basic, was a very different language, adored in the Windows era but reviled in the .NET era.

Programming languages have one of exactly two fates: despised as "legacy" or forgotten. Yet it is not the language itself (its style, syntax, or constructs) that defines it as a legacy language -- it is the applications written in the language.

If a language doesn't become popular, it is forgotten. The languages A-0, A-1, B-0, Autocode, Flow-matic, and BACAIC are among dozens of languages that have been abandoned.

If a language does become popular, then we develop applications -- useful, necessary applications -- in it. Those useful, necessary applications eventually become "legacy" applications. Once enough of the applications written in a language are legacy applications, the language becomes a "legacy" language. COBOL suffered this fate. We developed business systems in it, and those systems are too important to abandon yet too complex to convert to another language, so COBOL lives on. But we don't build new systems in COBOL, and later generations of programmers don't like COBOL.

The true mark of legacy languages is the open disparagement of them. When a sufficient number of developers refuse to work with them (the languages), then they are legacy languages.

Java and C# are approaching the boundary of "legacy". They have been around long enough for enough people to have written enough useful, necessary applications. These applications are now becoming legacy applications: large, difficult to understand, and written in the older versions of the language. It is these applications that will doom C# and Java to legacy status.

I think we will soon see developers declining to learn Java and C#, focussing instead on Python, Ruby, Swift, Haskell, or Rust.

Wednesday, October 14, 2015

The inspiration of programming languages

Avdi posted a blog about programming languages, bemoaning the lack of truly inspirational changes in languages. He says:
... most programming languages I’ve worked with professionally were born from much less ambitious visions. Usually, the vision amounted to “help programmers serve the computer more easily”, or sometimes “be like $older_language, only a little less painful”.
He is looking, instead, for:
systems that flowed out of a distinct philosophy and the intent to change the world
Which got me thinking: Which languages are truly innovative, and which are merely derivative, merely improvements on a previous language? They cannot all be derivatives, because there must have been some initial languages to start the process.

What inspires languages?

Some languages were built for portability. The designers wanted languages to run on multiple platforms. Many languages run on multiple platforms, but few were designed for that purpose. The languages for portability are:
  • Algol (a language designed for use in different cultural contexts)
  • NELIAC (a version of Algol, designed for machine-independent operation)
  • COBOL (the name comes from "Common Business Oriented Language")
  • Ada (specified as the standard for Department of Defense systems)
  • Java ("write once, run everywhere")
  • JavaScript (portable across web browsers)
Other languages were designed for proprietary use:
  • C# (a Java-like language specific to Microsoft)
  • Swift (a language specific to Apple)
  • Visual Basic and VB.NET (specific to Microsoft Windows)
  • SAS (proprietary to SAS Corporation)
  • VBScript (proprietary to Microsoft's Internet Explorer)
A few languages were designed to meet the abilities of new technologies:
  • BASIC (useful for timesharing; COBOL and FORTRAN were not)
  • JavaScript (useful for browsers)
  • Visual Basic (needed for programming Windows)
  • Pascal (needed to implement structured programming)
  • PHP (designed for building web pages)
  • JOSS (useful for timesharing)
BASIC and JOSS may have been developed simultaneously, and perhaps one influenced the other. (There are a number of similarities.) I'm considering them independent projects.

All of these are good reasons to build languages. Now let's look at the list of "a better version of X", where people designed languages to improve an existing language:
  • Assembly language (better than machine coding)
  • Fortran I (a better assembly language)
  • Fortran II (a better Fortran I)
  • Fortran IV (a better Fortran II -- Fortran III never escaped the development lab)
  • S (a better Fortran)
  • R (a better S)
  • Matlab (a better Fortran)
  • RPG (a better version of assembly language, initially designed to generate report programs)
  • FOCAL (a better JOSS)
  • BASIC (a better Fortran, suitable for timesharing)
  • C (a better version of 'B')
  • B (a better version of 'BCPL')
  • BCPL (a better version of 'CPL')
  • C++ (a better version of C)
  • Visual C++ (a version of C++ tuned for Windows, and therefore 'better' for Microsoft)
  • Delphi (a better version of Visual C++ and Visual Basic)
  • Visual Basic (a version of BASIC tuned for Windows)
  • Pascal (a better version of Algol)
  • Modula (a better version of Pascal)
  • Modula 2 (a better version of Modula)
  • Perl (a better version of AWK)
  • Python (a better version of ABC)
  • ISWIM (a better Algol)
  • ML (a better ISWIM)
  • OCaml (a better ML)
  • dBase II (a better RETRIEVE)
  • dBase III (a better dBase II)
  • dBase IV (a better dBase III)
  • Simula (a better version of Algol)
  • Smalltalk-71 (a better version of Simula)
  • Smalltalk-80 (a better version of Smalltalk-71)
  • Objective-C (a combination of C and Smalltalk)
  • Go (a better version of C)
Which is a fairly impressive list. It is also a revealing list. It tells us about our development of programming languages. (Note: the term "better" meant different things to the designers of different languages. "Perl is a 'better' AWK" does not (necessarily) use the same connotation as "Go is a 'better' C".)

We develop programming languages much like we develop programs: incrementally. One language inspires another. Very few languages are born fully-formed, and very few bring forth new programming paradigms.

There are a few languages that are truly innovative. Here are my nominees:

Assembly language A better form of machine coding, but different enough in that it uses symbols, not numeric codes. That change makes it a language.

COBOL The first high-level language as we think of them today, with a compiler and keywords and syntax that does not depend on column position.

Algol The original "algorithmic language".

LISP A language that is not parsed but read; the syntax is that of the already-parsed tree.

Forth A language that uses 'words' to perform operations and lets one easily define new 'words'.

Eiffel A language that used object-oriented techniques and introduced design-by-contract, a technique that is used by very few languages.

Brainfuck A terse language that is almost impossible to read and sees little use outside of the amusement of programmers.

These are, in my opinion, the ur-languages of programming. Everything else is derived, one way or another, from these.

It is not necessary to change the world with every new programming language; we can make improvements by building on what exists. Derived languages are not the mashing of different technologies as shown in "Mad Max" and other dystopian movies. (Well, not usually.) They can be useful and they can be elegant.

Thursday, October 8, 2015

From multiprogramming to timesharing

Multiprogramming boosted the efficiency of computers, yet it was timesharing that improved the user experience.

Multiprogramming allowed multiple programs to run at the same time. Prior to multiprogramming, a computer could run one and only one program at a time. (Very similar to PC-DOS.) But multiprogramming was focussed on CPU utilization and not on the user experience.

To be fair, there was little in the way of "user experience". Users typed their programs on punch cards, placed the deck in a drawer, and waited for the system operator to transfer the deck to the card reader for execution. The results would be delivered in the form of a printout, and users often had to wait hours for the report.

Timesharing was a big boost for the user experience. It built on multiprogramming, running multiple programs at the same time. Yet it also changed the paradigm. Multiprogramming let a program run until an input-output operation, and then switched control to another program while the first waited for its I/O operation to complete. It was an elegant way of keeping the CPU busy, and therefore improving utilization rates.

With timesharing, users interacted with the computer in real time. Instead of punch cards and printouts, they typed on terminals and got their responses on those same terminals. That change required a more sophisticated approach to the sharing of resources. It wouldn't do to allow a single program to monopolize the CPU for minutes (or even a single minute) which could occur with multiprogramming. Instead, the operating system had to frequently yank control from one program and give it to another, allowing each program to run a little bit in each "time slice".
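A toy simulation of what a time slice buys you -- in Python, with invented job names, and in no way a real scheduler: the system takes the processor back after a fixed quantum, so no single program can monopolize it.

    from collections import deque

    # Round-robin time slicing: each "program" is just a count of work units.
    def run_timeshared(programs, quantum=3):
        ready = deque(programs.items())          # (name, remaining work units)
        while ready:
            name, remaining = ready.popleft()
            slice_used = min(quantum, remaining)
            print(f"{name}: ran {slice_used} units")
            remaining -= slice_used
            if remaining > 0:
                ready.append((name, remaining))  # back of the line; wait your turn

    run_timeshared({"editor": 5, "compiler": 10, "report": 4})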

Multiprogramming focussed inwards, on the efficiency of the system. Timesharing focussed outwards, on the user experience.

In the PC world, Microsoft focussed on the user experience with early versions of Windows. Windows 1.0, Windows 2, Windows 3.0, and Windows 95 all made great strides in the user experience. But other versions of Windows focussed not on the user experience but on the internals: security, user accounts, group policies, and centralized control. Windows NT, Windows 2000, and Windows XP all contained enhancements for the enterprise but not for the individual user.

Apple has maintained its focus on the user, improving (or at least changing) the user experience with each release of Mac OS X. This is what makes Apple successful in the consumer market.

Microsoft focussed on the enterprise -- and has had success with enterprises. But enterprises don't want cutting-edge user interfaces, or GUI changes (improvements or otherwise) every 18 months. They want stability. Which is why Microsoft has maintained its dominance in the enterprise market.

Yet nothing is constant. Apple is looking to make inroads into the enterprise market. Microsoft wants to get into the consumer market. Google is looking to expand into both markets. All are making changes to the user interface and to the internals.

What we lose in this tussle for dominance is stability. Be prepared for changes to the user interface, to update mechanisms, and to the basic technology.

Sunday, October 4, 2015

Amazon drops Chromecast and Apple TV

Amazon.com announced that it would stop selling Chromecast and Apple TV products, a move that has raised a few eyebrows. Some have talked about anti-trust actions.

I'm not surprised by Amazon.com's move, and yet I am surprised.

Perhaps an explanation is in order.

The old order saw Microsoft as the technology leader, setting the direction for the use of computers at home and in the office. That changed with Apple's introduction of the iPhone and, later, iPads and enhanced MacBooks. It also changed with Amazon.com's introduction of cloud computing services. Google's rise in search technologies and its introduction of Android phones and tablets was also part of the change.

The new order sees multiple technology companies and no clear leader. As each player moves to improve its position, it, from time to time, blocks other players from working with its technologies. Apple MacBooks don't run Windows applications, and Apple iPhones don't run Android apps. Android tablets don't run iOS apps. Movies purchased through Apple's iTunes won't play on Amazon.com Kindles (and you cannot even move them).

The big players are building walled gardens, locking in user data (documents, music, movies, etc.).

So it's no surprise that Amazon.com would look to block devices that don't serve its purposes, and in fact serve other walled gardens.

What's surprising to me is the clumsiness of Amazon.com's announcement. The move is a bold one, obvious in its purpose. Microsoft, Google, and Apple have been more subtle in their moves.

What's also surprising is Amazon.com's attitude. My reading of the press and blog entries is one of perceived arrogance.

Amazon.com is a successful company. They are well-respected for their sales platform (web site and inventory) and for their web services offerings. But they enjoy little in the way of customer loyalty, especially on the sales side. Arrogance is something they cannot afford.

Come to think of it, their sales organization has taken a few hits of late, mostly with employee relations. This latest incident will do nothing to win them new friends -- or customers.

It may not cost them customers, at least in the short term. But it is a strategy that I would recommend they reconsider.

Thursday, October 1, 2015

The birth of multiprogramming

Early computers ran one program at a time. They were also slow. This wasn't a problem. At first.

Early computers ran in "batch mode" - a non-interactive mode in which input often arrived on punch cards or magnetic tape, instead of coming from people typing on terminals (much less on their own smaller computers, as we do today).

Companies had programs for each task: a program to update inventory, a program to update sales information, a program to print personnel reports, etc. Each task was a "job", consisting of a program, its input data, and its output data.

The advantage of batch mode processing is that the job runs as an independent unit and it can be scheduled. Your collection of programs could be planned, as each used specific data, generated specific data, and ran for a (usually) predictable amount of time. Companies would run their daily jobs every day, their weekly jobs perhaps every Saturday, their monthly jobs at the end of the month (or more often during the first days of the next month), and their annual jobs at the end of the year.

If your programs all ran successfully, and within their usual timeframes, you were happy. The problem for companies was that they tended to grow, increasing the number of customers they supported and the number of sales they created. Those increases meant an increase in the size of data for processing, and that meant increased processing time to run their computer jobs.

If you have spare computing time, you simply run jobs longer. But what happens when you don't have spare processing time? What happens when your daily jobs take more than twenty-four hours to run?

In today's world of personal computers and cloud processing, we simply order up some additional computers. That was not possible in the early days of computing: computers were expensive.

Instead of buying (or leasing) more computers, we looked for ways to make computers more efficient. One of the first methods was called "multiprogramming" and it allowed multiple programs to run at the same time.

Successful multiprogramming had a number of challenges: loading multiple programs into memory (at different locations), preventing one program from writing to memory allocated to another, and sharing the processor among the simultaneously executing programs. While these are all tasks that any modern operating system handles, in its day it was a significant change.

It was also successful. It took the time spent waiting for input/output operations and re-allocated it to processing. The result was an increase in available processing time, which meant that a company could run more programs without buying a larger (and more expensive) computer.

Multiprogramming shared the processor by using what we call "cooperative multitasking". A program ran until it requested an input/output operation, at which point the operating system initiated the operation and switched the processor to a different program. The input/output operation was handled by a separate device (a card reader or tape reader, or maybe a disk drive) so it could continue without the main processor. This freed the main processor to do some other work in the form of another program.
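Here is a toy model of that cooperative hand-off, sketched in Python with generators standing in for programs (the job names are invented, and a plain `yield` stands in for an I/O request):

    # A toy model of cooperative switching (not a real operating system).
    # Each program runs until it asks for I/O, and only then does the
    # "operating system" hand the processor to another program.
    def payroll():
        print("payroll: compute wages")
        yield "read employee record"          # I/O request; give up the CPU
        print("payroll: print paychecks")

    def inventory():
        print("inventory: tally stock")
        yield "read warehouse tape"           # I/O request; give up the CPU
        print("inventory: write updated totals")

    def multiprogram(jobs):
        active = [job() for job in jobs]
        while active:
            job = active.pop(0)
            try:
                io_request = next(job)        # run until the next I/O request
                print(f"  (os: starting '{io_request}', switching programs)")
                active.append(job)            # resume after its I/O "completes"
            except StopIteration:
                pass                          # this job has finished

    multiprogram([payroll, inventory])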

Windows 95 used a similar technique to switch between programs.

Later operating systems used "pre-emptive task switching", giving programs small amounts of processing time and then suspending one program and activating another. This was the big change for Windows NT.

Multiprogramming was driven by cost reduction (or cost avoidance) and focussed on internal operations. It made computing more efficient in the sense that one got "more computer" for the same amount of hardware. (The operating system had to be more sophisticated, of course.) But it did nothing for the user; it made no changes to the user experience. One still had to schedule jobs to run programs with specific input and output data.

Cost avoidance is one driver for IT. Its focus on internal operations is appropriate. But there are other drivers, and they require other techniques.