Tuesday, December 30, 2014

Agile is not for everyone

Agile development is a hot topic for many project managers. It is the latest management fad, following total cost of ownership (TCO) and total quality management (TQM). (Remember those?)

So here is a little bit of heresy: Agile development methods are not for every project.

To use agile methods effectively, one must understand what they offer -- and what they don't.

Agile methods make one guarantee: that your system will always work, for the functionality that has been built. Agile methods (stakeholder involvement, short development cycles, automated testing) ensure that the features you build will work as you expect. Even as you add new features, your automated tests ensure that old features still work. Thus, you can always send the most recent version of your system to your customers.
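
To make that concrete, here is a minimal sketch of the kind of automated test that provides this guarantee. The shopping-cart class and its test are hypothetical; the point is that the test runs with every build, so a change that breaks an existing feature is caught immediately.

    # A hypothetical feature and its automated test (Python's unittest module).
    # Because the test runs on every build, a later change that breaks the
    # total() calculation is reported before the system reaches a customer.
    import unittest

    class ShoppingCart:
        def __init__(self):
            self.items = []

        def add(self, name, price):
            self.items.append((name, price))

        def total(self):
            return sum(price for _name, price in self.items)

    class TestShoppingCart(unittest.TestCase):
        def test_total_sums_item_prices(self):
            cart = ShoppingCart()
            cart.add("widget", 3.50)
            cart.add("gadget", 6.50)
            self.assertEqual(cart.total(), 10.00)

    if __name__ == "__main__":
        unittest.main()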

But agile methods don't make every promise. Specifically, they don't promise that all of your desired features will be available (and working) on a specific date. You may have a list of one hundred features; agile lets you implement those features in a desired order but does not guarantee that they will all be completed by an arbitrary date. The guarantee is only that the features implemented by that date will work. (And you cannot demand that all features be implemented by that date -- that's not agile development.)

Agile does let you make projections about progress, once you have experience with a team, the technology, and a set of features for a system. But these projections must be based on experience, not on "gut feel". Also, the projections are just that: projections. They are estimates and not guarantees.
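
As a rough illustration (the numbers here are invented), a velocity-based projection might look like this:

    # A minimal sketch of a velocity-based projection. The figures are invented;
    # the result is an estimate drawn from the team's own history, not a guarantee.
    completed_points = [8, 10, 9, 11]        # points finished in the last four iterations
    velocity = sum(completed_points) / len(completed_points)   # 9.5 points per iteration

    remaining_points = 100
    iterations_needed = remaining_points / velocity             # about 10.5 iterations
    print("Projected finish: %.1f iterations (about %.0f weeks)"
          % (iterations_needed, iterations_needed * 2))         # two-week iterations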

Certain businesses want to commit to specific features on specific dates, perhaps to deliver a system to a customer. If that is your business, then you should look carefully at agile methods and understand what they can provide. It may be that the older "waterfall" methods, which do promise a specific set of features on a specific date, are a better match.

Friday, December 26, 2014

Google, Oracle, and Java

Apple has a cozy walled garden for its technology: Apple devices running Apple operating systems and Apple-approved apps written in Apple-controlled languages (Objective-C and now Swift).

Microsoft is building a walled garden for its technology: commodity devices with standards set by Microsoft, running Microsoft operating systems and apps written in Microsoft-controlled languages (C#, F#, and possibly VB.NET). Microsoft does not have the same level of control over applications as Apple; desktop PCs allow anyone with administrator privileges to install any app from any source.

Google has a walled garden for its technology (Android), but its control is less than that of Apple or Microsoft. Android runs on commodity hardware, with standards set by Google. Almost anyone can install apps on their Google phone or tablet. And interestingly, apps on the Android platform run in Java, a language controlled by... Oracle.

This last aspect must be worrying to Google. Oracle and Google have a less than chummy relationship, with lawsuits about the Java API. Basing a walled garden on someone else's technology is risky.

What to do? If I were Google, I would consider changing the language for the Android platform. That's not a small task, but the benefits may outweigh the costs. Certainly their current apps would have to be re-written for the new language. A run-time engine would have to be included in Android. The biggest task would be convincing the third-party developers to change their development process and their existing apps. (Some apps may never be converted.)

Which language to pick? That's an easy call. It should be a language that Google controls: Dart or Go. Dart is designed as a replacement for JavaScript, yet could be used for general applications. Go is, in my opinion, the better choice. It *is* designed for general applications, and includes support for concurrency.

A third candidate is Python. Google supports Python in their App Engine cloud platform, so they have some familiarity with it. No single company controls it (Java was controlled by Sun prior to Oracle), so it is unlikely to be purchased out from under Google.

Java was a good choice for launching the Android platform. I think the languages Go and Python are better choices for Android now.

Let's see what Google thinks.

Sunday, December 21, 2014

Technology fragmentation means smaller, simpler systems

In the past, IT shops standardized their technologies, often around the vendor deemed the industry leader. In the 1960s and 1970s, that leader was IBM. They offered products for all of your computing needs, from computers to terminals to printers and even punch cards.

In the 1980s and 1990s, it was Microsoft. They offered products for all of your computing needs, from operating systems to compilers to office suites to user management. (Microsoft offered little in hardware, but then hardware was considered a commodity and available from multiple sources.)

Today, things are not so simple. No one vendor provides products and services for "all of your computing needs". The major vendors are Microsoft, Apple, Amazon.com, Google, and a few others.

Microsoft has a line of offerings, but it is weak in the mobile area. Sales of Microsoft tablets, Microsoft phones, and Windows Mobile are anemic. Anyone who wants to offer services in the mobile market must deal with either Apple or Google (and preferably both, as neither has a clear lead).

Apple has a line of offerings, but is weak in the enterprise area. They offer tools for development of apps to run on their devices but little support for server-side development. Anyone who wants to offer services that use server-side applications must look to Microsoft or Google.

Amazon.com offers cloud services and consumer devices (the Kindle) but is weak on development tools and transaction databases. Google offers cloud services and consumer devices as well, but lacks the enterprise-level administration tools.

Complicating matters is the plethora of open-source tools, many of which are not tied to a specific vendor. The Apache web server, the Perl and Python languages, several NoSQL databases, and development tools are available, but without the blessing (and support) of the major vendors.

Development teams must now cope with the following:

Browsers: Internet Explorer, Chrome, Firefox, Safari, and possibly Opera
Desktop operating systems: Windows (versions 7, 8, and 10), MacOS X, Linux (Ubuntu, SuSE, and Red Hat)
Platforms: desktop, tablet, phone
Mobile operating systems: iOS, Android, and possibly Blackberry and Windows
Database technologies: SQL and NoSQL
HTTP servers: Apache, NGINX, and IIS
Programming languages: C#, Java, Swift, Python, Ruby, and maybe C++ or C
Cloud platforms: Amazon.com AWS, Microsoft Azure, Google cloud
Cloud paradigms: public cloud, private cloud, or hybrid

I find this an impressive list. You may have some factors of your own to add. (Then the list is even more impressive.)

This fragmentation of technology affects your business. I can think of several areas of concern.

The technology for your systems: You have to decide which technologies to use. I suppose you could pick all of them, using one set of technology for one project and another set of technology for another project. That may be useful in the very short term, but may lead to an inconsistent product line. For example, one product may run on Android phones and tablets (only), and another may run on Apple phones and tablets (only).

Talent for that technology: Staffing teams is an on-going effort. If your project uses HTML 5, CSS, JavaScript, and Python with a NoSQL database, you will need developers with that set of skills. But developers don't know everything (even though some may claim that they do) and you may find few with that exact set of technology. Are you willing to hire someone without one of your desired skills and let that person learn it?

Mergers and acquisitions: Combining systems may be tricky. If you acquire a small firm that uses native Android apps and a C#/.NET server-side system, how do you consolidate that system into your HTML, CSS, JavaScript, and Python shop? Or do you maintain two systems with distinct technologies? (Refer to the "inconsistent product line" concern, above.)

There are no simple answers to these questions. Some shops will standardize on a set of technologies, combining offerings from multiple vendors. Some will standardize on a single vendor, hoping that vendor emerges as the industry leader that sets the standard for the market. Many will probably have heated arguments about their selections, and some individuals may leave, staying more loyal to the technology than the employer.

My advice is to keep informed, set standards when necessary, and keep systems small and simple. Position your technology to shift with changes in the industry. (For example, native apps on Apple devices will shift from Objective-C to Swift.) If your systems are large and complicated, redesign them to be smaller and simpler.

Build and maintain systems with the idea that they will change.

They probably will. Sooner than you think.

Tuesday, December 16, 2014

Goodbye, Dr. Dobbs

Today the Dr. Dobbs website, successor to the august publication of the same name, announced that it was going out of business.

It is an announcement that causes me some sadness. Dr. Dobbs was the last of the "originals", the publications of the Elder Days before the web, before the internet, before Windows, and even before the IBM PC.

It was a very different time. Computers were slow, low-powered, expensive, and rare. The personal computers at the time used processors that ran at 1 MHz or 2 MHz, had (perhaps) 64KB of RAM, and often stored data on cassette tapes. A typical computer system cost anywhere from $1000 to $4000 (in 1980 dollars!).

Magazines like Dr. Dobbs were the lifeblood of the industry. They provided news, product announcements, reviews, and articles on theory and on practice. In a world without the internet and web sites, magazines were the way to learn about these strange new devices called computers.

Today, computers are fast, powerful, cheap, and common. So common and so cheap that people leave working computers in the trash. So fast and powerful that we no longer care (much) about the processor speed or memory size.

Not only are computers plentiful, but information about computers is plentiful. Various web sites provide news, opinion, and technical information. We don't need a single "go to" site for all of that information; Google can find it for us.

So, goodbye to Dr. Dobbs. You served us well. You helped us through a difficult time, and shared information with many. You were one of the factors in the success of those early days, and therefore the success of the PC industry today. You will be missed.

Sunday, December 14, 2014

Software doesn't rot - or does it?

Is it possible for software to rot? At first glance, it seems impossible. Software is one of the more intangible constructs of man. It is not exposed to the elements. It does not rust or decompose. Software, in its executable form, is nothing more than digitally-stored bits. Even its source form is digitally-stored bits.

The hardware on which those bits are stored may rot. Magnetic tapes and disks lose their field impressions, and their flexibility, over time. Flash memory used in USB thumb drives lasts a long time, but it too can succumb to physical forces. Even punch cards can burn, become soggy, and get eaten by insects. But software itself, being nothing more than ideas, lasts forever.

Sort of.

Software doesn't rot, rust, decompose, or change. What changes are the items that surround the software, the items that interact with it. These change over time. The changes are usually small and gradual. When those items change enough to affect the software, we notice.

What are these items?

User expectations: Another intangible, changing relative to the intangible software. Users, along with product owners, project managers, support personnel, and others, all have expectations of software. Those expectations are formed not just from the software and its operating environment, but also from other software performing similar (or different) tasks.

The discovery of defects: Software is complex, and most software has some number of defects. We attempt to build software free of defects, but the complexity of computer systems (and the business rules behind computer systems) makes that difficult. Often, defects are built into the system and remain unnoticed for weeks, months, or even years. The discovery of a defect is a bit of "rot".

The business environment: New business opportunities, new markets, and new product lines can cause new expectations of software. Changes in law, from new regulations to a different rate for sales tax, can affect software.

Hardware: Software tends to outlive hardware. Moving from one computer to a later model can expose defects. Games for the original IBM PC failed (or worked poorly) on the IBM PC AT with its faster processor. Microsoft Xenix ran on the Intel 80286 but not the 80386 because it used reserved flags in the processor status word. (The 80286 allowed the use of reserved bits; the 80386 enforced the rules.) The install program for Wordstar, having worked for years, would fail on PCs with more than 512K of RAM (this was in the DOS days), a lurking defect exposed by a change in hardware.

The operating system: New versions of an operating system can fix defects in the old version. If application software took advantage of that defect (perhaps to improve performance) then the software fails on the later version. Or a new version implies new requirements, such as the requirements for branding your software "Windows-95 ready". (Microsoft imposed several requirements for Windows 95 and many applications had to be adjusted to meet these rules.)

The programming language: Microsoft made numerous changes to Visual Basic. Many new versions broke the code from previous versions, causing developers to scramble and devote time to incorporating the changes. The C++ and C# languages have been stable, but even they have had changes.

Paradigm shifts: We saw large changes when moving from DOS to Windows. We saw large changes when moving from desktop to web. We're seeing large changes with the move from web to mobile. We want software to operate in the new paradigm, but new paradigms are quite different from old ones and the software must change (often drastically).

The availability of talent: Languages and programming technologies rise and fall in popularity. COBOL was once the standard language of business applications; today there are few who know it and fewer who teach it. C++ is following a similar path, and the rise of NoSQL databases means that SQL will see a decline in available talent. You can stay with the technology, but getting people to work on it will become harder -- and more expensive.

All of these factors surround software (with the exception of lurking defects). The software doesn't change, but these things do, and the software moves "out of alignment" with the needs and expectations of the business. Once out of alignment, we consider the software to be "defective" or "buggy" or "legacy".

Our perception is that software "gets out of alignment" or "rots". It's not quite true -- the software has not changed. The surrounding elements have changed, including our expectations, and we perceive the changes as "software rot".

So, yes, software can rot, in the sense that it does not keep up with our expectations.

Tuesday, December 9, 2014

Open source .NET is less special and more welcoming

The Microsoft "toolchain" (the CLR, the .NET framework libraries, and the C# compiler) was special. It was Microsoft's property, guarded jealously and subject to Microsoft's whims. It was also the premier platform and set of tools for development in Windows and for Windows. If you were serious about application development (for Windows), you used the Microsoft tools.

There were other toolchains. The Java set includes the JVM and the Java compiler. The major scripting languages (Perl, Python, Ruby, and PHP) each have their own runtime engines and class libraries. None were considered special in the way that the Microsoft toolchain was special. (The other toolchains were -- and still are -- considered good, and some people considered them superior to the Microsoft toolchain, but even most non-Microsoft fans would admit that the Microsoft toolchain was of high quality.)

Microsoft's announcement to open the .NET framework and the C# compiler changes that status. Microsoft wants to expand .NET to the Linux and MacOS platforms. They want to expand their community of developers. All reasonable goals; Microsoft clearly sees opportunities beyond the Windows platform.

What interests me is my reaction to the announcement. For me, opening the .NET framework and moving it to other platforms reduces the "specialness" of .NET. The Microsoft toolchain becomes just another toolchain. It is no longer the acknowledged leader for development on Windows.

The demotion of the Microsoft toolchain is accompanied by a promotion of the Java toolchain. Before, the Microsoft toolchain was the "proper" way to develop applications for Windows. Now, it is merely one way. Before, the Java toolchain was the "rebel" way to develop applications for Windows. Now, it is on par with the Microsoft toolchain.

I feel more comfortable developing a Java application to run on Windows. I also feel comfortable developing an application in .NET to run on Windows or Linux. (Yes, I know that the Linux version of .NET is not quite ready. But I'm comfortable with the idea.)

I think other folks will be comfortable with the idea. Comfortable enough to start experimenting with the .NET framework as people have experimented with the Java toolchain. Folks have created new languages to run under the JVM. (Clojure, Scala, and Groovy are popular ones, and there are lots of obscure ones.) I suspect that people avoided experimenting with the Microsoft toolchain because they feared changes or repercussions from Microsoft. Perhaps we will see experiments with the CLR and the .NET framework. (Perhaps new versions of the IronPython and IronRuby projects, too.)

By opening their toolchain, Microsoft has made it more accessible, technically and psychologically. They have reduced the barriers to innovation. I'm looking forward to the results.

Sunday, December 7, 2014

The convergence of web and mobile

It's easy to think of web applications and mobile applications as two distinct entities (or two distinct collections of entities). Web applications are hosted on web servers in data centers and they "run" in a browser. Mobile apps are installed on smartphones and tablets and run on those devices. Web applications are large, complex things and mobile apps are small and easy to use. Web applications are general, running on all major web browsers, and mobile apps are built for specific platforms (iOS, Android, etc.).

Yet the two are converging. Mobile apps are becoming more like web applications, and web applications are becoming more like mobile apps.

Let's start with mobile apps. The "standard model" for an app is a small, lightweight user interface that talks to services on back-end servers. In that sense, a mobile app is very much like a web application, which is a web page that sends requests to back-end servers.

The "standard model" for mobile apps is also that the app is installed once on your device and there it sits, ready for you to use. That's not really true, at least not with many of the apps I use.

The app does sit on my phone, but apps are upgraded frequently. The "good old days" of PC applications (or mainframe applications) saw upgrades to applications perhaps once a year, sometimes less often. Today's mobile apps are updated monthly, and sometimes more frequently. It seems that Twitter sends updates every week!

The upgrades are usually automatic and silent, requiring no intervention from the operator. One can tell (sometimes) when an app has been upgraded, because it displays a small "what's new" dialog. A few apps are upgraded more often than I actually use them. I know this because I always see the "what's new" dialog when I run the app.

With upgrade frequencies weekly and approaching daily, perhaps it doesn't make sense to install the app on the phone. That is, perhaps it makes more sense to always download the app from the server. In this way, a mobile app is becoming more like a web page, which is always downloaded from the server. (I'm ignoring cached copies.)

Looking at web applications, we can see changes that make them more like mobile apps. The "standard model" for a web application is that it lives in the browser and has access to only the back-end servers, with no ability to manipulate data or devices on the host computer. Yet this is no longer true: web apps can upload files, send e-mail (or tie in to local e-mail clients), and now manipulate the local camera and attached devices.

Web applications are becoming more like mobile apps. Mobile apps are becoming more like web applications. Perhaps we will meet in a convenient, happy middle ground that lets us get the best of both worlds.

Wednesday, December 3, 2014

Maintenance may be more than we think

The effort for systems is often split into "development" and "maintenance". Development is considered the more prestigious of the two: creating a new system to address a business need. Maintenance is often considered the "dirty work" of minor adjustments and corrections.

I've been thinking about the terms "development" and "maintenance", and I think our ideas need to change.

Let's consider a non-software example. Let's suppose that we build a hotel in a remote tropical location, on beachfront property. It's a simple hotel with decent rooms and a basic restaurant, but nothing special. With our current understanding of "development" and "maintenance", the construction of such a hotel is clearly "development".

Our hotel is isolated, and far away from other hotels and restaurants.

There are clearly maintenance tasks: cleaning the rooms and hallways, stocking the refrigerators (perhaps one considers that task part of "operations"), and repairing the corrosion from the salty air.

A major change, such as adding air conditioning to the guest rooms or a swimming pool in the common area, is considered "development".

But suppose that we don't add air conditioning or a swimming pool. Suppose we keep our simple hotel, but others buy property nearby and build hotels of their own. (Other people recognize the business opportunity we are enjoying.) And suppose further that they build hotels with air conditioning and swimming pools. We lose some business to the newer, fancier hotels in the area.

Now we're in a different situation. Faced with competition, we're forced to take action. Adding a swimming pool or air conditioning to our hotel is not development, at least not in the sense of defining a new service or market. Enhancing our resort to match the competition is less a matter of expansion and more a matter of stemming the loss of business. In this case, adding a swimming pool is "maintenance".

Development gets you into the market, and maintenance keeps you in the market.

Taking this view of development and maintenance for programs, maintenance becomes a task much larger than simply fixing defects and making minor changes. Maintenance is the act of staying competitive. That includes corrections of defects, and it includes major features to match the competition. It includes adjustments to match changes to the operating system, such as the GUI changes for Windows 8.

Which means that maintenance is more than our traditional idea of it. In one sense it is the old idea of keeping something running. The old sense was focussed on the thing; the new sense looks at the thing (the program, the hotel, etc.) and the environment. We have to keep the thing running in a changing environment.

One can argue about enhancements to a computer system (or a hotel) and the classification into the categories of "development" and "maintenance". There may be tax ramifications for that classification -- especially for "maintenance contracts". I'm ignoring the tax issues here.

Development and maintenance are perhaps not as distinct as we like to think. If we consider the environment, then enhancements to a system may be either development or maintenance, depending on the surrounding systems.

Thursday, November 13, 2014

Cloud and agile change the rules

The history of computer programming is full of attempts to ensure success, or more specifically, to avoid failure. The waterfall method of separating analysis, design, and coding (with reviews after each step) is one such technique. Change reviews are another. System testing is another. Configuration management (especially for production systems) is another.

It strikes me that cloud computing and agile development techniques are yet more methods in our quest to avoid failure. But they change the rules from previous efforts.

Cloud computing tolerates failures of equipment. Agile development guards against failures in programming.

Cloud computing uses multiple instances of servers. It also uses stateless transactions, so any server can handle any request. (Well, any web server can handle any web request, and any database server can handle any database request.) If a server fails, another server (of the same type) can pick up the work.
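
As a sketch of what "stateless" means in practice, here is a hypothetical request handler (using the Flask library and Redis as a shared store; the route, host name, and data model are invented). Because the handler keeps nothing in its own memory, any instance on any server can service the request.

    # A minimal sketch of a stateless request handler. All state lives in a
    # shared external store, so any copy of this service can handle any request,
    # and a failed server can simply be replaced.
    from flask import Flask, jsonify, request
    import redis

    app = Flask(__name__)
    store = redis.Redis(host="cache.internal", port=6379)   # hypothetical shared store

    @app.route("/carts/<user_id>/items", methods=["POST"])
    def add_item(user_id):
        item = request.get_json()["item"]
        store.rpush("cart:" + user_id, item)                 # state goes to the store...
        count = store.llen("cart:" + user_id)                # ...and is read back from it
        return jsonify({"user": user_id, "item_count": count})

    if __name__ == "__main__":
        app.run()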

Cloud computing cannot, however, handle a failure in code. If I write a request handler and get the logic wrong, then each instance of the handler will fail.

Agile development handles the code failures. Agile development ensures that the code is always correct. With automated tests and small changes, the code can grow and programmers (and managers) can know that the added features are correct.

These two techniques (cloud and agile) let us examine some of the strategies we have used to ensure success.

For hardware, we had long product life cycles. We selected products that were known to be reliable. For mainframes and early PCs, this was IBM. (For later PCs it was Compaq.) The premium brands commanded premium prices, because we valued the reliability of the equipment and the vendor. And since the equipment was expensive, we planned to use it for a long time.

For software, we practiced "defensive coding" and had each function check its inputs for invalid values or combinations of values. We held code reviews. We made the smallest changes possible, to reduce risk. We avoided large changes that would improve the readability of the code because we could not be sure that the revised code would work as expected in all cases.

In light of cloud computing's cheap hardware and agile development's pair programming and automated testing, these strategies may no longer be the best practice. Our servers are virtual, and while we want the underlying "big iron" to be reliable and long-lived, the servers themselves may have short lives. If that is the case, the "standard" server configuration may change over time, more frequently than we changed our classic, non-virtual servers.

The automated testing of agile development changes our approach to program development. Before comprehensive automated testing, minimal changes were prudent, as we could not know that a change would have an unintended effect. A full set of automated tests provides complete coverage of a program's functionality, so we can be bolder in our changes to a program. Re-factoring a small section of code (or a large section) is possible; our tests will verify that we have introduced no defects.

Cloud computing and agile development change the rules. Be aware of the changes and change your procedures to keep up.

Thursday, November 6, 2014

A New Microsoft And A New Tech World

Things are just not what they used to be.

In the good old days, Microsoft defined technology for business and set the pace for change. They had built an empire on Windows and products that worked with Windows.

Not only did Microsoft build products, they built their own versions of things to work in their world. Microsoft adopted the attitude of "not invented here": they eschewed popular products and built their own versions.

They built their own operating system (DOS at first, then Windows). They built their own word processor, their own spreadsheet (two actually: Multiplan was their first attempt), their own database manager, their own presentation software. They built their own browser. They even constructed their own version of a "ZIP" file: OLE Structured Storage.

All of these technologies had one thing in common: they worked within the Microsoft world. Microsoft Office ran on Windows - and nothing else. Internet Explorer worked on Windows - and nothing else. Visual Studio ran on Windows - and... you get the idea. Microsoft technology worked with Microsoft technology and nothing else.

For two decades this strategy worked. And then the world changed.

Microsoft has shifted away from the "all things Microsoft" approach. Consider:

  • Microsoft Word uses an open (well, open-ish) format of ZIP and XML
  • So does Microsoft Excel
  • Visual Studio supports projects that use JavaScript, HTML, and CSS
  • Microsoft Azure supports Linux, PHP, Python, and node.js
  • Office 365 apps are available for Android and iOS

These are significant changes. Microsoft is no longer the self-centered (one might say solipsistic) entity that it once was.

We must give up our old prejudices. The idea that Microsoft technology is always good ("No one was fired for buying Microsoft") is not true. It never was. The weak reception of the Surface tablet and Windows phones shows that. (The anemic reception of Windows RT also shows that.)

We must also give up the notion that all Microsoft technology is large, expensive, bug-ridden, and difficult to maintain. It may be fun to hate on Microsoft, but it is not practical. Microsoft Azure is a capable set of tools. Their 'Express' products may be limited in functionality but they do work, and without much effort or expense.

The bigger change is the shift away from monoculture technology. We're entering an age of diverse technology. Instead of servers running Microsoft Windows and Microsoft databases and Microsoft applications with clients running Microsoft Windows and Microsoft browsers using Microsoft authentication, we have Microsoft applications running in Amazon.com's cloud with users holding Android tablets and Apple iPads.

Microsoft is setting a new standard for IT: multiple vendors, multiple technologies, and interoperability. What remains to be seen is how other vendors will follow.

Tuesday, October 21, 2014

Cloud systems are the new mainframe

The history of computers can be divided (somewhat arbitrarily) into six periods. These are:

Mainframe
Timeshare (on mainframes)
Minicomputers
Desktop computers (includes pre-PC microcomputers, workstations, and laptops)
Servers and networked desktops
Mobile devices (phones and tablets)

I was going to add 'cloud systems' to the list as a seventh period, but I got to thinking.

My six arbitrary periods of computing show definite trends. The first trend is size: computers became physically smaller in each successive period. Mainframe computers were (and are) large systems that occupy rooms. Minicomputers were the sizes of refrigerators. Desktop computers fit on (or under) a desk. Mobile devices are small enough to carry in a shirt pocket.

The next trend is cost. Each successive period has a lower cost than the previous one. Mainframes cost in the hundreds of thousands of dollars. Minicomputers in the tens of thousands. Desktop computers were typically under $3000 (although some did edge up near $10,000) and today are usually under $1000. Mobile device costs range from $50 to $500.

The third trend is administrative effort or "load". Mainframes needed a team of well-trained attendants. Minicomputers needed one knowledgeable person to act as "system operator" or "sysop". Desktop computers could be administered by a geeky person in the home, or for large offices a team of support persons (but less than one support person per PC). Mobile devices need... no one. (Well, technically they are administered by the tribal chieftains: Apple, Google, or Microsoft.)

Cloud systems defy these trends.

By "cloud systems", I mean the cloud services that are offered by Amazon.com, Microsoft, Google, and others. I am including all of the services: infrastructure as a service, platform as a service, software as a service, machine images, queue systems, compute engines, storage engines, web servers... the whole kaboodle.

Cloud systems are large and expensive. They also tend to be limited in number, perhaps because they are large and expensive. They also have a sizable team of attendants. Cloud systems are complex and a large team is needed to keep everything running.

Cloud systems are much like mainframe computers.

The cloud services that are offered by vendors are much like the timesharing services offered by mainframe owners. With timesharing, customers could buy just as much computing time as they needed. Sound familiar? It's the model used by cloud computing.

We have, with cloud computing, returned to the mainframe era. This period has many similarities with the mainframe period. Mainframes were large, expensive to own, complex, and expensive to operate. Cloud systems are the same. The early mainframe period saw a number of competitors: IBM, NCR, CDC, Burroughs, Honeywell, and Univac, to name a few. Today we see competition between Amazon.com, Microsoft, Google, and others (including IBM).

Perhaps my "periods of computing history" is not so much a linear list as a cycle. Perhaps we are about to go "around" again, starting with the mainframe (or cloud) stage of expensive systems and evolve forward. What can we expect?

The mainframe period can be divided into two subperiods: before the System/360 and after. Before the IBM System/360, there was competition between companies and different designs. After the IBM System/360, companies standardized on that architecture. The System/360 design is still visible in mainframes of today.

An equivalent action in cloud systems would be the standardization of a cloud architecture. Perhaps the OpenStack software, perhaps Microsoft's Azure. I do not know which it will be. The key is for companies to standardize on one architecture. If it is a proprietary architecture, then that architecture's vendor is elevated to the role of industry leader, as IBM was with the System/360 (and later System/370) mainframes.

While companies are busy modifying their systems to conform to the industry standard platform, innovators develop technologies that allow for smaller versions. In the 1960s and 1970s, vendors introduced minicomputers. These were smaller than mainframes, less expensive, and easier to operate. For cloud systems, the equivalent would be... smaller than mainframe clouds, less expensive, and easier to operate. They would be less sophisticated than mainframe clouds, but "mini clouds" would still be useful.

In the late 1970s, technology advances led to the microcomputer, which could be purchased and used by a single person. As with mainframe computers, there were a variety of competing standards. After IBM introduced the Personal Computer, businesses (and individuals) elevated it to industry standard. Equivalent events in cloud would mean the development of individual-sized cloud systems, small enough to be purchased by a single person.

The 1980s saw the rise of desktop computers. The 1990s saw the rise of networked computers, desktop and server. An equivalent for cloud would be connecting cloud systems to one another. Somehow I think this "inter-cloud connection" will occur earlier, perhaps in the "mini cloud" period. We already have the network hardware and protocols in place. Connecting cloud systems will probably require some high-level protocols, and maybe faster connections, but the work should be minimal.

I'm still thinking of adding "cloud systems" to my list of computing periods. But I'm pretty sure that it won't be the last entry.

Monday, October 6, 2014

Innovation in mobile and cloud; not in PCs

The history of IT is the history of innovation. But innovation is not evenly distributed, and it does not stay with one technology.

For a long time, innovation focussed on the PC. The "center of gravity" for innovation was, for a long time, the IBM PC and PC-DOS. Later it became the PC (not necessarily from IBM) and Windows. Windows NT, Windows 2000, and Windows XP all saw significant expansions of features.

With the rise of the Web, the center of gravity shifted to web servers and web browsers. I think it is no coincidence that Microsoft offered Windows XP, essentially unchanged, for so long. People accepted Windows XP as "good enough" and looked for innovation in other areas -- web browsers, web servers, and databases.

This change broke Microsoft's business model. That business model (selling new versions of Windows and Office to individuals and corporations every so often) was broken when users decided that Windows XP was good enough, that Microsoft Office was good enough. They moved to newer versions reluctantly, not expectantly.

Microsoft is changing its business model. It is shifting to a subscription model for Windows and Office. It has Azure for cloud services. It developed the Surface tablet for mobile computing. Microsoft's Windows RT was an attempt at an operating system for mobile devices, an operating system that had reduced administrative tasks for the user. These are the areas of innovation.

We have stopped wanting new desktop software. I know of no new projects that target the desktop. I know of no new projects that are "Windows only" or "PC only". New projects are designed for mobile/cloud, or possibly web browsers and servers. With no demand for new applications on the desktop, there is no pressure to improve the desktop PC - or its operating system.

With no pressure to improve the desktop, there is no need to change the hardware or operating system. We see changes in three areas: larger memory and disks (mostly from inertia), smaller form factors, and prettier user interfaces (Windows Vista and Windows 8 "Metro"). With each of these changes, users can (rightfully) ask: what is the benefit to me?

It is a question that newer PCs and operating systems have not answered. But tablets and smartphones answer it quite well.

I think that Windows 10 is the "last hurrah" for Windows -- at least the desktop version. Innovations to Windows will be modifications for mobile/cloud technologies: better interactions with virtualization hypervisors and container managers. Aside from those, look for little changes in desktop operating systems.

Thursday, September 25, 2014

Tablets are controlled by corporations

I admit I was wrong. In my previous post, I claimed that mobile devices would be free of corporate bureaucracy (and control). That's not true.

It's true in the sense that when the Acme corporation buys PCs it can control them with ActiveDirectory and group policies, and that similar infrastructure is not in place for tablets and smartphones. (I'm ignoring the third-party Mobile Device Management software.)

But it's false in the sense that corporations do control the mobile devices. The corporations are not Acme or whoever buys the devices. The controlling corporations are the owners of the walled gardens: Apple, Google, Amazon.com, and Microsoft. These corporations control the software available and the updates that occur automatically. (Yes, you can turn some updates off, but only while those corporations let you.)

The control that these companies exert is indisputable. Apple just recently placed a copy of a U2 album on every iPod and iPhone. Some time ago, Amazon.com deleted books from various Kindle e-readers. These companies are the "tribal chieftains", with immense power over the devices.

Android and iOS are popular in part because they are easy to use. That ease of use comes from the absence of administration tasks. The administration has not disappeared, it has moved from the "owner" of the device to the controlling company. Apple builds the updates for iOS and distributes those updates (along with updates to apps) to iPhones and iPads. Google does the same for Android devices. Microsoft does the same for "Metro" apps.

It may be this control that makes corporations reluctant to use tablets. They may know, deep down, that they are not in control of the devices. They may realize that at any moment the tribal chieftains may change the software, or worse, read or modify (or possibly delete) data on the devices. They may grant other individuals access to mobile devices.

All of this does not mean that corporations (the Acme variety, who are using the devices) should avoid mobile devices. It *does* mean that corporations should use them intelligently. They should not manage tablets and smartphones in the same way that they manage PCs, and they should not use tablets and smartphones in the same way as they use PCs. The model for mobile devices is very different from PCs.

Business can use tablets and smartphones, but differently than PCs. Data should be handled by specific apps, not generic applications like Microsoft Word and Excel. Mobile apps should authenticate users, retrieve a limited set of data from servers, present that data, manipulate that data, and then store the data on the server. Apps should not store data on the local device. (This is also good for the scenario of a lost device -- if it has no data, there can be no data "leakage" to unauthorized parties.)
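
A sketch of that pattern, from the app's point of view (the service URL, endpoints, and fields are hypothetical, and the requests library is assumed):

    # A minimal sketch of the recommended pattern: authenticate, pull a limited
    # set of records from the server, act on them, and write the results back.
    # Nothing is saved on the device, so a lost tablet leaks no data.
    import requests

    API = "https://api.example.com"       # hypothetical corporate service

    def fetch_open_orders(token):
        # retrieve only the records this user needs right now
        resp = requests.get(API + "/orders",
                            params={"status": "open", "limit": 20},
                            headers={"Authorization": "Bearer " + token})
        resp.raise_for_status()
        return resp.json()

    def approve_order(token, order_id):
        # push the change straight back to the server; keep no local copy
        resp = requests.post(API + "/orders/%d/approve" % order_id,
                             headers={"Authorization": "Bearer " + token})
        resp.raise_for_status()

    token = "..."                          # obtained from the sign-in flow
    for order in fetch_open_orders(token):
        approve_order(token, order["id"])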

Mobile devices are controlled by the tribal chieftains. Yet they can still be used by corporations -- and individuals.

Wednesday, September 24, 2014

Mobile devices may always be independent from corporate bureaucracy

The mobile revolution is different from the PC revolution.

The PC revolution saw the IBM PC adopted as the standard for personal computing. It was adopted by businesses and consumers, but most spending was from businesses.

The mobile revolution, in contrast, is driven by consumers. Individuals are buying smart phones and tablets. Businesses may be purchasing some mobile devices, but the bulk of the spending is on the consumer side.

Why is this distinction important?

To answer that, let's look at PCs and their history. Personal computers in corporations are anything but personal. They are purchased by the corporation and controlled by the corporation. The people using PCs rarely have administrator privileges for those PCs. Instead, the ability to install software and make significant changes is governed by the local copy of Windows and configurations in a central ActiveDirectory server.

The infrastructure of ActiveDirectory and Windows group policies was not built overnight, and was not part of the original PC. The first PCs ran PC-DOS and had no administrative controls at all -- any user could do anything, see anything, and change anything. Microsoft worked on PC-DOS for IBM, then MS-DOS for non-IBM computers, then Windows, and finally server software and ActiveDirectory. It took about twenty years to create, from the introduction of the IBM PC in 1981 to the introduction of ActiveDirectory in 1999.

That work was done by Microsoft because corporations wanted it. They wanted mechanisms to control the PCs and the access to data on PCs and servers. (And even with all of that interest, it took two decades to "enterprise-ify" PCs and make them part of the bureaucracy.)

Corporations were interested in PCs from the introduction of the IBM PC. (Some corporations were interested in earlier microcomputers, but they were a minority.) Corporations were interested in PCs because PCs ran Lotus 1-2-3, the popular spreadsheet at the time.

Now let's look at mobile devices. Corporations have a mild interest in mobile devices. It is only a fraction of the interest in PCs. There is no killer app for tablets, no must-have app for smart phones. (At least, not for corporations.) It is quite possible that phones and tablets are too personal for corporations.

It is telling that the Microsoft Surface tablet, with its ready-to-use connections to ActiveDirectory, has seen little interest. For consumers, the Surface and other Windows tablets are more expensive and less useful than the iPad and Android tablets. But even corporations have little interest in the Microsoft offerings.

Without corporate interest (and corporate spending), neither Apple nor Google has an incentive to make its tablets "safe for the enterprise" -- that is, controlled through a central administration point. (Yes, there are "mobile device management" packages, but they have attracted little interest.)

Apple and Google will invest their efforts in other areas, such as better hardware and improved reliability of apps in their stores (and maybe higher profits).

Corporations will use tablets for small, isolated projects, if at all. I suspect most corporations view their proven and familiar desktops and laptops as sufficient, with little benefit from tablets.

But all is not lost for tablets and smart phones. Some folks will use them for critical business purposes. These folks will not be the large corporations with established IT infrastructure. They will be the start-ups, the small companies who will build completely new apps to solve completely new business problems.

Sunday, September 21, 2014

Keeping our keyboards

Tablets are quite different from desktop PCs and laptop PCs. (Obviously.)

PCs have large displays, keyboards, mice, and wired network connections. They often have CD or DVD drives. Tablets, in contrast, have small displays, virtual keyboards, a touch screen (so no mice), no wired network connection, and media storage (if any) is limited to memory cards.

So we can view the transition from PC to tablet as a shift in peripherals. The "old school" PC used physical keyboards, mice, and disks; the "new school" tablets use touch screens, virtual keyboards, and no mice or disks.

Except for one small detail.

Tablet users are using keyboards.

Not mice.

Not printers.

Keyboards.

I do understand that some people are using tablets with mice and printers. But they are a small minority, nowhere near the sizable number of people who are using physical keyboards.

The appeal of keyboards is such that people continue to use them as an input device. They carry a keyboard with their tablet. They buy tablet covers that have built-in keyboards.

I think this tells us something about keyboards.

Wednesday, September 10, 2014

The Apple Watch

Lots of folks have commented on the newly-announced Apple Watch. Many have praised the features. Some have criticized the design. Others have questioned the battery life.

Here's my take: the Apple Watch is not a watch. Yes, it will tell you the time, but it is more than that. The Apple Watch is a "smartwatch". Yes, it can present the time in lots of nifty formats (digital, retro analog, black-and-white, color) and it connects with Siri.

The Apple Watch is one product in a line of products that are designed to make things easy for people. Let's look at previous Apple products:

The iPod was more than a simple MP3 player. It was a system that made it easy to purchase and play music. There were plenty of MP3 players that only played music and left the acquisition of music to the user. The combination of iPod and iTunes (and the impressive collection available on iTunes) was the genius of the iPod.

The iPhone was more than a smart cell phone. It expanded iTunes to allow for the easy purchase and installation of apps. The ease of installation is often overlooked when it comes to the features of the iPhone. Remember, prior to the iPhone the standard model for installing applications was Microsoft's "Setup" or "MSI" packages, which often required special privileges and technical knowledge.

The iPad expanded the possibilities for apps. Apps for the iPhone were designed for the small screen. One could play music on an iPhone, but reading a book is much better on an iPad. One can use Twitter and Facebook on an iPhone, but documents and spreadsheets are much better on an iPad.

The trend has been for larger devices. The Apple Watch moves in the opposite direction, providing a smaller screen. The obvious conclusion is that one will not be using the Apple Watch for spreadsheets and documents. (Twitter may be okay, but I suspect that Facebook is not useful on the Apple Watch.) The less obvious conclusion is that the Apple Watch will be used for something else.

The question is: what will the Apple Watch make simpler for us, in such a way that Apple can profit from it?

A watch is more personal than a phone, more intimate, as it is in physical contact with us. Sensors in the watch could be used for biometric information and possibly health information.

Apple has already announced plans for payment systems. We tend to keep the smaller devices with us more, carrying phones more frequently than tablets, so we will probably carry a watch with us more than a phone. Matching payments to a watch is good sense.

Initial enthusiasm for the Apple Watch may see a lot of people porting their apps to the Watch. Some of that enthusiasm will be misplaced; the Watch will be good for some things but not everything from the iPhone -- and especially not everything from the iPad. There may be some new apps that are suitable to the Watch and not the phone or tablet -- probably games.

I think the Apple Watch has potential. I also think that it will be its own thing, not merely a small iPhone.

Friday, August 29, 2014

Virtual PCs are different from real PCs

Virtual PCs started as an elaborate game of "let's pretend", in which we simulated a real PC (that is, a physical-hardware PC) in software. A virtual PC doesn't exist -- at least not in any tangible form. It has a processor and memory and disk drives and all the things we normally associate with a PC, but they are all constructed in software. The processor is emulated in software. The memory is emulated in software. The disk drive... you get the idea.

Virtualization offers several advantages. We can create new virtual PCs by simply running another copy of the virtualization software. We can move virtual PCs from one host PC to another host PC. We can make back-up images of virtual PCs by simply copying the files that define the virtual PC. We can take snapshots of the virtual PC at a moment in time, and restore those snapshots at our convenience, which lets us run risky experiments that would "brick" a real PC.

We like to think that virtual PCs are the same as physical PCs, only implemented purely in software. But that is not the case. Virtual PCs are a different breed. I can see three areas in which virtual PCs will differ from real PCs.

The first is storage (disk drives) and the file system. Disk drives hold our data; file systems organize that data and let us access it. In real PCs, a disk drive is a fixed size. This makes sense, because a physical disk drive *is* a fixed size. In the virtual world, a disk drive can grow or shrink as needed. I expect that virtual PCs will soon have these flexible disk drives. File systems will have to change; they are built with the assumption of a fixed-size disk. (A reasonable assumption, given that they have been dealing with physical, fixed-size disks.) Linux will probably get a file system called "flexvfs" or something.
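
The "grow as needed" idea is already visible in sparse files, which several virtual disk formats (such as qcow2 and VHD) build on. Here is a small illustration (the file name is arbitrary, and the block accounting is Unix-specific):

    # A minimal sketch of grow-as-needed storage: a sparse file reports a large
    # apparent size but consumes disk blocks only as data is actually written,
    # much like a dynamically allocated virtual disk image.
    import os

    path = "guest-disk.img"                    # hypothetical disk image
    with open(path, "wb") as f:
        f.seek(10 * 1024**3 - 1)               # pretend the disk is 10 GB
        f.write(b"\0")                         # one byte at the end; the rest is a hole

    info = os.stat(path)
    print("apparent size: %d bytes" % info.st_size)            # about 10 GB
    print("allocated:     %d bytes" % (info.st_blocks * 512))  # far less, on most file systems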

The second area that virtual PCs vary from real PCs is virtual memory. The concept of virtual memory is older than virtual PCs or even virtual machines in general (virtual machines date back to the mainframe era). Virtual memory allows a PC to use more memory than it really has, by swapping portions of memory to disk. Virtual PCs currently implement virtual memory because they are faithfully duplicating the behavior of real PCs, but they don't have to. A virtual PC can assume that it has all memory addressable by the processor and let the hypervisor handle the virtualization of memory. Delegating the virtualization of memory to the hypervisor lets the "guest" operating system become simpler, as it does not have to worry about virtual memory.

A final difference between virtual PCs and real PCs is the processor. In a physical PC, the processor is rarely upgraded. An upgrade is an expensive proposition: one must buy a compatible processor, shut down the PC, open the case, remove the old processor, carefully install the new processor, close the case, and start the PC. In a virtual PC, the processor is emulated in software, so an upgrade is nothing more than a new set of definition files. It may be possible to upgrade a processor "on the fly" as the virtual PC is running.

These three differences (flexible file systems, lack of virtual memory, and updateable processors) show that virtual PCs are not the same as the "real" physical-hardware PCs. I expect that the two will diverge over time, and that operating systems for the two will also diverge.

Tuesday, August 26, 2014

With no clear IT leader, expect lots of changes

The introduction of the IBM PC was market-wrenching. Overnight, the small, rough-and-tumble market of microcomputers with diverse designs from various small vendors became large and centered around the PC standard.

From 1981 to 1987, IBM was the technology leader. IBM led in sales and also defined the computing platform.

IBM's leadership fell to Compaq in 1987, when IBM introduced the PS/2 line with its new (incompatible) hardware. Compaq delivered old-style PCs with a faster bus (the EISA bus) and notably the Intel 80386 processor. (IBM stayed with the older 80286 and 8086 processors, eventually consenting to provide 80386-based PS/2 units.) Compaq even worked with Microsoft to deliver newer versions of MS-DOS that recognized larger memory capacity and optical disc readers.

But Compaq did not remain the leader. Its leadership declined gradually, to the clone makers and especially Dell, HP, and Gateway.

The mantle of leadership moved from a PC manufacturer to the Microsoft-Intel duopoly. The popularity of Windows, along with marketing skill and software development prowess led to a stable configuration for Microsoft and Intel. Together, they out-competed IBM's OS/2, Motorola's 68000 processor, DEC's Alpha processor, and Apple's Macintosh line.

That configuration held for nearly two decades, from roughly 1990 until Apple introduced the iPhone. The genius move was not the iPhone hardware, but the App Store and iTunes, which let you easily find and install apps on your phone (and pay for them).

Now Microsoft and Apple have the same problem: after years of competing in a well-defined market (the corporate PC market) they struggle to move into the world of mobile computing. Microsoft's attempts at mobile devices (Zune, Kin, Surface RT) have flopped. Intel is desperately attempting to design and build processors that are suitable for low-power devices.

I don't expect either Microsoft or Intel to disappear. (At least not for several years, possibly decades.) The PC market is strong, and Intel can sell a lot of its traditional (heat radiators that happen to compute data) processors. Microsoft is a competent player in the cloud arena with its Azure services.

But I will make an observation: for the first time in the PC era, we find that there is no clear leader for technology. The last time we were leaderless was prior to the IBM PC, in the "microcomputer era" of Radio Shack TRS-80 and Apple II computers. Back then, the market was fractured and tribal. Hardware ruled, and your choice of hardware defined your tribe. Apple owners were in the Apple tribe, using Apple-specific software and exchanging data on Apple-specific floppy disks. Radio Shack owners were in the Radio Shack tribe, using software specific to the TRS-80 computers and exchanging data on TRS-80 diskettes. Exchanging data between tribes was one of the advanced arts, and changing tribes was extremely difficult.

There were some efforts to unify computing: CP/M was the most significant. Built by Digital Research (a software company with no interest in hardware), CP/M ran on many different configurations. Yet even that effort could not span the differences in processors, memory layout, and video configurations.

Today we see tribes forming around multiple architectures. For cloud computing, we have Amazon.com's AWS, Microsoft's Azure, Google's App Engine. With virtualization we see VMware, Oracle's VirtualBox, the aforementioned cloud providers, and newcomer Docker as a rough analog of CP/M. Mobile computing sees Apple's iOS, Google's Android, and Microsoft's Windows RT as a (very) distant third.

With no clear leader and no clear standard, I expect each vendor to enhance their offerings and also attempt to lock in customers with proprietary features. In the mobile space, Apple's Swift and Microsoft's C# are both proprietary languages. Google's choice of Java puts them (possibly) at odds with Oracle -- although Oracle seems to be focused on databases, servers, and cloud offerings, so there is no direct conflict. Things are a bit more collegial in the cloud space, with vendors supporting OpenStack and Docker. But I still expect proprietary enhancements, perhaps in the form of add-ons.

All of this means that the technology world is headed for change. Not just change from desktop PC to mobile/cloud, but changes in mobile/cloud. The competition from vendors will lead to enhancements and changes, possibly significant changes, in cloud computing and mobile platforms. The mobile/cloud platform will be a moving target, with revisions as each vendor attempts to out-do the others.

Those changes mean risk. As platforms change, applications and systems may break or fail in unexpected ways. New features may offer better ways of addressing problems and the temptation to use those new features will be great. Yet re-designing a system to take advantage of new infrastructure features may mean that other work -- such as new business features -- waits for resources.

One cannot ignore mobile/cloud computing. (Well, I suppose one can, but that is probably foolish.) But one cannot, with today's market, depend on a stable platform with slow, predictable changes like we had with Microsoft Windows.

With such an environment, what should one do?

My recommendations:

Build systems of small components: This is the Unix mindset, with small tools to perform specific tasks. Avoid large, monolithic systems.

Use standard interfaces: Use web services (either SOAP or REST) to connect components into larger systems. Use JSON and Unicode to exchange data, not proprietary formats. (A small sketch of this idea follows these recommendations.)

Hedge your bets: Gain experience in at least two cloud platforms and two mobile platforms. Resist the temptation of "corporate standards". Standards are good with a predictable technology base. The current base is not predictable, and placing your eggs in one vendor's basket is risky.

Change your position: After a period of use, examine your systems, your tools, and your talent. Change vendors -- not for everything, but for small components. (You did build your system from small, connected components, right?) Migrate some components to another vendor; learn the process and the difficulties. You'll want to know them when you are forced to move to a different vendor.
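To make the "standard interfaces" recommendation concrete, here is a minimal sketch in Python, using only the standard library. The URL and field names are invented for illustration; substitute the services you actually use.

import json
import urllib.request

# Fetch data from a (hypothetical) REST service and parse the JSON payload.
# Nothing here is vendor-specific: just HTTP, JSON, and Unicode.
def get_order_status(order_id):
    url = "https://example.com/api/orders/%d" % order_id  # hypothetical endpoint
    with urllib.request.urlopen(url) as response:
        payload = json.loads(response.read().decode("utf-8"))
    return payload["status"]  # assumes the service returns a "status" field

if __name__ == "__main__":
    print(get_order_status(42))

Because the component speaks plain HTTP and JSON, it does not care which vendor hosts the service on the other end.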

Many folks involved in IT have been living in the "golden age" of a stable PC platform. They may have weathered the change from desktop to web -- which saw a brief period of uncertainty. More than likely, they think that the stable world is the norm. All that is fine -- except that mobile/cloud is not that normal, stable world. Be prepared for change.

Sunday, August 17, 2014

Reducing the cost of programming

Different programming languages have different capabilities. And not surprisingly, different programming languages have different costs. Over the years, we have found ways of reducing those costs.

Costs include infrastructure (disk space for compiler, memory) and programmer training (how to write programs, how to compile, how to debug). Notice that the load on the programmer can be divided into three: infrastructure (editor, compiler), housekeeping (declarations, memory allocation), and business logic (the code that gets stuff done).

Symbolic assembly code was better than machine code. In machine code, every instruction and memory location had to be laid out by the programmer. With a symbolic assembler, the computer did that work.

COBOL and FORTRAN reduced cost by letting the programmer not worry about the machine architecture, register assignment, and call stack management.

BASIC (and time-sharing) made editing easy, eliminated compiling, and made running a program easy. Results were available immediately.

Today we are awash in programming languages. The big ones today (C, Java, Objective-C, C++, BASIC, Python, PHP, Perl, and JavaScript -- according to TIOBE) are all good at different things. That is perhaps not a coincidence. People pick the language best suited to the task at hand.

Still, it would be nice to calculate the cost of the different languages. Or if numeric metrics are not possible, at least rank the languages. Yet even that is difficult.

One can easily state that C++ is more complex than C, and therefore conclude that programming in C++ is more expensive than programming in C. Yet that's not quite true. Small programs in C are easier to write than equivalent programs in C++. Large programs are easier to write in C++, since the ability to encapsulate data and group functions into classes helps one organize the code. (Where 'small' and 'large' are left to the reader to define.)

Some languages are compiled and some are interpreted, and one can argue that a separate compile step is an expense. (It certainly seems like an expense when I am waiting for the compiler to finish.) Yet the compiled languages (C, C++, Java, C#, Objective-C) all have static typing, which means that the editor built into an IDE can provide information about variables and functions. When editing a program written in one of the interpreted languages, on the other hand, one does not have that help from the editor. The interpreted languages (Perl, Python, PHP, and JavaScript) have dynamic typing, which means that the type of a variable (or function) is not fixed but can change as the program runs.
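To see what dynamic typing means in practice, consider this small Python fragment (the variable name is mine, chosen only for illustration). An editor cannot know, before the program runs, what type 'value' holds at any given line:

value = 42             # starts life as an int
print(type(value))     # <class 'int'>

value = "forty-two"    # now a str; perfectly legal in a dynamically typed language
print(type(value))     # <class 'str'>

value = [4, 2]         # now a list
print(len(value))      # 2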

Switching from an "expensive" programming language (let's say C++) to a "reduced cost" programming language (perhaps Python) is not always possible. Programs written in C++ perform better. (On one project, the C++ program ran for several hours; the equivalent program in Perl ran for several days.) C and C++ let one have access to the underlying hardware, something that is not possible in Java or C# (at least not without some add-in trickery, usually involving... C++.)

The line between "cost of programming" and "best language" quickly blurs, and the different dimensions of programming (program design, speed of coding, speed of execution, ability to control hardware) get in the way of nailing down costs.

In the end, I find that it is easy to rank languages in the order of my preference rather than in an unbiased scheme. And even my preferences are subject to change, given the nature of the project. (Is there existing code? What are other team members using? What performance constraints must we meet?)

Reducing the cost of programming is really about trade-offs. What capabilities do we desire, and what capabilities are we willing to cede? To switch from C++ to C# may mean faster development but slower performance. To switch from PHP to Java may mean better organization of code through classes but slower development. What is it that we really want?

Monday, August 11, 2014

Agile is not compatible with silos

Agile development methods are very different from the traditional Waterfall methods. So different that they can affect the culture of the organization.

Agile makes a different promise than Waterfall. Waterfall promises a specific deliverable on a specific date; Agile promises that you can ship whenever you want.

Agile discourages specialization. An iteration is short yet requires analysis, development, and testing. Such a short cycle does not allow for different individuals to perform different tasks.

Yet the biggest difference between Agile and Waterfall is the partitioning of tasks and the encapsulation of information. Waterfall strives for clean, discrete changes from one phase to another, with information flowing between phases in well-defined documents. The flow between the requirements phase and the development phase is the requirements document (or documents). The test results are presented in a specific document. And so on.

Information in each phase is encapsulated in that phase, and only a small set of information is allowed to transfer (one might say 'leak') to another phase.

The partitioning of tasks and the encapsulation of information leads to silos within the organization. Once separate teams are established for requirements, development, testing, and deployment, tensions arise between teams. The testing team identifies defects that reflect on the development team. The development team blames the requirements team for incomplete or ambiguous specifications.

Agile -- at least Agile for small teams -- has none of that. The fast cycles of feature selection, design, development, and test provide immediate feedback. An ambiguous requirement is spotted early, and it is obvious to everyone. Defects are identified and fixed before implementing the next feature set.

More importantly, an Agile project has one team, and the measurement of success for that team is the delivery of software. That focus on success and the inability to shift blame to another team means that it is harder to establish silos.

Which is not to say that Agile will eliminate all silos. An organization with many Agile projects can still have silos. A large company using an "Agile for large companies" process may develop silos.

But for the most part, I believe Agile processes are incompatible with silos. The involvement of necessary stakeholders; the coordinated work of design, development, and testing; and the fast cycle times all push against silo-ization.

Thursday, July 31, 2014

Not so special

The history of computers has been the history of things becoming not special.

First were the mainframes. Large, expensive computers ordered, constructed, delivered, and used as a single entity. Only governments and wealthy corporations could own (or lease) a computer. Once acquired, the device was a singleton: it was "the computer". It was special.

Minicomputers reduced the specialness of computers. Instead of a single computer, a company (or a university) could purchase several minicomputers. Computers were no longer single entities in the organization. Instead of "the computer" we had "the computer for accounting" or "the computer for the physics department".

The opposite of "special" is "commodity", and personal computers brought us into a world of commodity computers. A company could have hundreds (or thousands) of computers, all identical.

Yet some computers retained their specialness. E-mail servers were singletons -- and therefore special. Web servers were special. Database servers were special.

Cloud computing reduces specialness again. With cloud systems, we can create virtual systems on demand, from pre-stocked images. We can store an image of a web server and when needed, instantiate a copy and start using it. We have not a single web server but as many as we need. The same holds for database servers. (Of course, cloud systems are designed to use multiple web servers and multiple database servers.)

In the end, specialness goes away. Computers, all computers, become commodities. They are not special.

Monday, July 28, 2014

Improving code can cause an explosion of classes

Object-oriented programming took the world by storm in the 1990s. Those early days saw a lot of programmers learning new skills.

It took some time to truly learn object-oriented programming. The jump from structured programming (or procedural code) to object-oriented programming was not small. (And is still not small.)

Many early attempts at object-oriented programming were inelegant, if not amateurish. They contained mistakes, but those errors were visible only in hindsight. Programmers who are inexperienced in a new technique make mistakes. (I'm one of them.)

Common problems were:

  • large classes with many purposes
  • long functions (procedural code wrapped in object clothing)
  • excessive inheritance
  • too little inheritance
  • weak encapsulation (little or no use of access control)
  • little or no composition

Legacy systems often contain these problems. Decades after their inception, these systems still contain their original design flaws. The problems remain because they are difficult to correct and the return on the investment is unclear. I often argue that a better design reduces future maintenance costs, and the counter-argument is that the current development team knows the code and would gain little from an improved design.

When I can convince the system owners of the benefits of improved code (and I am becoming more convincing over time), we see a remarkable transformation in the code.

The most obvious change is in the number of classes. The revised system contains many more classes, often several times the original number. Yet while the number of classes increases, the total number of lines of code decreases. The construction of new classes allows for the consolidation of duplicate code, something that occurs often in legacy systems.

The new classes are usually small. Instead of the large, multipurpose classes of the earlier design, I move functions to small, single-purpose classes. Some classes are mere data containers, others hold one or two elements and provide a small number of functions on those elements. While small, these classes have a big effect on the readability of the code: they eliminate low-level operations from high-level and mid-level code, allowing the reader to focus on the higher level operations.
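As a sketch of that kind of extraction (the class and names here are invented, not taken from any particular system), imagine currency arithmetic that was once scattered across a large class, pulled into one small, single-purpose class:

class Money:
    """A small, single-purpose class: holds an amount in cents and
    provides the few operations callers actually need."""
    def __init__(self, cents):
        self.cents = cents

    def add(self, other):
        return Money(self.cents + other.cents)

    def formatted(self):
        return "$%d.%02d" % divmod(self.cents, 100)

# Higher-level code now reads at a higher level:
subtotal = Money(1999).add(Money(550))
print(subtotal.formatted())   # $25.49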

Small classes are much easier to test, and much easier to test with automated tools. Even C++ can use automated tests to verify the operation of classes. Automated tests relieve a burden from developers (and testers or "QA" folk) and allow them to direct their efforts to building and maintaining meaningful tests.
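Here is a minimal sketch of such a test, using Python's built-in unittest module (the hypothetical Money class from the sketch above is repeated so the test file stands alone):

import unittest

class Money:
    def __init__(self, cents):
        self.cents = cents
    def add(self, other):
        return Money(self.cents + other.cents)
    def formatted(self):
        return "$%d.%02d" % divmod(self.cents, 100)

class MoneyTests(unittest.TestCase):
    def test_add_combines_amounts(self):
        self.assertEqual(Money(1999).add(Money(550)).cents, 2549)

    def test_formatted_pads_cents(self):
        self.assertEqual(Money(550).formatted(), "$5.50")

if __name__ == "__main__":
    unittest.main()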

A large number of small classes provides an additional benefit: the ability to group classes into libraries. Large (or large-ish) early object-oriented systems tend to group all of the classes into a single package, usually called "the application". With a large number of classes, the system maintainers see groups of classes emerge (perhaps all of the database classes, or all of the elementary data classes). These groupings can be formalized with libraries. For very large projects, these libraries can be maintained by different teams. Libraries can also be shared across multiple projects, reducing the duplication of effort at a larger scale.

Modernizing legacy systems can lead to an "explosion of classes", and this can be a good thing. Smaller classes are easier to understand and maintain. They can be tested independently. They can be grouped into libraries. Do not fear such an increase in the number of classes in your code.

Wednesday, July 23, 2014

Waterfall caused specialization; agile causes generalization

There are a number of differences between waterfall processes and agile processes. Waterfall defines one long, segmented process; agile uses a series of short iterations. Waterfall specifies a product on a specific date; agile guarantees a shippable product throughout the development process.

Another difference between waterfall and agile is the specialization of participants. Waterfall projects are divided into phases (analysis, development, testing, and so on), and these phases tend to be long. Agile projects have the same activities (analysis, development, testing) but on a much shorter timeframe. A waterfall project may extend for six months, or a year, or several years, and the phases of those projects may extend for months -- or possibly years.

The segmentation of a waterfall project leads to specialization of the participants. It is common to find a waterfall project staffed by analysts, developers, and testers, each a distinct team with its own management. The different teams use tools specific to their tasks: requirements databases, compilers and integrated development environments, test automation software, and test case management systems.

This specialization is possible due to the long phases of waterfall projects. It is reasonable to have team "A" work on the requirements for a project and then (when the requirements are deemed complete) assign team "B" to the development phase. While team "B" develops the project, team "A" can compose the requirements for another project. Specialization allows for efficient scheduling of personnel.

Agile processes, in contrast, have short iterations of analysis, design, coding, and testing. Instead of months (or years), an iteration may be one or two weeks. With such a short period of time, it makes little sense to have multiple teams work on different aspects. The transfer of knowledge from one team to another, a small task in a multi-year project, is a large effort on a two-week iteration. Such inefficiencies are not practical for short projects, and the better approach is for a single team to perform all of the tasks. (Also, the two-week iteration is not divided into a neat linear sequence of analysis, design, development, and test. All activities occur throughout the iteration, with multiple instances of each task.)

A successful agile process needs people who can perform all of the tasks. It needs not specialists but generalists.

Years of waterfall projects have trained people and companies into thinking that specialists are more efficient than generalists. (There may be a bit of Taylorism here, too.) Such thinking is so pervasive that one finds specialization in the company's job descriptions. One can find specific job descriptions for business analysts, developers, and testers (or "QA Analysts").

The shift to agile projects will lead to a re-thinking of specialization. Generalists will be desired; specialists will find it difficult to fit in. Eventually, generalists will become the norm and specialists will become, well, special. Even the job descriptions will change, with the dominant roles being "development team members" with skills in all areas and a few specialist roles for special tasks.

Monday, July 14, 2014

Spreadsheets can help us learn functional programming

Spreadsheets are quite possibly the worst way to learn programming skills. And they may also be the best way to learn the next "wave" of programming skills. A contradiction? Perhaps.

First, by "spreadsheets" I mean the cell grid and its formulas. I am omitting Visual Basic for Applications (VBA) code which can accompany Microsoft Excel sheets.

Spreadsheets as a programming environment are capable and flexible. They let one assemble a set of data and formulas into a meaningful arrangement. They let you format the data. They provide immediate feedback, with the results of changes displayed immediately.

Spreadsheets also violate a lot of the generally accepted principles of program design. They mix input, data, calculation, and output. They have no mechanisms for structuring calculations or encapsulating data. They have no way to isolate data; everything is "global" and any cell can be used by any other cell.

The lack of structural elements means that spreadsheets tend to "scale up" poorly. A small set of data is easily handled. A somewhat larger set of data (if it is the same type of data) is also manageable. A larger collection of different types of data becomes a challenge. Even with multi-page spreadsheets, one starts allocating regions of a sheet for certain data and certain calculations. These regions become problematic as they grow -- especially if they grow at different rates.

There is no way to condense similar calculations. If ten cells (or one hundred cells) all perform the same operation, they must all contain the same formula. Internally, the spreadsheet may optimize memory usage, but from the "programmer's" point of view, the formulas are repeated. If the general formula must change, it must change in all the cells. (While it is easy to change the formula in one cell and then replicate it to the other cells, it is not always easy to identify which other cells use that formula.)

Spreadsheets offer nothing in the way of a high-level view. Everything is viewed at the cell level: to examine a formula, you must look at the specific cell that contains the formula.

So spreadsheets offer power and immediate feedback, two important aspects of programming. Yet they lack the concepts of structured programming (subroutines, control blocks) and the concepts of object-oriented programming (custom types, encapsulation, inheritance).

With all of these omissions, how can spreadsheets be a good way to learn the next programming style?

The answer is functions.

The next wave of programming (as I see it) is functional programming. With functional programming, one defines and uses functions, and functions are first-class constructs of the language. Functions can be passed as arguments to other functions. They can be constructed by functions, and evaluated by functions. The change from object-oriented programming to functional programming is as large as (and maybe larger than) the change from structured programming to object-oriented programming.

Spreadsheets can help us learn functional programming because spreadsheets (the core, non-VBA version of spreadsheets) are all about functions. Every cell contains the result of a function. Once a cell's value is defined, it does not change. (Changing cells in the spreadsheet and pressing the "recalc" button is, in essence, modifying the program and re-executing it.)

Now, the comparison is not complete. Functional programming lets you pass functions as arguments to other functions and lets you build functions "on the fly", and spreadsheets let you do neither. So designing a spreadsheet is not the same as programming in a functional language.
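For a small taste of what that difference looks like, here is a Python fragment (names invented for illustration) that passes one function to another and builds a new function on the fly; a spreadsheet formula can do neither:

def apply_to_all(fn, values):
    # a function received as an argument, applied to each value
    return [fn(v) for v in values]

def make_adder(n):
    # builds and returns a brand-new function at run time
    return lambda x: x + n

print(apply_to_all(abs, [-1, 2, -3]))         # [1, 2, 3]
print(apply_to_all(make_adder(10), [1, 2]))   # [11, 12]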

But programming spreadsheets is a start. It is a jumping-off point. It is an introduction to some of the concepts of functional programming.

If you want to learn functional programming, perhaps a good place to start is with your local spreadsheet. Turn off (or ignore) the VBA or macro programming. Stick with cells, values, and functions. Avoid the "optimize" or "search for result" capabilities. Design spreadsheets that compute things that are easy in "real" programming languages. You may be stuck at first, given the constraints of spreadsheet calculations. But keep at it. You will learn techniques that can help you with the next wave of programming.

Tuesday, July 8, 2014

The center of the universe is moving

The real universe, the one in which we live and has planets and solar systems and galaxies, has no center. It is "finite but unbounded" which sounds a bit strange until you realize that the surface of the Earth is also finite but unbounded. There is no edge of the Earth, no end, no boundary. Yet it has a finite size. (The Earth as a planet has a center, but the surface of the Earth does not.)

The IT universe does have centers. For decades, the center of the hardware universe has been the desktop PC and the center of the software universe has been Microsoft Windows and applications for Windows.

That is changing. Windows is no longer the software center of the IT universe. The desktop PC is no longer the hardware center of the IT universe.

The center of the IT universe for consumers has shifted to Apple and Google. The popularity of the iPad, the iPhone, and Android phones shows this. Individuals are happy to purchase these devices. PCs, in contrast, are purchased grudgingly. The purchase of a PC does not instill excitement but resentment.

The center of the IT universe for enterprises remains close to PCs and Microsoft Windows, but it too is moving to cloud computing and mobile devices. Microsoft recognizes this; it has been expanding its Azure cloud services and selling tablets and phones. While it has had little success with mobile devices, it does enjoy some success with cloud services. Microsoft is supporting multiple operating systems; its Office products now run on Apple iPads and Android devices.

What does this change mean for the rest of us?

Well, for consumers it means that we will see more options. Instead of the old world of "Windows-only applications running on Microsoft Windows on desktops or laptops", we will see services on Azure available on the device of our choosing.

For enterprises, the same options will appear. This fits in with the "Bring Your Own Device" philosophy, which shifts the costs of hardware from employers to employees.

For developers, the picture is more complex. The old method of developing an application (especially an enterprise application) for Windows only (because Windows was the center of the universe) must give way to a process that develops applications for multiple platforms. The new development paradigm must be mobile/cloud, with multiple mobile apps and a solid cloud design.

Microsoft is supporting this new paradigm. Azure supports non-Microsoft products such as Linux. Visual Studio supports non-Microsoft products such as Git, and now targets iOS and Android in addition to Windows.

Almost overnight, the modern Windows-only applications have been relegated to the status of legacy systems.

Thursday, July 3, 2014

Bring back "minicomputer"

The term "minicomputer" is making a comeback.

Late last year, I attended a technical presentation in which the speaker referred to his smart phone as a "minicomputer".

This month, I read a magazine website that used the term minicomputer, referring to an ARM device for testing Android version L.

Neither of these devices is a minicomputer.

The term "minicomputer" was coined in the mainframe era, when all computers (well, all electronic computers) were large, required special rooms with dedicated air conditioning, and were attended by a team of operators and field engineers. Minicomputers were smaller, being about the size of a refrigerator and needing only one or two people to care for them. Revolutionary at the time, minicomputers allowed corporate and college departments set up their own computing environments.

I suspect that the term "mainframe" came into existence only after minicomputers obtained a noticeable presence.

In the late 1970s, the term "microcomputer" was used to describe the early personal computers (the Altair 8800, the IMSAI 8080, the Radio Shack TRS-80). But back to minicomputers.

For me and many others, the term "minicomputer" will always represent the department-sized computers made by Digital Equipment Corporation or Data General. But am I being selfish? Do I have the right to lock the term "minicomputer" to that definition?

Upon consideration, the idea of re-introducing the term "minicomputer" may be reasonable. We don't use the term today. Computers are mainframes (that term is still in use), servers, desktops, laptops, tablets, phones, phablets, or ... whatever the open-board Arduino and Raspberry Pi devices are called. So the term "minicomputer" has been, in a sense, abandoned. As an abandoned term, it can be re-purposed.

But what devices should be tagged as minicomputers? The root "mini" implies small, as it does in "minimum" or "minimize". A "minicomputer" should therefore be "smaller than a (typical) computer".

What is a typical computer? In the 1960s, they were the large mainframes. And while mainframes exist today, one can hardly argue that they are typical: laptops, tablets, and phones are all outselling them. Embedded systems, existing in cars, microwave ovens, and cameras, are probably the most common form of computing device, but I consider them out of the running. First, they are already small and a smaller computer would be small indeed. Second, most people use those devices without thinking about the computer inside. They use a car, not a "car equipped with onboard computers".

So a minicomputer is something smaller than a desktop PC, a laptop PC, a tablet, or a smartphone.

I'm leaning towards the bare-board computers: the Arduino, the BeagleBone, the Raspberry Pi, and their brethren. These are all small computers in the physical sense, smaller than desktops and laptops. They are also small in power; typically they have low-end processors and limited memory and storage, so they are "smaller" (that is, less capable) than a smartphone.

The open-board computers (excuse me, minicomputers) are also a very small portion of the market, just as their refrigerator-sized namesakes were.

Let's go have some fun with minicomputers!

Monday, June 30, 2014

Outsource with open source technologies

In the closed-source world, the market encourages duplicate efforts. Lotus creates and sells a spreadsheet, Borland creates and sells a spreadsheet, Microsoft creates and sells a spreadsheet... you get the idea. Each vendor can differentiate their product and make a profit. Vendors keep their source code closed, so each company must create their own spreadsheet from scratch.

The open source world is different. There is no need to create a competing product from scratch. The Libre Office project includes a word processor and a spreadsheet (among other things) and it is open source. If I wanted to create a competing spreadsheet, I could take the code from Libre Office, modify it (a little or a lot) and redistribute it. (The catch is that I would also have to distribute my modified version of the source code.)

Rather than build my own version with private enhancements, it would be easier to suggest my enhancements to the team that maintains Libre Office. With private enhancements, I have to make the same changes with each new release of Libre Office (assuming I want the latest version); by submitting my enhancements (and getting them included) they then become part of the product and I get them with each update. (Of course, so does everyone else.)

Open source is not "one solution only". It has different software packages that exist in the same "space". There are a multitude of text editors. There are different display managers for Linux. There are multiple windowing systems. One can even argue that the languages Awk, Perl, Python, and Ruby all compete. There can be competing efforts in open source.

The closed-source world does not always provide competition. It has settled on some "winner" programs: Microsoft Word for word processing, Microsoft Excel for spreadsheets, Photoshop for editing pictures. Competitors may emerge, but the cost of entry to the market is high.

In general, I think that the overall trend (for closed source and open source) is to move to a single package. The "network effect" exerts a gentle but consistent pull for a single solution in both worlds. The open source market converges faster than the closed-source market; for-profit vendors have more to gain by keeping their products in the market, so they resist the tug of the network effect.

Open source becomes a more efficient space. With fewer people working to create similar-but-different products, the open source world can work on a more diverse set of problems. Or it can invest less effort for the same result.

Many companies invest effort in core competencies and outsource non-essential activities. Open source may be the cost-effective method for those non-essential activities.

Sunday, June 15, 2014

Untangle code with automated testing

Of all of the tools and techniques for untangling code, the most important is automated testing.

What does automated testing have to do with the untangling of code?

Automated testing provides insurance. It provides a back-stop against which developers can make changes.

The task of untangling code, of making code readable, often requires changes across multiple modules and multiple classes. A few improvements can be made to single modules (or classes), but most require changes in multiple modules. Improvements can require changes to the methods exposed by a class, or the removal of access to member variables. These changes ripple through other classes.

Moreover, the improvement of tangled code often requires a re-thinking of the organization of the code. You move functions from one class to another. You rename variables. You split classes into smaller classes.

These are significant changes, and they can have significant effects on the operation of the code. Of course, while you want to change the organization of the code you want the results of calculations to remain unchanged. That's how automated tests can help.

Automated tests verify that your improvements have no effect on the calculations.
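As a minimal sketch of that safety net (in Python, with an invented function standing in for the real legacy calculation), the test records today's results so that tomorrow's reorganized code must reproduce them:

import unittest

def invoice_total(line_items, tax_rate):
    # stand-in for the tangled legacy calculation being untangled
    return round(sum(line_items) * (1 + tax_rate), 2)

class InvoiceRegressionTests(unittest.TestCase):
    def test_known_inputs_still_produce_known_outputs(self):
        # values recorded from the current system, before any restructuring
        self.assertEqual(invoice_total([19.99, 5.50], 0.06), 27.02)

if __name__ == "__main__":
    unittest.main()

Write enough of these before you start moving code; as long as they stay green, the reorganization has not changed the calculations.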

The tests must be automated. Manual tests are expensive: they require time and attention. Manual tests are easy to skip. Manual tests are easy to "flub". Manual tests can be difficult to verify. Automated tests are consistent, accurate, and most of all, cheap. They do not require attention or effort. They are complete.

Automated tests let programmers make significant improvements to the code base and have confidence that their changes are correct. That's how automated tests help you untangle code.

Wednesday, June 11, 2014

Learning to program, without objects

Programming is hard.

Object-oriented programming is really hard.

Plain (non-object-oriented) programming has the concepts of statements, sequences, loops, comparisons, boolean logic, variables, variable types (text and numeric), input, output, syntax, editing, and execution. That's a lot to comprehend.

Object-oriented programming has all of that, plus classes, encapsulation, access, inheritance, and polymorphism.

Somewhere in between the two is the concept of modules and multi-module programs, structured programming, subroutines, user-defined types (structs), and debugging.

For novices, the first steps of programming (plain, non-object-oriented programming) are daunting. Learning to program in BASIC was hard. (The challenge was in organizing data into small, discrete chunks and processes into small, discrete steps.)

I think that the days of an object-oriented programming language as the "first language to learn" are over. We will not be teaching C# or Java as the introduction to programming. (And certainly not C++.)

The introduction to programming will be with languages that are not necessarily object-oriented: Python or Ruby. Both are, technically, object-oriented programming languages, supporting classes, inheritance, and polymorphism. But you don't have to use those features.

C# and Java, in contrast, force one to learn about classes from the start. One cannot write a program without classes. Even the simple "Hello, world!" program in C# or Java requires a class to hold main().

Python and Ruby can get by with a simple

print "Hello, world"

and be done with it.

Real object-oriented programs (ones that include a class hierarchy and inheritance and polymorphism) require a bunch of types (at least two, probably three) and operations complex enough to justify so many types. The canonical examples of drawing shapes or simulating an ATM are complex enough to warrant object-oriented code.

A true object-oriented program has a minimum level of complexity.
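A compact Python version of the canonical shapes example shows that minimum: a base type, two subtypes, and one polymorphic operation. Anything smaller and the classes add nothing.

import math

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width = width
        self.height = height
    def area(self):
        return self.width * self.height

# polymorphism: one loop, two different area() implementations
shapes = [Circle(1.0), Rectangle(2.0, 3.0)]
print(sum(s.area() for s in shapes))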

When learning the art of programming, do we want to start with that level of complexity?

Let us divide programming into two semesters. The first semester can be devoted to plain programming. The second semester can introduce object-oriented programming. I think that the "basics" of plain programming are enough for a single semester. I also think that one must be comfortable with those basics before one starts with object-oriented programming.

Tuesday, June 10, 2014

Slow and steady wins the race -- or does it?

Apple and Google run at a faster pace than their predecessors. Apple introduces new products often: new iPhones, new iPad tablets, new versions of iOS; Google does the same with Nexus phones and Android.

Apple and Google's quicker pace is not limited to the introduction of new products. They also drop items from their product line.

The "old school" was IBM and Microsoft. These companies moved slowly, introduced new products and services after careful planning, and supported their customers for years. New versions of software were backwards compatible. New hardware platforms supported the software from previous platforms. When a product was discontinued, customers were offered a path forward. (For example, IBM discontinued the System/36 minicomputers and offered the AS/400 line.)

IBM and Microsoft were the safe choices for IT, in part because they supported their customers.

Apple and Google, in contrast, have dropped products and services with no alternatives. Apple dropped .Mac. Google dropped their RSS reader. (I started this rant when I learned that Google dropped their conversion services from App Engine.)

I was about to chide Google and Apple for their inappropriate behavior when I thought of something.

Maybe I am wrong.

Maybe this new model of business (fast change, short product life) is the future?

What are the consequences of this business model?

For starters, businesses that rely on these products and services will have to change. These businesses can no longer rely on long product lifetimes. They can no longer rely on a guarantee of "a path forward" -- at least not with Apple and Google.

Yet IBM and Microsoft are not the safe havens of the past. IBM is out of the PC business, and getting out of the server business. Microsoft is increasing the frequency of operating system releases. (Windows 9 is expected to arrive in 2015. The two years of Windows 8's life are much shorter than the decade of Windows XP.) The "old school" suppliers of PC technology are gone.

Companies no longer have the comfort of selecting technology and using it for decades. Technology will "rev" faster, and the new versions will not always be backwards compatible.

Organizations with large IT infrastructures will find that their technologies are less homogeneous. Companies can no longer select a "standard PC" and purchase it over a period of years. Instead, every few months will see new hardware.

Organizations will see software change too. New versions of operating systems. New versions of applications. New versions of online services (software as a service, platform as a service, infrastructure as a service, web services) will occur -- and not always on a convenient schedule.

More frequent changes to the base upon which companies build their infrastructure will mean that companies spend more time responding to those changes. More frequent changes to the hardware will mean that companies have more variations of hardware (or they spend more time and money keeping everyone equipped with the latest).

IT support groups will be stressed as they must learn the new hardware and software, and more frequently. Roll-outs of internal systems will become more complex, as the target base will be more diverse.

Development groups must deliver new versions of their products on a faster schedule, and to a broader set of hardware (and software). It's no longer acceptable to deliver an application for "Windows only". One must include MacOS, the web, tablets, and phones. (And maybe Kindle tablets, too.)

Large organizations (corporations, governments, and others) have developed procedures to control the technology within (and to minimize costs). Those procedures often include standards, centralized procurement, and change review boards (in other words, bureaucracy). The outside world of suppliers and competitors cares not one whit about a company's internal bureaucracy, and it keeps changing.

The slow, sedate pace of application development is a thing of the past. We live in faster times.

"Slow and steady" used to win. The tortoise would, in the long run, win over the hare. Today, I think the hare has the advantage.