Thursday, January 8, 2015

Hardwiring the operating system

I tend to think of computers as consisting of four conceptual parts: hardware, operating system, application programs, and my data.

I know that computers are complex objects, and each of these four components has lots of subcomponents. For example, the hardware is a collection of processor, memory, video card, hard drive, ports to external devices, and "glue" circuitry to connect everything. (And even that is omitting some details.)

These top-level divisions, while perhaps not detailed, are useful. They allow me to separate the concerns of a computer. I can think about my data without worrying about the operating system. I can consider application programs without bothering with hardware.

It wasn't always this way. Oh, it was for personal computers, even those from the pre-IBM PC days. Hardware like the Altair was sold as a computing box with no operating system or software. Gary Kildall at Digital Research created CP/M to run on the various hardware available and designed it with a dedicated unit for interfacing with hardware. (That dedicated unit was the Basic Input-Output System, or 'BIOS'.)

The very early days of computing saw a close relationship between hardware, software, and data. The earliest computers had no operating systems (operating systems were themselves designed to separate the application program from the hardware). Computers were specialized devices, tailored to the task.

IBM's System/360 is recognized as the first general-purpose computer: a single computer that could be programmed for different applications and used within an organization for multiple purposes. That computer started us on the march to separate hardware and software.

The divisions are not simply for my benefit. Many folks who work to design computers, build applications, and provide technology services find these divisions useful.

The division of computers into these four components allows for any one of the components to be swapped out, or moved to another computer. I can carry my documents and spreadsheets (data) from my PC to another one in the office. (I may 'carry' them by sending them across a network, but you get the idea.)

I can replace a spreadsheet application with a different spreadsheet application. Perhaps I replace Excel 2010 with Excel 2013. Or maybe change from Excel to another PC-based spreadsheet. The new spreadsheet software may or may not read my old data, so the interchangeability is not perfect. But again, you get the idea.

More than half a century later, we are still separating computers into hardware, operating system, application programs, and data.

And that may be changing.

I have several computing devices. I have a few PCs, including one laptop I use for my development work and e-mail. I have a smart phone, the third I have owned. I have a bunch of tablets.

For my PCs, I have installed different operating systems and changed them over time. The one Windows PC started with Windows 7. I upgraded it to Windows 8 and it now runs Windows 8.1. My Linux PCs have all had different releases of Ubuntu, and I expect to update them with the next version of Ubuntu. Not only do I get major versions, but I receive minor updates frequently.

But the phones and tablets are different. The phones (an HTC and two Samsung phones) have run a single operating system since I took them out of the box. (I think one of them got an update.) One of my tablets is an old Viewsonic gTablet running Android 2.2. There is no update to a later version of Android -- unless I want to 'root' the tablet and install another variant of Android like Cyanogen.

PCs get new versions of operating systems (and updates to existing versions). Tablets and phones get updates for applications, but rarely for operating systems -- at least, nowhere near as frequently as PCs do.

And I have never considered (seriously) changing the operating system on a phone or tablet.

Part of this change is due, I believe, to the change in administration. We who own PCs administer the PC and decide when to update software. But we who think we own phones and tablets do *not* administer the tablet. We do not decide when to update applications or operating systems. (Yes, there are options to disable or defer updates, in Android and iOS.)

It is the equipment supplier, or the cell service provider, who decides to update operating systems on these devices. And they have less incentive to update the operating system than we do. (I suspect updates to operating systems generate a lot of calls from customers, either because they are confused or the update broke some functionality.)

So I see the move to smart phones and tablets, and its corresponding shift of administration from user to provider, as a step in synchronizing hardware and operating system. And once hardware and operating system are synchronized, they are not two items but one. We may, in the future, see operating systems baked into devices with no (or limited) ways to update them. Operating systems may be part of the device, burned into a ROM.

Tuesday, January 6, 2015

The cloud brings change

Perhaps the greatest effect that cloud computing has on operations is change -- or more specifically, a move away from the strict policies against change.

Before cloud technology, operations viewed hardware and software as things that should change rarely and under strictly controlled conditions. Hardware was upgraded only when needed, and only after long planning sessions to review the new equipment and ensure it would work. Software was updated only during specified "downtime windows", typically early in the morning on a weekend when demand would be low.

The philosophies of "change only when necessary" and "only when it won't affect users" were driven by the view of hardware and software. Before cloud computing, most people had a mindset that I call the "mainframe model".

In this "mainframe model," there is one and only one computer. In the early days of computing, this was indeed the case. Turning the computer off to install new hardware meant that no one could use it. Since the entire company ran on it, a hardware upgrade (or a new operating system) meant that the entire company had to "do without". Therefore, updates had to be scheduled to minimize their effect and they had to be carefully planned to ensure that everything would work upon completion.

Later systems, especially web systems, used multiple web servers and often multiple data centers with failover, but people kept the mainframe model in their head. They carefully scheduled changes to avoid affecting users, and they carefully planned updates to ensure everything would work.

Cloud computing changes that. With cloud computing, there is not a single computer. There is not a single data center (if you're building your system properly). By definition, computers (usually virtualized) can be "spun up" on demand. Taking one server offline does not affect your customers; if demand increases you simply create another server to handle the additional workload.

The ability to take servers offline means that you can relax the scheduling of upgrades. You do not need to wait until the wee hours of the morning. You do not need to upgrade all of your servers at once. (Although running servers with multiple versions of your software can cause other problems. More on that later.)
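
The idea of upgrading servers a few at a time can be sketched in a few lines. This is a simplified illustration, not any particular vendor's API; the pool, the server names, and the upgrade operation are all invented stand-ins for whatever your platform provides.

```python
# A sketch of a rolling upgrade: servers leave the pool one at a
# time, so capacity never drops by more than one server and there
# is no single "downtime window".
def rolling_upgrade(pool, upgrade):
    for server in list(pool):
        pool.remove(server)      # traffic shifts to the remaining servers
        upgrade(server)          # apply the new version while offline
        pool.add(server)         # rejoin the pool at the new version

pool = {"web1", "web2", "web3"}
upgraded = []
rolling_upgrade(pool, upgraded.append)
# every server was upgraded, and the pool ends at full strength
```

Note that during the loop the pool briefly contains a mix of old and new versions -- exactly the situation that requires versions to tolerate each other.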

Netflix has taken the idea further, creating tools that deliberately break individual servers. By doing so, Netflix can examine the failures and design a more robust system. When a single server fails, other servers take over the work -- or so Netflix hopes. If the surviving servers don't pick up the workload, Netflix has found a problem and changes its code.
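
The spirit of that approach fits in a toy experiment. This is only a sketch of the idea, not Netflix's actual tooling; the server names and the serve check are invented for illustration.

```python
import random

# A toy "chaos" experiment: remove one random server to simulate an
# abrupt failure, then check that the survivors still serve requests.
def chaos_test(servers, serve):
    victim = random.choice(sorted(servers))
    servers.discard(victim)                 # the simulated failure
    return all(serve(s) for s in servers)   # did the rest pick up the work?

servers = {"app1", "app2", "app3"}
survived = chaos_test(servers, lambda s: True)
# survived is True here; in a real system, a False result is the
# signal to go fix the failover code
```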

Cloud technology lets us use a new model, what I call the "cloud model". This model allows for failures at any time -- not just from servers failing but from servers being taken offline for upgrades or maintenance. Those upgrades could be hardware, virtualized hardware, operating systems, database schemas, or application software.

The cloud model allows for change. It requires a different set of rules. Instead of scheduling all changes for a single downtime window, it distributes changes over time. It mandates that newer versions of applications and databases work with older versions, which probably means smaller, more incremental changes. It also encourages (perhaps requires) software to administer the changes and ensure that all servers get the changes. Instead of growing a single tree you are growing a forest.
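
Version tolerance can be as simple as readers ignoring fields they don't recognize and supplying defaults for fields they expect but don't find. A minimal sketch, with invented field names, of how records written by a newer version stay readable by an older one:

```python
# Suppose v2 of the application added a "phone" field. A tolerant
# reader fills in defaults for missing fields and drops unknown ones,
# so mixed versions of the application can share the same records.
DEFAULTS = {"name": "", "email": "", "phone": ""}

def read_record(raw):
    record = dict(DEFAULTS)
    record.update({k: v for k, v in raw.items() if k in DEFAULTS})
    return record

old = read_record({"name": "Ada", "email": "ada@example.com"})
new = read_record({"name": "Ada", "email": "ada@example.com",
                   "phone": "555-0100"})
# old gets an empty "phone"; new carries the v2 field
```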

The fact that cloud technology brings change, or changes our ideas of changes, should not be a surprise. Other technologies and techniques (agile development, dynamic languages) have been moving in the same direction. Even Microsoft and Apple are releasing products more quickly. Change is upon us, whether we want it or not.

Tuesday, December 30, 2014

Agile is not for everyone

Agile development is a hot topic for many project managers. It is the latest management fad, following total cost of ownership (TCO) and total quality management (TQM). (Remember those?)

So here is a little bit of heresy: Agile development methods are not for every project.

To use agile methods effectively, one must understand what they offer -- and what they don't.

Agile methods make one guarantee: that your system will always work, for the functionality that has been built. Agile practices (stakeholder involvement, short development cycles, automated testing) ensure that the features you build will work as you expect. Even as you add new features, your automated tests ensure that old features still work. Thus, you can always send the most recent version of your system to your customers.
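
That guarantee rests on the automated tests: each feature keeps its tests as new features arrive, so regressions surface immediately. A minimal sketch using Python's unittest module; the price_with_tax function and its tax rate are invented for illustration.

```python
import unittest

def price_with_tax(price, rate=0.06):
    """An early feature: compute a price including sales tax."""
    return round(price * (1 + rate), 2)

class RegressionTests(unittest.TestCase):
    # Written when the feature was built; it runs on every build, so
    # later features cannot silently break this one.
    def test_price_with_tax(self):
        self.assertEqual(price_with_tax(100.00), 106.00)

# Run the suite programmatically, as a build server would on every
# check-in.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```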

But agile methods don't make every promise. Specifically, they don't promise that all of your desired features will be available (and working) on a specific date. You may have a list of one hundred features; agile lets you implement those features in a desired order but does not guarantee that they will all be completed by an arbitrary date. The guarantee is only that the features implemented by that date will work. (And you cannot demand that all features be implemented by that date -- that's not agile development.)

Agile does let you make projections about progress, once you have experience with a team, the technology, and a set of features for a system. But these projections must be based on experience, not on "gut feel". Also, the projections are just that: projections. They are estimates and not guarantees.
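
Such a projection can be as simple as dividing remaining work by measured velocity. A sketch, with invented numbers; "points" here stand in for whatever unit of work your team measures.

```python
import math

# A velocity-based projection: use measured points per iteration --
# experience, not gut feel -- to estimate (not guarantee) how many
# iterations remain.
def projected_iterations(remaining_points, points_per_iteration):
    velocity = sum(points_per_iteration) / len(points_per_iteration)
    return math.ceil(remaining_points / velocity)  # partial iterations round up

# Three past iterations averaged 10 points; 45 points remain.
estimate = projected_iterations(45, [8, 10, 12])
# estimate is 5 iterations -- an estimate, not a commitment
```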

Certain businesses want to commit to specific features on specific dates, perhaps to deliver a system to a customer. If that is your business, then you should look carefully at agile methods and understand what they can provide. It may be that the older "waterfall" methods, which do promise a specific set of features on a specific date, are a better match.

Friday, December 26, 2014

Google, Oracle, and Java

Apple has a cozy walled garden for its technology: Apple devices running Apple operating systems and Apple-approved apps written in Apple-controlled languages (Objective-C and now Swift).

Microsoft is building a walled garden for its technology. Commodity devices with standards set by Microsoft, running Microsoft operating systems and apps written in Microsoft-controlled languages (C#, F#, and possibly VB.NET). Microsoft does not have the same level of control over applications as Apple; desktop PCs allow anyone with administrator privileges to install any app from any source.

Google has a walled garden for its technology (Android), but its control is less than that of Apple or Microsoft. Android runs on commodity hardware, with standards set by Google. Almost anyone can install apps on their Google phone or tablet. And interestingly, the Android platform apps run in Java, a language controlled by... Oracle.

This last aspect must be worrying to Google. Oracle and Google have a less than chummy relationship, with lawsuits about the Java API. Basing a walled garden on someone else's technology is risky.

What to do? If I were Google, I would consider changing the language for the Android platform. That's not a small task, but the benefits may outweigh the costs. Certainly their current apps would have to be rewritten for the new language. A run-time engine would have to be included in Android. The biggest task would be convincing the third-party developers to change their development process and their existing apps. (Some apps may never be converted.)

Which language to pick? That's an easy call. It should be a language that Google controls: Dart or Go. Dart is designed as a replacement for JavaScript, yet could be used for general applications. Go is, in my opinion, the better choice. It *is* designed for general applications, and includes support for concurrency.

A third candidate is Python. Google supports Python in their App Engine cloud platform, so they have some familiarity with it. No one company controls it (Java was controlled by Sun prior to Oracle) so it is unlikely to be purchased.

Java was a good choice for launching the Android platform. I think the languages Go and Python are better choices for Android now.

Let's see what Google thinks.

Sunday, December 21, 2014

Technology fragmentation means smaller, simpler systems

In the past, IT shops standardized their technologies, often around the vendor deemed the industry leader. In the 1960s and 1970s, that leader was IBM. They offered products for all of your computing needs, from computers to terminals to printers and even punch cards.

In the 1980s and 1990s, it was Microsoft. They offered products for all of your computing needs, from operating systems to compilers to office suites to user management. (Microsoft offered little in hardware, but then hardware was considered a commodity and available from multiple sources.)

Today, things are not so simple. No single vendor provides products and services for "all of your computing needs". The major vendors are Microsoft, Apple, Amazon.com, Google, and a few others.

Microsoft has a line of offerings, but it is weak in the mobile area. Sales of Microsoft tablets, Microsoft phones, and Windows Mobile are anemic. Anyone who wants to offer services in the mobile market must deal with either Apple or Google (and preferably both, as neither has a clear lead).

Apple has a line of offerings, but is weak in the enterprise area. They offer tools for development of apps to run on their devices but little support for server-side development. Anyone who wants to offer services that use server-side applications must look to Microsoft or Google.

Amazon.com offers cloud services and consumer devices (the Kindle) but is weak on development tools and transaction databases. Google offers cloud services and consumer devices as well, but lacks the enterprise-level administration tools.

Complicating matters is the plethora of open-source tools, many of which are not tied to a specific vendor. The Apache web server, the Perl and Python languages, several NoSQL databases, and development tools are available but not with the blessings (and support) from vendors.

Development teams must now cope with the following:

Browsers: Internet Explorer, Chrome, Firefox, Safari, and possibly Opera
Desktop operating systems: Windows (versions 7, 8, and 10), MacOS X, Linux (Ubuntu, SuSE, and Red Hat)
Platforms: desktop, tablet, phone
Mobile operating systems: iOS, Android, and possibly Blackberry and Windows
Database technologies: SQL and NoSQL
HTTP servers: Apache, NGINX, and IIS
Programming languages: C#, Java, Swift, Python, Ruby, and maybe C++ or C
Cloud platforms: Amazon.com AWS, Microsoft Azure, Google cloud
Cloud paradigms: public cloud, private cloud, or hybrid

I find this an impressive list. You may have some factors of your own to add. (Then the list is even more impressive.)

This fragmentation of technology affects your business. I can think of several areas of concern.

The technology for your systems: You have to decide which technologies to use. I suppose you could pick all of them, using one set of technology for one project and another set of technology for another project. That may be useful in the very short term, but may lead to an inconsistent product line. For example, one product may run on Android phones and tablets (only), and another may run on Apple phones and tablets (only).

Talent for that technology: Staffing teams is an on-going effort. If your project uses HTML 5, CSS, JavaScript, and Python with a NoSQL database, you will need developers with that set of skills. But developers don't know everything (even though some may claim that they do) and you may find few with that exact set of technologies. Are you willing to hire someone without one of your desired skills and let that person learn it?

Mergers and acquisitions: Combining systems may be tricky. If you acquire a small firm that uses native Android apps and a C#/.NET server-side system, how do you consolidate that system into your HTML, CSS, JavaScript, Python shop? Or do you maintain two systems with distinct technologies? (Refer to "inconsistent offerings", above.)

There are no simple answers to these questions. Some shops will standardize on a set of technologies, combining offerings from multiple vendors. Some will standardize on a vendor, hoping it becomes the industry leader that sets the standard for the market. Many will probably have heated arguments about their selections, and some individuals may leave, staying more loyal to the technology than to the employer.

My advice is to keep informed, set standards when necessary, and keep systems small and simple. Position your technology to shift with changes in the industry. (For example, native apps on Apple devices will shift from Objective-C to Swift.) If your systems are large and complicated, redesign them to be smaller and simpler.

Build and maintain systems with the idea that they will change.

They probably will. Sooner than you think.

Tuesday, December 16, 2014

Goodbye, Dr. Dobbs

Today the Dr. Dobbs website, successor to the august publication of the same name, announced that it was going out of business.

It is an announcement that causes me some sadness. Dr. Dobbs was the last of the "originals", the publications of the Elder Days before the web, before the internet, before Windows, and even before the IBM PC.

It was a very different time. Computers were slow, low-powered, expensive, and rare. The personal computers at the time used processors that ran at 1 MHz or 2 MHz, had (perhaps) 64KB of RAM, and often stored data on cassette tapes. A typical computer system cost anywhere from $1000 to $4000 (in 1980 dollars!).

Magazines like Dr. Dobbs were the lifeblood of the industry. They provided news, product announcements, reviews, and articles on theory and on practice. In a world without the internet and web sites, magazines were the way to learn about these strange new devices called computers.

Today, computers are fast, powerful, cheap, and common. So common and so cheap that people leave working computers in the trash. So fast and powerful that we no longer care (much) about the processor speed or memory size.

Not only are computers plentiful, but information about computers is plentiful. Various web sites provide news, opinion, and technical information. We don't need a single "go to" site for all of that information; Google can find it for us.

So, goodbye to Dr. Dobbs. You served us well. You helped us through a difficult time, and shared information with many. You were one of the factors in the success of those early days, and therefore the success of the PC industry today. You will be missed.

Sunday, December 14, 2014

Software doesn't rot - or does it?

Is it possible for software to rot? At first glance, it seems impossible. Software is one of the more intangible constructs of man. It is not exposed to the elements. It does not rust or decompose. Software, in its executable form, is nothing more than digitally-stored bits. Even its source form is digitally-stored bits.

The hardware on which those bits are stored may rot. Magnetic tapes and disks lose their field impressions, and their flexibility, over time. Flash memory used in USB thumb drives lasts a long time, but it too can succumb to physical forces. Even punch cards can burn, become soggy, and get eaten by insects. But software itself, being nothing more than ideas, lasts forever.

Sort of.

Software doesn't rot, rust, decompose, or change. What changes are the items that surround the software, that interact with the software. These change over time. The changes are usually small and gradual. When those items change enough to affect the software, we notice.

What are these items?

User expectations: Another intangible changing relative to the intangible software. Users, along with product owners, project managers, support personnel, and others, all have expectations of software. Those expectations are formed not just from the software and its operating environment, but also from other software performing similar (or different) tasks.

The discovery of defects: Software is complex, and most software has some number of defects. We attempt to build software free of defects, but the complexity of computer systems (and the business rules behind computer systems) makes that difficult. Often, defects are built into the system and remain unnoticed for weeks, months, or even years. The discovery of a defect is a bit of "rot".

The business environment: New business opportunities, new markets, and new product lines can cause new expectations of software. Changes in law, from new regulations to a different rate for sales tax, can affect software.

Hardware: Software tends to outlive hardware. Moving from one computer to a later model can expose defects. Games for the original IBM PC failed (or worked poorly) on the IBM PC AT with its faster processor. Microsoft Xenix ran on the Intel 80286 but not the 80386 because it used reserved flags in the processor status word. (The 80286 allowed the use of reserved bits; the 80386 enforced the rules.) The install program for Wordstar, having worked for years, would fail on PCs with more than 512K of RAM (this was in the DOS days) -- a lurking defect exposed by a change in hardware.

The operating system: New versions of an operating system can fix defects in the old version. If application software took advantage of that defect (perhaps to improve performance) then the software fails on the later version. Or a new version implies new requirements, such as the requirements for branding your software "Windows-95 ready". (Microsoft imposed several requirements for Windows 95 and many applications had to be adjusted to meet these rules.)

The programming language: Microsoft made numerous changes to Visual Basic. Many new versions broke the code from previous versions, causing developers to scramble and devote time to incorporating the changes. The C++ and C# languages have been stable, but even they have had changes.

Paradigm shifts: We saw large changes when moving from DOS to Windows. We saw large changes when moving from desktop to web. We're seeing large changes with the move from web to mobile. We want software to operate in the new paradigm, but new paradigms are quite different from old ones and the software must change (often drastically).

The availability of talent: Languages and programming technologies rise and fall in popularity. COBOL was once the standard language of business applications; today there are few who know it and fewer who teach it. C++ is following a similar path, and the rise of NoSQL databases means that SQL will see a decline in available talent. You can stay with the technology, but getting people to work on it will become harder -- and more expensive.

All of these factors surround software (with the exception of lurking defects). The software doesn't change, but these things do, and the software moves "out of alignment" with the needs and expectations of the business. Once out of alignment, we consider the software to be "defective" or "buggy" or "legacy".

Our perception is that software "gets out of alignment" or "rots". It's not quite true -- the software has not changed. The surrounding elements have changed, including our expectations, and we perceive the changes as "software rot".

So, yes, software can rot, in the sense that it does not keep up with our expectations.