Sunday, April 28, 2013

C++ without source (cpp) files

A thought experiment: can we have C++ programs without source files (that is, without .cpp files)?

The typical C++ program consists of header files (.h) and source files (.cpp). The header files define the classes, and the source files define the member functions (the implementations).

Yet the C++ language allows one to define function implementations in the header files. We typically see this only for short functions. To wit:

random_file.h

class random_class
{
private:
    int foo_;
public:
    random_class( int foo ) : foo_(foo) { }
    int foo( void ) const { return foo_; }
};

This code defines a small class that contains a single value; it has a constructor and a single read-only accessor, and nothing more. The sole member variable is initialized in the constructor.

Here's my idea: Using the concepts of functional programming (namely immutable variables that are initialized in the constructor), one can define a class as a constructor and a bunch of read-only accessors.

If we keep class size to a minimum, we can define all classes in header files. The constructors are simple, and the accessor functions simply return calculated values. There is no need for long methods.
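As a rough sketch (the class name, money_amount, and its members are invented for illustration), such a "perfect class" might look like this, living entirely in its header file:

money_amount.h

class money_amount
{
private:
    const int dollars_;
    const int cents_;
public:
    money_amount( int dollars, int cents )
        : dollars_(dollars), cents_(cents) { }
    int dollars( void ) const { return dollars_; }
    int cents( void ) const { return cents_; }
    // an accessor that returns a calculated value
    int total_cents( void ) const { return dollars_ * 100 + cents_; }
};

The members are immutable, the constructor does all of the initialization, and the accessors merely return stored or calculated values. There is nothing here that demands a .cpp file.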

(Yes, we could define long functions in headers, but that seems to be cheating. We allow short functions in headers and exile long functions into .cpp files.)

Such a design is, I think, possible, although perhaps impractical. It may be similar to the chemists' "perfect gas", an abstraction that is nice to conceive but unseen in the real world.

Yet a "perfect gas" of a class (perhaps a "perfect class") may be possible for some classes in a program. Those perfect classes would be small, with few member variables and only accessor functions. Its values would be immutable. The member variables may be objects of smaller classes (perhaps perfect classes) with immutable values of their own.

This may be a way to improve code quality. My experience shows that immutable objects are much easier to code, to use, and to debug. If we build simple immutable classes, then we can code them in header files and we can discard the source files.

Coding without source files -- now there is an idea for the future.

Saturday, April 27, 2013

"Not invented here" works poorly with cloud services

This week, colleagues were discussing the "track changes" feature of Microsoft Word. They are building an automated system that uses Word at its core, and they were encountering problems with the "track changes" feature.

This problem led me to think about system design.

Microsoft Word, while it has a COM-based engine and a separate UI layer, is a single, all-in-one solution for word processing. Every function that you need (or that Microsoft thinks you need) is included in the package.

This design has advantages. Once Word is installed, you have access to every feature. All of the features work together. A new version of Word upgrades all of the features -- none are left behind.

Yet this design is a form of "not invented here". Microsoft supplies the user interface, the spell-check engine and dictionary, the "track changes" feature, and everything else. Even when there were other solutions available, Microsoft built their own. (Or bought an existing solution and welded it into Word.)

Word's design is also closed. One cannot, for example, replace Microsoft's spell-checker with another one. Nor can one replace the "track changes" feature with a separate version control system. You are stuck with the entire package.

This philosophy worked for desktop PC software. It works poorly with cloud computing.

In cloud computing, every feature in your system is a service. Instead of a monolithic program, a system is a collection of services, each providing some small amount of well-defined processing. Cloud computing needs this design to scale to larger workloads; you can add more servers for the services that see more demand.

With a system built of services, you must decide on the visibility of those services. Are they open to all? Closed to only your processes? Or do you allow a limited set of users (perhaps subscribers) to use them?

Others must make this decision too. The US Postal Service may provide services for address validation, separate and independent from mailing letters. Thus companies like UPS and FedEx may choose to use those services rather than build their own.
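To make the idea concrete, here is a minimal sketch in C++ (the interface and its names are invented; it references no real service or vendor API) of how a system might depend on an address-validation service without caring who provides it:

#include <string>

// A hypothetical interface for address validation. One implementation
// might call an external service (such as one offered by a postal
// carrier); another might wrap an in-house rules engine.
class address_validator
{
public:
    virtual ~address_validator() { }
    virtual bool is_valid( const std::string & address ) const = 0;
};

Client code written against the interface does not change when the build-versus-buy decision changes; only the implementation behind it does.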

Some companies already do this. Twitter provides information via its API. Lots of start-ups provide information and data.

Existing companies and organizations provide data, or will do so in the future. The government agency NOAA may provide weather information. The New York Stock Exchange may provide stock prices and trade information (again, perhaps only to subscribers). Banks may provide loan payment calculations.

You can choose to build a system in the cloud with only your data and services. Or you can choose to use data and services provided by others. Both have advantages (and risks).

But the automatic reflex of "not invented here" has no place in cloud system design. Evaluate your options and weigh the benefits.

Wednesday, April 24, 2013

Perhaps history should be taught backwards

A recent trip to the local computer museum (a decent place with mechanical computation equipment, various microcomputers and PCs, a DEC PDP-8/m and PDP-12, and a Univac 460) gave me time to think about our techniques for teaching history.

The curator gave us a tour, and he started with the oldest computing devices in his collection. Those devices included abaci, Napier's Bones, a slide rule, and electro-mechanical calculators. We then progressed forward in time, looking at the Univac and punch cards, the PDP-8 and PDP-12 and Teletype terminals, then the Apple II and Radio Shack TRS-80 microcomputers, and ended with modern-day PCs and tablets.

The tour was a nice, orderly progression through time ...

And perhaps the wrong sequence.

A number of our group were members of the younger set. They had worked with smart phones and iPads and PCs, but nothing earlier than that. I suspect that for them, the early parts of the tour -- the early computation technologies -- were difficult.

Computing technologies have changed over time. Even the concept of computing has changed. Early devices (abaci, slide rules, and even hand-held calculators) were used to perform mathematical operations; the person knew the theory and overall purpose of the computations.

Today, we use computing devices for many purposes, and the underlying computations are distant (and hidden) from the user. Our purposes are not merely word processing and spreadsheets, but web pages, Twitter feeds, and games. (And blog posts.)

The technologies we use to calculate are different: today's integrated circuits do the work of the large discrete electronics of the 1960s, which did the work of the many wheels and cogs of a mechanical calculator. Even storage has changed: today's flash RAM holds data; in the 1970s it was core memory; in the 1950s it was mercury delay lines.

The changes in technology were mostly gradual, with a few large jumps. Yet the technologies of today are sufficiently different from the early technologies that one is hard to recognize in the other. Moving from today back to the beginning requires a big jump in understanding.

Which is why I question the sequence of history. For computing technology, starting at the beginning requires a good understanding of the existing technology and techniques, and even then it is hard to see how an abacus or slide rule relates to today's smart phone.

Perhaps we should move in the reverse direction. Perhaps we should start with today's technology, on the assumption that people know about it, and work backwards. We can move to slightly older systems and compare them to today's technology. Then repeat the process, moving back into the past.

Storage is a good example. Showing someone a punch card means very little (unless they know the history). But leading that same person backwards through the technology (SD RAM chips, USB memory sticks, CD-ROMs, floppy disks, early floppy disks, magnetic drums, magnetic tape, paper tape, and eventually punch cards) might give that person an easier time. For someone who does not know the tech, studying something close to current technology (CD-ROMs) and then learning about the previous generation (floppy disks) avoids the big leap into the past.

Sunday, April 21, 2013

The post-PC era is about coolness or lack thereof

Some have pointed to the popularity of tablets as an indicator of the imminent demise of the PC. I look at the "post PC" era not as the death of the PC, but as something worse: PCs have become boring.

Looking back, we can see that PCs started with lots of excitement and enthusiasm, yet that excitement has diminished over time.

First, consider hardware:

  • Microcomputers were cool (even with just a front panel and a tape drive for storage)
  • ASCII terminals were clearly better than front panels
  • Storing data on floppy disks was clearly better than storing data on tape
  • Hard drives were better than floppy disks
  • Color monitors were better than monochrome displays
  • High resolution color monitors were better than low resolution color monitors
  • Flat panel monitors were better than CRT monitors

These were exciting improvements. These changes were *cool*. But the coolness factor has evaporated. Consider these new technologies:

  • LED monitors are better than LCD monitors, if you're tracking power consumption
  • Solid state drives are better than hard drives, if you look hard enough
  • Processors after the original Pentium are nice, but not excitingly nice

Consider operating systems:

  • CP/M was exciting (as anyone who ran it could tell you)
  • MS-DOS was clearly better than CP/M
  • Windows 3.1 on DOS was clearly better than plain MS-DOS
  • Windows 95 was clearly better than Windows 3.1
  • Windows NT (or 2000) was clearly better than Windows 95 (or 98, or ME)

But the coolness factor declined with Windows XP and its successors:

  • Windows XP was *nice* but not *cool*
  • Windows Vista was not clearly better than Windows XP -- and many have argued that it was worse
  • Windows 7 was better than Windows Vista, in that it fixed problems
  • Windows 8 is (for most people) not cool

The loss of coolness is not limited to Microsoft. A similar effect happened with Apple's operating systems.

  • DOS (Apple's DOS for Apple ][ computers) was cool
  • MacOS was clearly better than DOS
  • MacOS 9 was clearly better than MacOS 8
  • Mac OSX was clearly better than MacOS 9

But the Mac OSX versions have not been clearly better than their predecessors. They have some nice features, but the improvements are small, and a significant number of people might say that the latest OSX is not better than the prior version.

The problem for PCs (including Apple Macintosh PCs) is the loss of coolness. Tablets are cool; PCs are boring. The "arc of coolness" for PCs saw its greatest rise in the 1980s and 1990s, a moderate rise in the 2000s, and now sees decline.

This is the meaning of the "post PC era". It's not that we give up PCs. It's that PCs become dull and routine. PC applications become dull and routine.

It also means that there will be few new things developed for PCs. In a sense, this happened long ago, with the development of the web. Then, the Cool New Things were developed to run on servers and in browsers. Now, the Cool New Things will be developed for the mobile/cloud platform.

So don't expect PCs and existing PC applications to vanish. They will remain; it is too expensive to re-build them on the mobile/cloud platform.

But don't expect new PC applications.

Welcome to the post-PC era.

Thursday, April 18, 2013

Excel is the new BASIC

BASIC is a language that, to quote Rodney Dangerfield, gets no respect.

Some have quipped that "those whom the gods would destroy... they first teach BASIC".

COBOL may be disparaged, but only to a limited extent. People, deep down, know that COBOL is running many useful systems. (Things like banking, airline reservations, and, perhaps most importantly, payroll.) COBOL does work, and we respect it.

BASIC, on the other hand, tried to be useful but never really made it. Despite Microsoft's attempt with its MBASIC product, Digital Research with its CBASIC compiler, Digital Equipment Corporation with its various implementations of BASIC, and others, BASIC always took second place to other programming languages. For microcomputers, those languages were assembly language, Pascal, and C.

(I'm limiting this to interpreted BASIC, the precursor to Visual Basic. Microsoft's Visual Basic was capable and popular. It was used for many serious applications, some of which are probably still running today.)

BASIC's challenge was its design. It was a language for learning the concepts of programming, not for building large, serious programs. The name itself confirms this: Beginner's All-purpose Symbolic Instruction Code.

More than the name, the constructs of the programming language are geared for small programs. This is due to the purpose of BASIC (a better FORTRAN for casual users) and the timing of BASIC (the nascent "structured programming" movement had yet to prove itself).

Without the constructs of structured programming ("while" loops and "if/then/else" statements), programmers must either assemble structured concepts out of smaller elements or build unstructured programs. BASIC allows you to build structured programs, but provides no assistance. Worse, BASIC relies on GOTO to build most control flows.

In contrast, modern programming languages such as Java, C#, Python, and Ruby provide the constructs for structured programming and discourage (or omit entirely) the GOTO statement.
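To see what the difference means in practice, here is a small sketch, written in C++ (which still permits goto), of the same counting loop built BASIC-style with a label and goto, and then with a structured while loop:

#include <iostream>

int main()
{
    // BASIC-style flow: the loop is built from a label, a test, and a goto.
    int i = 1;
loop:
    std::cout << i << '\n';
    ++i;
    if (i <= 5) goto loop;

    // Structured flow: the same loop expressed with a while statement.
    int j = 1;
    while (j <= 5)
    {
        std::cout << j << '\n';
        ++j;
    }

    return 0;
}

In the first form, the control flow exists only in the reader's head; in the second, the while statement states it directly.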

The people who learned to program in BASIC (and I am one of them) learned to program poorly, and we have paid a heavy price for it.

But what does this have to do with Microsoft Excel?

Excel is the application taught to people for managing data. (Microsoft Word is suitable for documents, and Powerpoint is suitable for presentations, but Excel is *the* application for data. I suspect more people know and use Excel than know and use Word, Powerpoint, and Access combined.)

Excel offers the same undisciplined approach to applications. Spreadsheets contain data and formulas (and VBA macros, but I will ignore those for now).

One might argue that Excel is a spreadsheet, different from a programming language such as BASIC. Yet the differences are small. Excel, with its formulas alone, is a programming system if not a language.

The design of Excel (and other spreadsheets, going back to Visicalc) provides no support for structure or discipline. Formulas can collect data from anywhere in the spreadsheet. There is no GOTO keyword, but one can easily build a tangled mess.

Microsoft Excel is the new BASIC: useful, popular, and undisciplined. Worse than BASIC, since Excel is the premier tool for manipulating data. BASIC, for all of its flaws, was always second to some other language.

In one way, Excel is not as bad as BASIC. Formulas may collect data from any location in the spreadsheet, but they (for the most part) modify only their own contents. This provides a small amount of order to spreadsheet-programs.

We need a new paradigm for data management. Just as programming had its "structured programming" movement, which led to constructs that improved the reliability and readability of programs, spreadsheets need a new approach to the organization of data and the types of formulas that can be used on that data.

Tuesday, April 16, 2013

File Save No More

The new world of mobile/cloud is breaking many conventions of computer applications.

Take, for example, the long-established command to save a file. In Windows, this has been the menu option File / Save, or the keyboard shortcut CTRL-S.

Android apps do not have this sequence. In fact, they have no sequence to save data. Instead, they save your data as you enter it, or when you dismiss a dialog.

Not only Android apps (and, I suspect, iOS apps), but Google web apps exhibit this behavior too. Use Google Drive to create a document or a spreadsheet, and you will see that your changes are saved as you make them.

Breaking the "save file" concept allows for big changes. It lets us get rid of an operation. It lets us get rid of menus.

It also lets us get rid of the concept of a file. We don't need files in the cloud; we need data. This data can be stored in files (transparently to us), or in a database (also transparently to us), or in a NoSQL database (also transparently to us).

We don't care where the data is stored, or which container (filesystem or database) is used.

We do care about getting the data back.

I suspect that we will soon care about previous versions of our data.

Windows has add-ins for retrieving older versions of data. I have used a few, and they tend to be "hacks": things bolted on to Windows and clumsy to use. They don't save every version; instead, they keep snapshots at scheduled times.

Look for real version management in the cloud. Google, with its gigabytes of storage for each e-mail user, will be able to keep older versions of files. (Perhaps they are already doing it.)

The "File / Save" command will be replaced with the "File Versions" list, letting us retrieve an old version of the file. The list will show each and every revision of the file, not just the versions captured at scheduled times.

Once a major player offers this feature, other players will have to follow.

Monday, April 15, 2013

The (possible) return of chargebacks

Many moons ago, computers were large expensive beasts that required much care and attention. Since they were expensive, only large organizations could purchase (or lease) them, and those large organizations monitored their use. The companies and government agencies wanted to ensure that they were spending the right amount of money. A computer system had to provide the right amount of computations and storage; if it had excess capacity, you were spending too much.

Some time later (moons ago, but not so many moons) computers became relatively cheap. Personal computers were smaller, easier to use, and much less expensive. Most significantly, they had no way to monitor utilization. While some PCs were more powerful than others, there were no measurements of their actual use. It was common to use personal computers for only eight hours a day. (More horrifying to the mainframe efficiency monitors, some people left their PCs powered on overnight, when they were performing no work.)

Cloud technologies let us worry about utilization factors again, and we do monitor them. This is a big change from the PC era, when we cared little for utilization rates.

Perhaps we monitor cloud technologies (virtual servers and such) because they are metered; we pay for every hour that they are active.

If we start worrying about utilization rates for cloud resources, I suspect that we will soon bring back another habit from the mainframe era: chargebacks. For those who do not remember them, chargebacks are mechanisms to charge end-user departments for computing resources. Banks, for example, would have a single mainframe used by different departments. With chargebacks, the bank's IT group allocates the expenses of the mainframe, its software, storage, and network usage to the user departments.

We did not have chargebacks with PCs, or with servers. It was a blissful era in which computing power was "too cheap to meter". (Or perhaps too difficult to meter.)

With cloud technologies, we may just see the return of chargebacks. We have the ability, and we probably will do it. Too many organizations will see it as a way of allocating costs to the true users.

I'm not sure that this is a good thing. Organizational "clients" of the IT group should worry about expenses, but chargebacks provide an incomplete picture of expenses. They are good at reporting the expenses incurred, but they deter cooperation across departments. (This builds silos within the organization, since I as a department manager do not want people from other departments using resources "on my dime".)

Chargebacks also force changes to project governance. New projects are viewed not only in the light of development costs, but also in the chargeback costs (which are typically the monthly operations costs). If these monthly costs are known, then this analysis is helpful. But if these costs are not known but merely estimated, political games can erupt between the department managers and the cost estimators.

I don't claim that chargebacks are totally evil. But chargebacks are not totally good, either. Like any tool, they can help or harm, depending on their use.

Saturday, April 13, 2013

Higher-level constructs can be your friends

Leo Brodie, author of the Forth books "Starting Forth" and "Thinking Forth", once said: "I wouldn't write programs in Forth. I would build a new language in Forth, one suitable for the problem at hand. Then I would write the program in that language."

Or something like that.

The idea is a good one. Programming languages are fairly low level, dealing with small-grain concepts like 'int' and 'char'. Building a higher level of abstraction helps you focus on the task at hand, and worry less about details.

We have implemented this tactically with several programming constructs.

First were libraries: blocks of commonly used functions. All modern languages have "the standard libraries", from C to C++ to Java to C# to Python.

Object-oriented programming languages were another step, tactically. They promised the ability to "represent real-world concepts" in programs.

Today we use "domain specific languages". The idea is the same -- tactically.

I keep qualifying my statements with "tactically" because, for all of our efforts, we (the programming industry as a whole) tend not to use these techniques at a strategic level.

We use the common libraries. Any C programmer has used the strxxx() functions, along with the printf() and scanf() family of functions. C++, Java, and C# programmers use the standard-issue libraries and APIs for those languages. We're good at using what is put in front of us.

But very few projects use any of these techniques (libraries, classes, and DSLs) to create higher-level constructs of their own. Most projects use the basic language and the constructs from the supplied framework (MFC, WPF, Struts, etc.) but build very little above these levels.

Instead of following Leo Brodie's advice, projects take the language, libraries, and frameworks and build with those constructs -- and only those constructs. The code of the application -- specifically the business logic -- is built in language-level constructs. There is (often) no effort in building libraries or classes that raise the code to a level closer to business logic.

This low-level design leads to long-term problems. The biggest problem is complexity, in the form of code that manipulates ideas at multiple levels. Code that calculates mortgage interest, for example, also includes logic for manipulating arrays of payment amounts and payment dates. The result is code that is hard to understand: when reading it, one must mentally jump up and down from one level of abstraction to another.
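As a rough illustration (the payment class and function names here are invented), compare interest code written directly against parallel arrays with the same code written against a small domain class:

#include <cstddef>
#include <vector>

// Low-level version: business logic tangled up with array bookkeeping.
double total_interest_low_level( const std::vector<double> & payments,
                                 const std::vector<double> & principal_portions )
{
    double total = 0.0;
    for (std::size_t i = 0; i < payments.size(); ++i)
        total += payments[i] - principal_portions[i];
    return total;
}

// Higher-level version: a small class that speaks the language of the domain.
class payment
{
private:
    const double amount_;
    const double principal_;
public:
    payment( double amount, double principal )
        : amount_(amount), principal_(principal) { }
    double interest( void ) const { return amount_ - principal_; }
};

double total_interest( const std::vector<payment> & schedule )
{
    double total = 0.0;
    for (std::size_t i = 0; i < schedule.size(); ++i)
        total += schedule[i].interest();
    return total;
}

The second version reads at the level of the business (payments and their interest), not at the level of array bookkeeping.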

A second problem is the use of a supplied object as a stand-in for something merely similar. In Windows MFC programs, many systems used CString objects to hold directory and file names. This is convenient for the initial programmer, but painful for the programmers who follow. A CString object is not a file name, and it has operations that make no sense for file names. (The later .NET framework provided much better support for file and path names.) Here, when reading the code, one must constantly remember that the object in view is not used as a normal object of its type, but as a special case with only certain operations allowed. This imposes work on the reader.
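A minimal sketch of the alternative (a hypothetical wrapper type, not a class from MFC or any real framework): a small class that holds a file name and exposes only the operations that make sense for one:

#include <string>

// A hypothetical file_name type: it stores the text of a file name
// but exposes only file-name operations, not general string operations.
class file_name
{
private:
    const std::string name_;
public:
    explicit file_name( const std::string & name ) : name_(name) { }
    const std::string & text( void ) const { return name_; }
    std::string extension( void ) const
    {
        const std::string::size_type dot = name_.rfind('.');
        return (dot == std::string::npos) ? std::string() : name_.substr(dot + 1);
    }
};

A reader of code that uses file_name never has to remember which string operations are legitimate; the type itself says so.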

Given these costs of using low-level constructs, why do we avoid higher-level constructs?

I have a few ideas:

Building higher-level constructs is hard: It takes time and effort to build good (that is, useful and enduring) constructs. It is much easier (and faster) to build a program with the supplied objects.

Constructs require maintenance: Once "completed", constructs must be modified as the business changes. (Code built from low-level objects needs to be modified too, but managers and programmers seem more accepting of these changes.)

Obvious ramp-up time: Business-specific high-level constructs are specific to that business, and new hires must learn them. But with programs built with low-level constructs, new hires can be productive immediately, since the system is all "common, standard" code. (Systems have non-obvious ramp-up time, as new hires "learn the business", but managers seem to accept -- or ignore -- that cost.)

Can create politics: Home-grown libraries and frameworks can create political conflicts, driven by ego or overly-aggressive schedules. This is especially possible when the library becomes a separate project, used by other (client) projects. The manager of the library project must work well with the managers of the client projects.

These are not technical problems. These challenges are to the management of the projects. (To some extent, there are technical challenges in the design of the library/class/framework, but these are small compared to the managerial issues.)

I still like Leo Brodie's idea. We can do better than we currently do. We can build systems with better levels of abstraction.

Wednesday, April 10, 2013

The Future of your Windows XP PC

Suppose you have one (or several) PCs running Windows XP. Microsoft has announced the end-of-life date for Windows XP (about a year from now). What to do?

You have several options:

Upgrade to Windows 8: This probably requires new hardware, since Windows 8 demands a bit more of the hardware than Windows XP does. If you want to use a touchscreen, you are looking not at upgrading your PC to Windows 8 but at replacing all of the hardware.

Windows 8 uses the new "Modern/Metro" UI which is a significant change from Windows XP. Your users may find the new interface unfamiliar.

Upgrade to Windows 7: Like Windows 8, Windows 7 probably requires new hardware. You are replacing your PC, not upgrading it. (Perhaps you keep the monitor, mouse, and keyboard.)

The UI in Windows 7 is closer to Windows XP, but there are still changes. The user experience is quite close to Windows XP's; it is the system administrator who will notice the differences.

Switch to Mac: Instead of upgrading to Windows 8 or Windows 7, you can switch to an Apple Macintosh PC running OSX. This requires new versions of your software. Now you are replacing hardware and software, hardly a simple upgrade.

The user interface and administration of OSX is different from Windows, another cost of conversion.

Switch to Linux: Instead of upgrading to a version of Windows, you can switch to Linux. This is one option that lets you keep your current hardware. There are several Linux distros that are designed to run on limited hardware.

The Linux UI is different, but closer to Windows than Mac OSX, and it can be tuned to look like Windows. Software may or may not be a challenge. The major browsers (except Internet Explorer) run on Linux. LibreOffice can replace Microsoft Office for most tasks. Commodity software can be replaced with open source packages (GIMP for Photoshop, for example). The WINE package can run some Windows applications, so you may be able to keep your custom (that is, non-commodity) software. (Or perhaps not; some software will run only on Windows.)

Keep Windows XP: This option may be missing from some consultant recommendations, but it is a possible path. There is nothing that will prevent you from running your existing hardware with your existing software. Windows XP has no self-destruct timer, and will continue to run after the "end of life" date.

But staying with Windows XP has costs. They are deferred costs, not immediate costs. They are gradual, not sharply defined. It is the "death by a thousand cuts" approach. You can keep running Windows XP, but small things will break, and then larger things.

Here's what will probably happen:

You get no updates from Microsoft, and you don't have to apply them and reboot Windows. You may think that this is an improvement. It is, in that you don't lose time applying updates. The downside is that your system's vulnerabilities remain unfixed.

As other things in your environment change, you will find that the Windows XP system does not work with the new items. When you add a printer, the Windows XP system will not have a driver for it. When a software update arrives (perhaps for Adobe Acrobat), the update will politely tell you that the new version is not supported under Windows XP. (If you are lucky, the update will tell you this *before* it modifies your system. Less fortunate folks will learn this only after the new software has been installed and refuses to run.)

New versions of browsers will fail to install. Stuck with old browsers, you will get warnings and complaints from some web sites. Some web sites will fail in obvious ways. Others will fail in mysterious and frustrating ways -- perhaps not letting you log in, or not letting you complete a transaction.

Problems are not limited to hardware and software -- they can affect people, too.

Job candidates, upon learning that you use Windows XP, may decline to work with you. Some candidates may decline the job immediately. Others may hire on and then complain when you direct them to work with a Windows XP system.

Windows XP may be a problem when you look for system admins. Some may choose to work elsewhere, others may accept the job but demand higher rates. (And some seasoned sysadmins may be happy to work on an old friend.)

It may be that Windows XP (and corresponding applications) will act as a filter for your employees. Folks who want newer technologies will leave (or decline employment), folks who are comfortable with the older tech will stay (or hire on). Eventually many (if not all) of your staff will be familiar with older technologies and unfamiliar with new ones.

At some point, you will want to re-install Windows XP. Here you will encounter difficulties. Microsoft may (or may not) continue to support the activation servers for Windows XP. Without an activation code, Windows XP will not run. Even with the activation servers and codes, if you install on a new PC, Microsoft may reject the activation (thinking that you are attempting to exceed your license count). New hardware presents other problems: If the PC uses UEFI, it may fail to boot the Windows XP installer, which is not signed. If the PC has no CD drive, the Windows XP CD is useless.

You can stay with Windows XP, but the path is limited. Your system becomes fragile, dependent on a limited and shrinking set of technology. At some point, you will be forced to move to something else.

My advice: Move before you are forced to move. Move to a new operating system (and possibly new hardware) on your schedule, not on a schedule set by failing equipment. Migrations take time and require tests to ensure that the new equipment is working. You want to convert from Windows XP to your new environment with minimal risks and minimal disruptions.

Sunday, April 7, 2013

Mobile/cloud apps will be different than PC apps

As a participant in the PC revolution, I was comfortable with the bright future of personal computers. I *knew* -- that is, I strongly believed -- that PCs were superior to mainframes.

It turned out that PCs were *different* from mainframes, but not necessarily superior.

Mainframe programs were, primarily, accounting systems. Oh, there were programs to compute ballistics tables, and programs for engineering and astronomy, and system utilities, but the big use of mainframe computers was accounting (general ledger, inventory, billing, payment processing, payables, receivables, and market forecasts). These uses were shaped by the entities that could afford mainframe computers (large corporations and governments) and the data that was most important to those organizations.

But the processing was also shaped by technology. Computers read input on punch cards and stored data on magnetic tape. The batch processing systems were well suited to certain types of processing and made efficient use of transactions and master files. Even after terminals were invented, the processing remained in batch mode.

Personal computers were more interactive than mainframes. They started with terminals and interactive applications. From the beginning, personal computers were used for tasks very different than the tasks of mainframe computers. The biggest applications for PCs were word processors and spreadsheets. (They still are today.)

Some "traditional" computer applications were ported to personal computers. There were (and still are) systems for accounting and database management. There were utility programs and programming languages: BASIC, FORTRAN, COBOL, and later C and Pascal. But the biggest applications were the interactive ones, the ones that broke from the batch processing mold of mainframe computing.

(I am simplifying greatly here. There were interactive programs for mainframes. The BASIC language was designed as an interactive environment for programming, on mainframe computers.)

I cannot help but think that the typical mainframe programmer, looking at the new personal computers that appeared in the late 1970s, could only puzzle at what possible advantage they could offer. Personal computers were smaller, slower, and less capable than mainframes in every respect. Processors were slower and less capable. Memory was smaller. Storage was laughably primitive. PC software was also primitive, with nothing approaching the sophistication of mainframe operating systems, database management systems, or utilities.

The only ways in which personal computers were superior to mainframes were the BASIC language (Microsoft BASIC was more powerful than mainframe BASIC), word processors, and spreadsheets. Notice that these are all interactive programs. The cost and size of a personal computer made it possible for a person to own one, but the interactive nature of applications made it sensible for a person to own one.

That single attribute of interactive applications made the PC revolution possible. The success of modern-day PCs and the Microsoft empire was built on interactive applications.

I suspect that the success of cell phones and tablets will be built on a single attribute. But what that attribute is, I do not know. It may be portability. It may be location-aware capabilities. It may be a different level of interactivity.

I *know* -- that is, I feel very strongly -- that mobile/cloud is going to have a brilliant future.

I also feel that the key applications for mobile/cloud will be different from traditional PC applications, just as PC applications are different from mainframe applications. Any attempt to port PC applications to mobile/cloud will be doomed to failure, just as mainframe applications failed to port to PCs.

Mainframe applications live on, in their batch mode glory, to this day. Large companies and governments need accounting systems, and will continue to need them. PC applications will live through the mobile/cloud revolution, although some may fade; PowerPoint-style presentations may be better served on synchronized mobile devices than with a single PC and a projector.

Expect mobile/cloud apps to surprise us. They will not be word processors and spreadsheets. (Nor will they be accounting systems.) They will be more like Twitter and Facebook, with status updates and connections to our network of people.

Thursday, April 4, 2013

Means of production and BYOD

In agrarian societies, the means of production is the farm: land of some size, crops, livestock, tools, seeds, workers, and capital.

In industrial societies, the means of production is the factory: land of some size, a building, tools, raw materials, power sources, access to transportation, and capital.

In the industrial age, capitalists owned the means of production. These things, the means of production, cost money. To be successful, capitalists had to be wealthy.

But the capitalists of yore missed something: You don't need to own all of the means of production. You need only a few key parts.

This should be obvious. No company is completely stand-alone. Companies outsource many activities, from payroll to the generation of electricity.

Apple has learned this lesson. Apple designs the products and hires other companies to build them.

The "Bring Your Own Device" idea (BYOD) is an extension of outsourcing. It pushes the ownership of some equipment onto workers. Instead of a company purchasing the PC, operating system, word processor, and spreadsheet for an employee, the employee acquires their own.

Shifting to BYOD means giving some measure of control (and responsibility) to employees. Workers can select their own device, their own operating system, and their own applications. Some businesses want to maintain control over these choices, and they miss the point of BYOD. They want to dictate the specific software that employees must use.

But, as a business owner who is outsourcing tasks to employees, do you care about the operating system they use? Do you care about the specific type of PC? Or the application (as long as you can read the file)?

BYOD is possible because the composition of documents and spreadsheets (and e-mails, and calendars) is not a key aspect of business. It's the ideas within those documents and spreadsheets that make the business run. It's the information that is the vital means of production.

For decades we have focussed on the hardware of computing: processor speed, memory capacity, storage size. I suppose that it started with tabulating machines, and the ability of one machine to process cards faster than another vendor's machine. It continued with mainframes, with minicomputers, and with PCs.

BYOD shows us that hardware is not a strategic advantage. Nor is commodity software -- anyone can have word processors and spreadsheets. Any company can have a web site.

The advantage is in data, in algorithms, and in ideas.