Thursday, July 28, 2011

Virtual companies

Some companies consider themselves "virtual". That is, they have no central office, no single place where work is performed, no permanent office at all. Employees work in their own homes, or in any convenient location, from the local coffee shop to a co-working site.

Not every company can be a virtual company. The assembly of physical goods (like automobiles or lamps) requires that people work in a factory. Taxi drivers must be in taxis. Plumbers must visit the site of the plumbing.

Virtual companies are not so much virtual as they are distributed. The employees (and the work performed) are distributed across various locations. This leads to some interesting changes in the management of companies.

The companies that can be distributed are the ones that deal with information, with bits that can be easily transported to different locations. Software development is an obvious candidate; others include accounting, insurance claim processing, banking, X-ray analysis, product support, and economic analysis.

First are the obvious technology changes. Employees must have the equipment and skills for communication and coordination of effort. Telephones, e-mail, instant messaging, and video conferencing are necessary communication tools. Collaboration tools such as Google Docs are needed, too.

There are changes in the hiring of employees. In the old (central location) style of company, candidates were drawn from a pool of local talent. Some companies would choose to relocate people to their area once an offer had been made and accepted; others would not. In contrast, the new (distributed) style allows people to work from any location.

Payroll is affected: If I am in one state and hire a person living in another state, what taxes are paid? More specifically, where is the work performed?

More than these, there is the question of performance review. Many shops have some form of performance review, and larger shops have elaborate and bureaucratic processes. Despite the forms and meetings, many evaluations are reduced to the manager's opinion of the employee, and that opinion is often guided by the employee's arrival and departure times.

With a distributed workforce, a manager cannot simply walk past cubes to see who is working late.

Some managers may resort to tricks, such as sending requests at 4:55 (or five minutes prior to the employee's shift end, in whatever time zone) to see if the employee responds. Some companies may set up arrangements to "clock in" and "clock out". I suspect that these tricks will annoy everyone involved without contributing to productivity.

In a distributed company, managers will have to measure employees based on their contributions -- which is probably what they should have been doing all along. This may be difficult for some managers, as they will have to learn about their employees' tasks and deliverables. It may be difficult for employees, since they will have to explain their work to their supervisors (and they will no longer be able to boost their score simply by working late two nights a week).

Distributed work is here. Some companies are using it, to their advantage. They have a larger pool of talent from which to draw. Limiting your talent pool to "the locals" is convenient (and follows tradition) but may not be good enough to compete in the future.

You can choose to use distributed work, or you can choose to ignore it. But you must choose.

Open source drives innovation

A long time ago, in a galaxy far, far away... there was a great battle for software. The empires of closed-source software were challenged by a loosely-connected alliance of open source rebels. Mighty and brave were the deeds of many in the alliance.

Open source has succeeded. It is no longer the rebel alliance. It won, in the sense that it is established and legitimate in the eyes of individuals, businesses, and governments. (It did not achieve world domination; if that was your goal then open source still has work to do. But that is another topic.)

If open source is not the rebel alliance, if it is not the subversive movement pushing for change, then what is it?

I believe it to be the research arm of the software industry. It is the laboratory in which new technologies are developed.

Consider these developments, all from open source and not from the "industry" as we know it:

Agile development: The polar opposite of the "big design up front" techniques used by traditional development shops. It was necessary for open source projects, since they lack the command-and-control structure needed to implement BDUF.

NoSQL databases: Data stores for unstructured data. Big organizations have spent oodles of time trying to structure their data, just as they try to structure everything else. Open source has built memcached, CouchDB, and other approaches to data stores.

Scripting languages: Starting with Unix shell scripts and continuing through Perl, Python, and Ruby, open source has given us the scripting languages. Closed source has given us Java, C#, and a new version of C++.

Distributed version control: Closed source was content with centralized version control. Only open source projects gave us Git and the idea of a distributed version control system.

Distributed projects: Open source runs on volunteers (usually) and it makes little sense to force contributors into a single location. Open source projects have contributors from around the world, virtual organizations that use the best available people.

Open source is still pushing change. Not merely pushing for change, but driving change. It is driving change at the technology level, which is why I consider it the research arm of the software industry.


Monday, July 25, 2011

The incredible shrinking program

Computer programs have been shrinking. They have been doing so since the beginning of the computer age. You may think this claim strange, given that computers are much bigger than in earlier eras and programs certainly look bigger, with their millions of lines of source code. And you are right -- computer programs have gotten bigger. Yet they have also gotten smaller.

In absolute terms, computer programs are larger. They have more lines of source code, larger memory footprints, and greater complexity.

Relative to the size of the computer, however, computer programs are smaller.

The earliest computers were programmed with plugboards, so the "software" was hardware. For these computers, there was only a thin boundary between the machine and the program, so in a sense the program was the machine.

Computers in the 1940s and 1950s were programmable in the sense we have today, with the hardware and the software being two different kinds of things. The program was loaded into memory and executed, often by an operator. While running, the program was the only thing in memory -- there was no operating system or monitor. The program had to perform all tasks, from input to processing to output.

With the 1960s we saw the advent of operating systems. The operating system contained common functions and the program called the operating system to perform actions. The model of hardware, operating system, and application program was a solid one, and continues to this day.

But notice that the program is now smaller than the machine. The application program is constrained by the operating system. Combined, the program and operating system fill the machine. (For time-sharing systems, the combination is the operating system and all running programs.) Thus, the program has become smaller, giving some functions over to the operating system.

Microcomputers followed this path. The earliest microcomputers (the Altair, the IMSAI, and others of the late 1970s) were processors, memory, and front panels that allowed a person to load a program and run it. This was the same model as the 1940s electronic computers. Hobbyists quickly developed "disk operating systems", although one can argue that the first true operating systems for microcomputers were IBM's OS/2, Microsoft's Windows NT, and variants of Unix, all of which needed the Intel 80386 processor to be effective. As with the mainframe path, the application shrank and gave up processing to the operating system.

The iOS and Android operating systems extend this trend. Programs are even smaller under these operating systems, yielding more control and code to the operating system.

In the earlier model, the operating system launches a program and the program runs until completion (signalling the operating system when it is finished). There is one entry point for the program; there can be many exit points.

The model used by iOS and Android is different. Instead of starting the program and letting it run to completion, the operating system issues multiple calls into the program. Instead of a single entry point, a program has multiple (well-defined) entry points. Anyone familiar with event-driven programming will recognize the similarity: Windows sends programs messages about various events (to a single entry point) and the program must fan them out to separate routines. iOS and Android have removed that top layer of the program and fan out the messages themselves.
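As an illustration, here is a minimal sketch of the two models in Python; the class, the event names, and the toy "runtime" are hypothetical stand-ins, not the actual iOS or Android APIs.

    # Old model: one entry point, and the program runs to completion.
    def old_style_main():
        ...  # read input, process, write output, then exit

    # New model: the program is a set of well-defined entry points, and the
    # operating system (here, a toy "runtime") calls them as events occur.
    # The program never owns the main loop.
    class PhotoViewerApp:
        def on_create(self):
            print("load preferences, build the user interface")

        def on_resume(self):
            print("refresh the display, resume animations")

        def on_pause(self):
            print("save state -- the runtime may stop us at any time")

    def runtime(app, events):
        handlers = {"create": app.on_create,
                    "resume": app.on_resume,
                    "pause": app.on_pause}
        for event in events:
            handlers[event]()   # the OS calls into the program, not the reverse

    runtime(PhotoViewerApp(), ["create", "resume", "pause"])

The point is not the particular method names but the inversion of control: the operating system owns the loop, and the application supplies pluggable pieces.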

Thus, programs have once again yielded code to the operating system, and have shrunk in size.

I think that this is a good change. It removes "boilerplate" code from applications and puts the multiple copies in a single place. Application code can focus on the business problem and ignore infrastructure issues. Programs gain a measure of uniformity.

The model of the operating system lasted for fifty years (twenty in the PC world) and served us well. I want to think of the new model as something different: an operating system with pluggable tasks. I think it will serve us for a long time.

Sunday, July 24, 2011

The tablet revolution

When Apple introduced the iPad, I was not impressed.

I was comparing the iPad to e-readers, specifically the Kindle and the Nook. And for reading, I think that the Kindle and other e-readers are superior to the iPad.

But as a general computation device, the iPad (and tablets in general) is superior not only to e-readers but also to traditional desktop PCs and laptop PCs. Significantly superior. Superior enough to cause a change in the ideas about computing hardware.

The iPad and tablets (and iPods and cell phones) use touchscreens, something that traditional computers have avoided. The touchscreen experience is very different from the screen-and-mouse experience. The touchscreen experience is more intimate and more immediate; the mouse experience is clunky. Why should I have to drag the mouse pointer all the way over to a scroll bar when I can simply reach out and drag the screen?

Apple has, once again, defined the modern computing interface. Apple defined the "mouse" UI with the Lisa and then the Macintosh computers, stealing a bunch of ideas from Xerox (who in turn had lifted them from Doug Engelbart).

The mouse interface was introduced in 1983, and it took a while to become popular. Once it became the standard, it was *the* standard way to interact with computers. The introduction of the iPhone/iPod/iPad interface set a new standard, one that is being adopted quickly. Apple is moving the interface to the newest version of OSX (the "Lion" release) and Microsoft is doing the same thing with its "Metro" interface for Windows 8.

The new interface expects a touchscreen. While some folks may try to fudge the interface with a plain display and a mouse, I believe that we will see a fairly rapid conversion to touchscreens.

Converting the hardware is the smaller of the two problems. The bigger problem is the software. Our current software is designed for the mouse interface, not the touch interface. Apple and Microsoft may craft solutions that let older (soon to be derided as legacy) apps run in the new environment, but there will be some apps that fail to make the transition.

The conversion of software will also give new players an opportunity to take market share. We may see Microsoft lose its dominant position with Office.

I expect two parallel tracks of acceptance: the home user and the business user. Home users will adopt the new UI quickly -- they have already done so on their cell phones, so changing the operating system to look like a cell phone is probably viewed as an improvement.

Business users, on the other hand, will face a number of challenges. First will be the cost of upgrading equipment with touchscreens. Related to that will be the training issues (probably minimal) and the politics associated with the distribution of the new equipment. If a company must roll out new equipment in phases (perhaps over several years) there will be squabbling over the selection of employees to get the new hardware.

Businesses also have to integrate the new hardware and software into their organization. New hardware can be adopted quickly; new software takes longer. The support teams must learn the software and the methods for resolving problems. The new software must be configured to conform to existing standards for security, disaster recovery, and data retention. New versions of apps must be acquired and rolled out -- but only to folks with the new equipment.

The fate of developers is hard to predict. The new user interfaces have proven themselves in the consumer space. I suspect that they can work in the business space. I'm unsure of their suitability for developers. Our programming environments, tools, and even languages are designed for keyboards, not swipes on a screen. What kinds of computers will programmers use?

One possibility is that developers will use traditional-style PCs, with traditional keyboards and traditional operating systems. But this will put PCs into a niche market (developers only) and drive the prices up. Developers have for a long time been riding on the wave of popular (low-priced commodity) equipment; I'm not sure how they will adapt to a premium market.

Another possibility, albeit a smaller one, is that developers will develop new languages that fit the new user experience. This is not unprecedented: BASIC was designed to fit into timesharing systems and Visual Basic was designed for programming in Windows.

Either way, it will be interesting.

Wednesday, July 20, 2011

The collision of cloud and cryptography

We have an impending collision: cryptography and cloud computing.

The big problem with most cryptographic systems is the "key problem". Most systems use a "symmetric key", in which the same key is used to encode and decode the message. This presents a problem: for me to send you an encoded message, I must first send you the key. But anyone who has the key can decode the message, so how do I send you the key confidentially? I cannot send it as a message, because either that message is encrypted or it is not. If I send the key encrypted, then you cannot decrypt it, because you need the key! If I send it unencrypted, then anyone who sees the message (including intermediate e-mail servers) has access to the key and can decrypt any subsequent encrypted messages.

This problem was solved (or so we thought) by asymmetric keys. With asymmetric keys, two keys are used: one to encrypt and a different one to decrypt. Technically, one key is the "private key" and the second is the "public key". The two work together as a matched set: either one can be used to encrypt, and only the other key in the pair can decrypt the result. The private key is kept private and the public key is published to the world.

The asymmetric key approach allows for flexibility. If we both have a pair of keys, I can encrypt a message with your public key (which is readily available, since it is public), and only you can decrypt it. Thus, I know only you can read the message. You can send me a confidential message by using my public key; only I can decrypt it.

Tangentially, I can encrypt the message with my private key, and when you decrypt it with my public key, you know that the message is from me. No one else can encrypt a message that is decryptable with my public key, because they do not have the private key.
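To make the two uses concrete, here is a minimal sketch in Python, assuming the open source "cryptography" package is installed; the key size, the message, and the variable names are merely illustrative.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Generate a matched pair; the public half can be published to the world.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"meet me at noon"

    # Confidentiality: anyone may encrypt with the public key,
    # but only the holder of the private key can decrypt.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = public_key.encrypt(message, oaep)
    assert private_key.decrypt(ciphertext, oaep) == message

    # Authenticity: only the holder of the private key can sign,
    # but anyone with the public key can verify the signature.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())
    public_key.verify(signature, message, pss, hashes.SHA256())  # raises if forged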

The nature of the private key is the problem with cloud computing. The cryptography system relies on a private key, a number that you keep, well, private. It remains in your domain and you must not reveal it to anyone. But cloud computing pulls computing away from me and into "the cloud", into a system of provisioned servers that are outside of my domain.

Let's look at a simpler case: e-mail. The same concepts apply to cloud systems.

If I want to encrypt my e-mail messages, the process is fairly easy. I can program my private key into my e-mail client, and the e-mail client will use that key to encrypt all outbound messages. (Sophisticated e-mail clients allow me to enable and disable encryption on each message I send.) This all works, as long as the e-mail client is on my equipment -- remember, one cannot give the private key to anyone else.

To use encryption with a web-based e-mail system, I would have to give the e-mail system my private key. (I have to use the private key for encryption, since the public key -- the other key -- must be used for decryption. My recipients cannot have my private key, since it must remain private.) But if I give my private key to the e-mail system (whether it be GMail, Hotmail, or Yahoo! mail) then someone other than me has my private key! This is unacceptable to the encryption strategy.

For web-based e-mail, the best I can do is 1) compose the e-mail, 2) copy it to my local system, 3) encrypt the message on my local system, and 4) paste the encrypted message into the e-mail client. This allows me to keep my private key in my local domain and still send encrypted messages.
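A minimal sketch of step 3 might look like the following, again assuming the Python "cryptography" package; the key file name, the draft text, and the choice of a signature for the local operation are illustrative assumptions. The essential property is that the private key is read from a local file and never leaves my system; only the armored output gets pasted back into the browser.

    import base64
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Hypothetical local key file; the private key stays on my machine.
    with open("my_private_key.pem", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)

    draft = b"The draft message, copied out of the web e-mail client."

    signature = private_key.sign(
        draft,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())

    # Base64 "armor" so the result can be pasted into the browser as plain text.
    print(base64.b64encode(signature).decode("ascii"))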

You can see that this solution is a kludge, and while it works for e-mail it fails miserably for cloud applications. (Cloud apps do not always allow for the cutting and pasting of data at the appropriate points in processing.)

With the current technology of cryptography, there is no way around this dilemma. I must keep my private keys private, and use them on my equipment. (That is, equipment that is completely under my control.) To use encryption with cloud applications, I need a local processor, a mechanism to ship data to my local processor, the ability to encrypt the data, and a means to send the encrypted data back to the cloud application.

I don't see anyone working on any of these mechanisms. Sadly, the cloud was designed with no thought for personal encryption. (Cloud apps can encrypt data between themselves, but that is the application encrypting the data by whatever means it chooses -- if any -- and the result is only good for other cloud applications. It does not handle encryption at the user level.)

Encryption (or more specifically, the "public key infrastructure" or PKI) is useful for sending confidential messages (messages that only the recipient can read) and signed messages (messages that anyone can read but are guaranteed to be from me). We need these abilities, especially for commerce. Yet cloud computing works against these goals. Cloud computing, with its "move everything to someone else's servers" approach, is at odds with the "private key on my processor" requirement of PKI.

And that is the conflict between cloud computing and encryption.

Saturday, July 9, 2011

The Nixon of programming languages

Do we need a language to kick around?

It seems that we do. From the earliest days of computing, people have been critical of specific programming languages.

Those who had learned machine language and assembly code were skeptical of FORTRAN and horrified at the output of the COBOL compiler.

When I joined the field, BASIC, Pascal, and C were in the ascendant, and each had its fans. In the microcomputer arena, BASIC was dominant and thus admired by many and despised by many (with some folks living in both camps). Pascal and C had their followers, and there were other languages for the explorers (CBASIC and Forth). The clear winner in the "most despised" race was BASIC.

In the golden age of Microsoft Windows, the dominant languages were Pascal (briefly), followed by C, and then a tussle between Visual Basic and C++. Both Visual Basic and C++ were liked and disliked, with strong loyalties.

Sun and Microsoft introduced Java and C#, which pulled people away from the Visual Basic and C++ arena and into a new, complex dispute. The argument of language superiority was clouded by the assets of the run-time system and the backing vendor. To this day, people have strong preferences for one over the other.

Today we see discussions comparing new languages such as Scala, Clojure, and Lua to Python, Ruby, and Java. But these discussions are less heated and more educational. They are civilized discourse.

My theory is that we use languages as a proxy for independence, and our arguments are not about language or compiler but about our ability to survive. Using FORTRAN or COBOL meant committing to IBM (despite the portability of the languages), and people feared IBM.

In the microcomputer age, programming in BASIC meant committing to Microsoft, but the relationship was complex. Microsoft owned the language, but Digital Research owned CP/M (the de facto standard operating system), so we had two brutes to fear.

Now that Oracle has purchased Sun and acquired Java, I expect the Java/C# disputes to increase. Sun was the rebel alliance against the imperial Microsoft, but Oracle is a second empire. Both can threaten the independence of smaller organizations.

I also expect that more people will be kicking Java. Those who want independence will look at newer languages; those who want security will look to Java or C#. It may be an imagined security, since the vendor can pull the syntactical rug out from under your project at any time; consider the changes in Visual Basic over time.

The new languages have no large empire behind them. (Yes, Scala and Clojure live in the Java run-time, but they are not viewed as tools of the empire.) With no bogeyman behind them, there is little reason to castigate them. They have little power over us.

It is the power of large vendors that we fear. So yes, as long as we have large vendors backing languages, there will be languages to kick around.

Wednesday, July 6, 2011

The increasing cost of C++ and other old tech

If you are running a project that uses C++, you may want to think about its costs. As I see it, the cost of C++ is increasing.

The cost of compilers and libraries is remaining constant, whether you are using Microsoft's Visual Studio, the open source gcc, or a different toolset.

The increasing cost is not in the tools, but in the people.

First and most obvious: the time to build applications in C++ is longer than the time to build applications in modern languages. (Notice how I have not-so-subtly omitted C++ from the set of "modern" languages. But modern or not, programming in C++ takes more time.) This drives up your costs.

Recent college graduates don't learn C++; they learn Java or Python. You can hire recent graduates and ask them to learn C++, but I suspect few will want to work on C++ to any large degree. The programmers who know C++ are the more senior folks, programmers with more experience and higher salary expectations. This drives up your costs.

Not all senior folks admit to knowing C++. Some of my colleagues have removed C++ from their resumes, because they want to work on projects with different languages. This removes them from the pool of talent, reducing the number of available C++ programmers. Finding programmers is harder and takes longer. This drives up your costs.

This effect is not limited to C++. Other "old tech" suffers the same fate. Think about COBOL, Visual Basic, Windows API programming, the Sybase database, PowerBuilder, and any of a number of older technologies. Each was popular in its heyday; each is still around but with a much-diminished pool of talent.

When technologies become "old", they become expensive. Eventually, they become so expensive that the product must either change to a different technology set or be discontinued.

As a product manager, how do you project your development and maintenance costs? Do you assume a flat cost model (perhaps with modest increases to match inflation)? Or do you project increasing costs as labor becomes scarce? Do you assume that a single technology will last the life of the product, or do you plan for a migration to a new technology?

Technologies become less popular over time. The assumption that a set of unchanging technology will carry a product over its entire life is naive -- unless your product life is shorter than the technology cycle. Effective project managers will plan for change.

Friday, July 1, 2011

Looking out of the Windows

According to this article in Computerworld, Microsoft is claiming success with Internet Explorer 9, in the form of "the most popular modern browser on Windows 7", a phrase that excludes IE6 and IE7.

This is an awkward phrase to use to describe success. It's akin to the phrase "we're the best widget producer except for those other guys who produce widgets offshore".

I can think of two reasons for Microsoft to use such a phrase:

1) They want a phrase that shows that they are winning. Their phrasing, with all of the implied conditions, does indeed show that Microsoft is winning. But it is less than the simple "we're number one!".

2) Microsoft views success through Windows 7, or perhaps through currently marketed products. In this view, older products (Windows XP, IE6) don't count. And there may be some sense in that view, since Microsoft is in the business of selling software, and previously sold packages are nice but not part of this month's "P&L" statement.

The former, despite the dripping residue of marketing, is, I think, the healthier attitude. It is a focussed and specific measurement, sounds good, and pushes us to the line of deception without crossing it.

The latter is the more harmful. It is a self-deception, a measurement of what is selling today with no regard for the damage (or good) done yesterday, a willful ignorance of the effects of the company on the ecosystem.

The bottom line is that Internet Explorer is losing market share. This may not be the doom that it once was, with apps for iPhone and Android phones increasing in popularity. Indeed, the true danger may be in Microsoft's inability to build a market for Windows Phone 7 and their unwillingness to build apps for iOS and Android devices.