Tuesday, September 28, 2010

Measuring chaos

Some IT shops are more chaotic than others. (A statement that can be made about just about every discipline, not just software.)

But how to compare? What measurement do we have for chaos, other than anecdotes and comments made by the inmates?

Here's a metric: The frequency of priority changes.

How frequently does your team (if you're a manager) or your workload (if you're a contributor) change its "number one" task? Do you move gradually from one project to another, on a planned schedule? Or do you change from one "hot" project to the next "hotter" project, at the whim of external forces? (In other words, are you in fire-fighting mode? Or are you starting preventative fires with a plan that lets you control the unexpected ones?)

I call it the "PCF" - the "Priority Change Frequency". The more often you change priorities, the more chaotic your shop.
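
For the quantitatively inclined, here is a minimal sketch of the computation in C#. The class and method names are my own, and "changes per week" is just one plausible unit; nothing here is a formal definition.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class Pcf
    {
        // Priority Change Frequency: changes of the "number one" task per week,
        // measured over the span between the first and last recorded change.
        public static double ChangesPerWeek(IEnumerable<DateTime> priorityChanges)
        {
            var ordered = priorityChanges.OrderBy(d => d).ToList();
            if (ordered.Count < 2) return 0.0;

            double weeks = (ordered[ordered.Count - 1] - ordered[0]).TotalDays / 7.0;
            return weeks > 0 ? ordered.Count / weeks : ordered.Count;
        }
    }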

Organizations with competent managers have lower PCF values. Their managers plan for new assignments and prepare their teams. Organizations with less-than-competent managers have higher PCF values, since they will often be surprised and ill-prepared for the work.

But comparing teams is tricky. Some teams will have higher PCF values, due to the nature of their work. Help desks and support teams must respond to user requests and problems. Operations teams must "keep the joint running" and respond to equipment failures. In these types of organizations, a problem can quickly become the number one priority. Changing priorities is the job.

Other teams should have lower PCFs. A programming team (provided that they are not also the support team) has the luxury of planning their work. Even teams that use Agile methods can plan their work -- their priorities may shift from one cycle to another, but not during the cycle.

Don't be seduced by the illusion of "we're different". Some teams think that they are different, and that a high PCF is therefore not an indication of a problem. Unless your work is defined by external sources that change frequently (as with help desks), you have no excuse. If you insist on being different, then the difference lies in the manager's skills.


Sunday, September 26, 2010

Programming is more than coding

Programming is more than simply writing code and getting it to "work". Computers are calculators, non-thinking entities that perform specified instructions in a given sequence. The task of programming requires the translation of a desired function into the machine code for the processor and the data storage scheme. The instructions must be specific and precise.

The desired function is often a high-level description of the problem. The task of programming converts that fuzzy, high-level description into the detailed instructions for the computer.

For a single programmer, the task is what we normally think of as programming. He (and it is most often a male) must write the program to perform the desired function. In doing so, the programmer converts the statement of the request (sometimes as informal as "change the discount to exclude items already on sale") into the language of the computer. 

The typical large IT development shop, with multiple programmers and larger systems, divides the tasks of requirements, design, and coding across multiple teams. Some shops add an earlier, fourth phase for business initiatives. The first phase creates documents with a description of the desired functionality. Each successive phase accepts documents and produces documents with a higher degree of specificity. Each phase resolves ambiguities, "tightening" the solution. By the coding phase, the ambiguities have been removed. Indeed, they must be resolved, as the computer cannot use ambiguous instructions.

We have made great strides improving the tail end of this process.

Decades ago, programmers specified instructions in machine code. This code was the instruction set of the processor, and the coding was done in numeric values. The technique was tedious and required painstaking attention to detail.

We quickly moved from that technique to assembly language, which allowed programmers to use alphanumeric symbols instead of numeric codes. A step up -- and a big step -- but coding was still tedious and error-prone.

Today, programmers use high-level programming languages to provide the instructions to the computer. Early high-level languages were COBOL and FORTRAN, and modified versions of these languages are used to this day. COBOL and FORTRAN are by no means the only high-level languages. Many have been created over the decades of computing, ranging from AutoCoder to RPG, Algol to Pascal, BASIC to Visual Basic, and C to C++ and C#. These languages provide powerful constructs for programmers to express concepts concisely and precisely.

These languages reduce the effort for the tail end of the development process: coding. They do nothing for the earlier stages: initiatives, requirements, and design. Much of the work for systems design occurs in these phases, and most of that work is resolving ambiguities. The requirements and design phases (and business initiative phases, for those that have them) are most often written in English, a language that allows (and some say relies on) ambiguity.

Wouldn't it be nice to expand the precision beyond the coding phase and into the design and requirements phases? James Martin, in his 1985 book "System Design from Provably Correct Constructs", attempted that very thing. The technique was not adopted by the industry.

It failed, not because it was wrong, or expensive, or bizarre, but because it required disciplined and organized thinking for the specification of systems. The analysts and designers that create the requirements documents and design documents work in an environment that allows ambiguities. The English language prevents enforcement of specificity at the level that programmers must work. (There is no compiler for English, no list of syntax errors.) James Martin attempted to replace English with a precise language named HOS.

One can view the technique advocated by James Martin as a "very high level" programming language, but a programming language nonetheless. As a programming language, it enforced discipline in thinking and careful, precise specification and organization of ideas. It is this discipline that makes programming hard. HOS failed not because it was broken or imperfect, but because it was still programming and no one recognized the need for precise thinking.

The recent UML notation is another attempt to bring precision to requirements and design. It has received more attention than James Martin's HOS, and may succeed, but only if we recognize the need for disciplined thinking. In short, we have to convert requirements analysts into programmers.

When UML is accepted as a notation for requirements (or even design), and the people creating the UML documents can create those documents with exact specificity and precision, they shall be the new programmers.

Just as assemblers eliminated the need for machine-level programmers, and high-level languages eliminated the need for assembly programmers, UML will eliminate the need for high-level language programmers. (Mostly. Just as we still have a few machine-level programmers and a few assembly language programmers for specialized tasks, so will we need a few -- but only a few -- programmers who understand the "old" languages of COBOL, Pascal, C++, or C#.)

Programming isn't about one language or its syntax. It's not about parse trees, or data structures, or compiler optimizations. It's about thinking precisely.



Wednesday, September 22, 2010

Ada as a forecast

When one looks back on the history of languages, the question of Ada arises.

Ada was (is?) a language developed in the late 1970s and adopted by the US Department of Defense as its standard language. Yet Ada was not adopted by commercial interests (outside of those connected with the US Department of Defense) and it has been mostly ignored by "mainstream" and hacker communities.

There are a few reasons.

Ada was a large and complex language. Strongly typed and object-oriented, Ada was, in the minds of most hobbyists, a "better PL/1". Unfortunately for Ada (and the DoD), no one wanted a better PL/1. (Or even the original PL/1, for that matter.)

Perhaps the biggest factor in Ada's failure to launch was the lack of support from Microsoft and Borland. Both vendors offered C and C++, along with their own proprietary languages (Visual Basic and Delphi, respectively). Neither offered an Ada compiler or development environment.

I dislike giving vendors (especially Microsoft) that much power to shape the market, yet the results are clear: Ada was a stillborn language for microcomputer users, and we adopted BASIC, C, and Pascal (later C++ and Visual Basic) for our projects.

If the "vendor effect" is real, then I espect the next decade to be shaped by the languages that are currently supported by the major vendors. For Microsoft, that is C#; for Apple, Objective-C; for Google, Java and Python; for Linux, C++, Perl Python and Ruby. Other languages (LISP, Scheme, Haskell, Erlang, and F#) will have a difficult time.

My preference is an environment with a number of languages, more than just the "one for Microsoft, one for Apple, one for Google, and one for Linux" model, yet I am afraid that "one language per platform" is what we will get.

And Ada will not be among any of them.


Saturday, September 18, 2010

When you run out of secrets

The business of recruiting is often built on secrets. Specifically, the secret is the hiring company. Companies hire recruiters to find candidates; recruiters advertise the openings but omit the specifics of the company, and applicants must deal with the recruiter to find a position. Applicants don't know the name of the hiring company, and this is the secret upon which recruiters build their business.

Recruiters have to keep the company name secret. If they advertised the company name, applicants could go directly to the hiring company and apply. Worse, other recruiters would know about the position and start sending applicants to the hiring company.

The business model is based on the secrecy of the hiring company. Without it, the business fails.

And the secrecy of hiring companies is fading. Search engines make it easy to identify companies. For example, the following is text I received from a recruiter:

I have a client in Owings Mills who is looking for a QA Manager to join the team full time. This is a permanent position with a great company. This company specializes in weight loss products and is doing very well.

This traditionally proffered description contains enough information to identify the hiring company. (Perhaps not with complete accuracy, but good enough in most cases.)

Fortunately for recruiting companies, few applicants use search engines to go directly to hiring companies. And they may never start, since they have little incentive. Instead, the hiring companies may use search engines and networking sites to find applicants. Companies do have an incentive: elimination of recruitment fees.

But bypassing recruiters places a burden on the hiring company. They must search for applicants and evaluate them. They may choose to follow a potential hire for some time, waiting for the right opening within the organization. These efforts must be performed by a person. If they assign the task to an employee, they have effectively created an internal recruiting organization. If they outsource it, they have hired a recruiting firm, albeit one that uses different techniques to find people.

I expect that change will occur slowly, and we will keep the current phrasings of advertisements and recruiting messages. They will be the "proper form" of communication, kept in use by tradition and habit.

Until an upstart creates a new business model.


Tuesday, September 14, 2010

The "break" button is no more

In the good old days, computers had terminals (clunky ASR-33 Teletypes, or compact LA-34s from DEC, or inexpensive ADM-3As, or the geeky VT-52 with a "bell" that sounded like the grinding of a poorly shifted transmission) and the terminals had buttons labelled "break".

The break button was needed for serial communication lines. We've moved beyond them, using high-speed network connections, and the break buttons (and the terminals too) are things of the past.

We've also lost "break" in another meaning: the more common meaning applied to a device or mechanism that fails to function due to a fault or defective component.

Things break less frequently than they did in the past. What called my attention to this phenomenon was the original version of "Casino Royale", the James Bond movie from the late 1960s. In the movie, there is a car chase (obligatory in a James Bond movie) which involves three cars. Our hero is in one, an evil spy is in a second, and the third car is operated via remote control. (Quite advanced for the 1960s!)

The evil spy's car has a built-in two-way radio, and the third car is controlled via radio by the same evil spy network. The remote control car has a camera, so the remote control operator can see the view from the front of the car.

Near the end of the car chase, the two-way radio in the spy's car breaks, and the camera in the remote control car breaks. Bad things happen (to the evil spy) because of the failures. 

The dialog has little to say about these breakages -- just enough, in fact, to let us know that they are broken. And much less than one would expect in a movie of today.

My theory is that audiences of the late 1960s were accepting of the notion of broken devices. So much so that the moviemakers could provide minimal dialog to explain the events, with the expectation that people would understand the failures and "keep up" with the story.

(The contemporary Star Trek series also had many equipment failures, most often of the transporter.)

In contrast, today's movies rarely show things breaking. Devices and gizmos may be deliberately damaged (usually by sword-wielding ninjas or blaster-toting space mercenaries) but things don't break on their own. And people expect things to not break on their own, either in the movies or in real life.

Which means that people expect things to work, that is, to perform as expected. (Until perhaps, attacked by ninjas or mercenaries.) Not just hardware, but software. People expect their PCs to work, their smartphones to work, and the web to work.

When something doesn't work (that is, it's broken), people become frustrated and angry.

If you want happy customers, you had best make sure that your software doesn't break.


Sunday, September 12, 2010

Enforcing licenses will help open source

A recent ruling confirms the legality of software license agreements (or EULAs). While initially greeted with regret (by developers, not the big corporations that sell software), this may be a good thing for open source.

License agreements restrict various uses of software, including copying, distribution, transfer, resale, and renting. They can be very confining: some Microsoft licenses for Windows specify that if you sell the computer, the license for Windows is destroyed and the new owner must acquire a new license.

Putting aside the issues of first sale doctrine, practicality of enforcement, and fairness, let's look at the issue from the world of open source. Linux, Apache, Perl, Python, Ruby, GIMP, and other open source packages are distributed under open source licenses. The maintainers aren't worried about people copying their software.

If anything, strict enforcement of licenses may help open source. Let's take the example of the "Windows is good for only one owner" license. Without enforcement, I would sell an old PC to you, and leave Windows on the PC. You would get the PC and the Windows (and possibly other software) and start using it.

With strict compliance, the situation is different. I sell you the PC but I remove all software including Windows. You get a PC with no software. You now have a choice: what operating system and software to install? You might purchase Windows (excuse me, you might license Windows) and install it, or you might install Linux instead.

For older PCs, those which don't meet the requirements for Windows 7, you really have no choice. Microsoft has all but discontinued the distribution of Windows XP, so you can't (legally) use any version of Windows. Your choice is Linux or nothing.

The enforcement of license agreements that impose restrictions on resale and transfer of software will shift the equilibrium towards open source software.


Tuesday, September 7, 2010

Microsoft MEF contains a surprise

Microsoft's Managed Extensibility Framework (MEF) is a shiny new technology for building applications that bind to components at run-time. The stated purpose is to let you build flexible applications and change them at run-time.

I think there is more to it.

First, a short overview of MEF:

The Managed Extensibility Framework is a library that lets you build components and applications and connect the two at run-time. (Kind of like explicitly loading a DLL in the old Windows days.) You must add several assemblies and code to your application, and you must specifically design your components to plug in to the MEF way of doing things.

MEF uses the concept of catalogs, collections of assemblies that can be loaded for your application. You request an assembly by instantiating an object of an exported type (with some attribute syntax in your code) and MEF searches the catalog for the assembly that matches your request. You have, of course, populated your catalog with assemblies, each with the proper attribute syntax to describe their exported classes and methods.
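
Here is a minimal sketch of that arrangement, assuming a reference to the System.ComponentModel.Composition assembly shipped with .NET 4; the IGreeter interface and the class names are invented for illustration and are not part of MEF itself.

    using System;
    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;
    using System.Reflection;

    // A contract and a part that exports it. In a real application the
    // part could live in a separate assembly listed in a catalog.
    public interface IGreeter { string Greet(string name); }

    [Export(typeof(IGreeter))]
    public class FriendlyGreeter : IGreeter
    {
        public string Greet(string name) { return "Hello, " + name; }
    }

    public class Program
    {
        // MEF fills this property when the container composes the parts.
        [Import(typeof(IGreeter))]
        public IGreeter Greeter { get; set; }

        public static void Main()
        {
            var program = new Program();

            // Catalog of parts found in this executable's own assembly.
            var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
            var container = new CompositionContainer(catalog);

            // Match the [Import]s on 'program' with [Export]s in the catalog.
            container.ComposeParts(program);

            Console.WriteLine(program.Greeter.Greet("world"));
        }
    }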

Catalogs may be included in your executable, or as a separate component, or even accessed across the internet, so (I think) you can distribute a small application file that connects to your server for the special bit of logic that only you have. More on this later.

Microsoft advertises the Managed Extensibility Framework as a tool that "enables greater reuse of applications and components". And I'm sure it does, although building the plumbing for dynamic loading of components and then including those same components in the EXE file seems to be the long way around for the problem. And that is the clue for the surprise in MEF.

MEF loads components from a catalog. A catalog holds assemblies, although some catalogs hold not assemblies but other catalogs. In this way, an application can search multiple catalogs for components.
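
A small sketch of that idea, assuming the same namespaces as the sketch above; the plug-in directory path is hypothetical.

    // Combine the application's own assembly with a folder of plug-in
    // assemblies; the container searches all of the nested catalogs.
    var aggregate = new AggregateCatalog(
        new AssemblyCatalog(Assembly.GetExecutingAssembly()),
        new DirectoryCatalog(@"C:\MyApp\Extensions"));
    var container = new CompositionContainer(aggregate);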

Catalogs are located by URI, which means that they can be anywhere on the internet. They do not have to be on your local PC. So you could deploy a server and host commonly used components, build applications, and configure them to use a combination of local components and server-based components.

Here's what I think Microsoft has in mind: A new generation of PC applications (say, MS Office version 14) with small PC applications that connect to Microsoft servers for key functionality. Only the Microsoft servers will host these key classes, and you will have to talk to their servers to get them. This gives Microsoft two opportunities.

First, Microsoft can use a paywall to force people to pay for their applications. Rather than buying a shiny disc with MS Office, you will get the starter kit for free, but any important functionality (like printing) will cost you every time you use it. (Or possibly a monthly subscription.)

Second, Microsoft can provide different levels of functionality to people who pay different amounts. They can provide basic functionality to people who pay for "basic" services and enhanced functionality to people who pay more. Premium users will have more fonts, more charting options, bigger spreadsheets, longer documents, automatic indexing, and goodness knows what else.

Such a model gives Microsoft a constant revenue stream, lets them compete with Google Docs and Open Office, eliminates the upgrade headaches for both themselves and their users, and locks people into Windows (since there won't be an MEF-based app for Macintosh or Linux). Microsoft would be foolish to not take advantage of such a technology.

Managed Extensibility Framework purports to offer reuse and a standard plug-in model. It also, quietly, offers Microsoft a future on the internet.


Monday, September 6, 2010

Language fork in the road

We're coming to a fork in the road for programming languages.

On one side is the set of dynamic languages. These include Ruby, Lua, and possibly Python and C#.

The other side has the functional languages, which include Scala, Haskell, Erlang, and F#.

I'm convinced that one of the two (either dynamic or functional) will be the next big thing. I'm also pretty sure that it will be only one of the two, not both.

The new languages will not replace our current ones, but supplement them. Existing applications will continue to use existing languages. Web applications (that is, non-cloud web apps) will stay with Java, ASP.NET, Javascript, and SQL for database access. Traditional PC applications will stay in C++. Mainframe accounting and finance systems will stay in COBOL.

The new languages will be used for the new applications. The applications will expand the universe of programs, not replace existing programs. (Well, they may replace some existing applications, just as word processing on PCs replaced text processing on minicomputers.)

But the question remains: which of the two will be the dominant model?

Dynamic languages have two advantages: they are easy to use and they are popular. The dynamic languages are the scripting languages but with more power and cleaner syntax. They are used by a lot of programmers today, and more programmers are joining.

The functional languages have the advantage of scalability, which makes them well-suited for cloud applications. (Some have argued that the cloud will require functional languages because of the scaling requirements.)

Dynamic languages have the spectre of unreadable, nightmare code, especially if a less-than-talented programmer attempts to do things. Functional languages have the challenge of rigorous thinking to achieve even the simplest of results. You have to think like a mathematician, not a programmer, to use them.

Right now, I'm giving the edge to the dynamic languages. My choice is driven by the popularity of dynamic languages. Functional languages may be the higher-quality option, but just as VHS won out over Betamax due to popularity, I think the dynamic languages will win out over the functional ones. (With much gnashing of teeth by the advocates of functional programming.)


Thursday, September 2, 2010

Cloud computing is the mainframe model... sort of

I'm reading "The Joy of Minis and Micros: Data Processing with Small Computers", a collection of essays published in the Magazine "Computer Decisions" between 1974 and 1978. The articles explain quite a bit about the minicomputer market and how it compared to the then-mainframe and then-microcomputer markets. (The articles predate the IBM PC, so the name "microcomputer" refers to possibly the Altair, the Apple II, and possibly the Radio Shack TRS-80 model I.)

The minicomputer market was quite a departure from the mainframe market. The thing that made minicomputers different from mainframes wasn't their size (although they were smaller) or their cost (although they were less expensive), but the support model. Mainframes were big, complex beasts that needed specialized care and feeding, and when you bought (or more likely leased) one, you also paid the vendor for the support staff. The specialists included the programmers, the operators, the service techs, and the account representative.

Minicomputers were a completely different game. You often bought the equipment (although you could lease it if you wanted) and you got nothing in terms of support personnel. You uncrated the hardware, installed it, connected it, and ran the operation. You often did not need a special "computer room" with air conditioning and special power lines.

Microcomputers changed the model again. Not only were microcomputers smaller and cheaper than minicomputers, but they came with practically no software and you wrote a lot of your own. The folks who used microcomputers were usually hobbyists or enthusiasts, people with a few extra dollars and lots of time.

To use a metaphor, mainframes were established cities with city hall and the taxes to run it. Minicomputers were small towns with a volunteer council. Microcomputers were single farmhouses out on the prairie and required self-sufficiency.

Large corporations aren't too keen on employees that are self-sufficient. Self-sufficiency requires employees to have varied skills, which drives up salaries (such employees can work for lots of other companies), and it keeps an employee from developing specialized knowledge, possibly leaving them working at less than maximum efficiency. It's no surprise that large corporations adopted PCs reluctantly and have guided the development of PCs into "workstations" as part of a "domain" -- effectively moving PCs to the minicomputer model.

Cloud computing pushes technology towards the mainframe model. When you buy into a cloud, you're not buying the hardware, but the computing power, much like the mainframe model of leasing the equipment. The difference is that the computing hardware is not installed on your premises, but in the vendor's data center. (One could argue that cloud computing is a re-make of the timesharing model of the 1970s.)

Also with cloud computing, you're not performing the administration, but letting the vendor handle that work. This, too, is similar to the mainframe model, with the difference being that the administrators are not on site.

I think large corporations will find the cloud model attractive -- just as large corporations found the mainframe model attractive. I also predict that smaller companies and individuals will look for independent and lower-cost solutions. Look for on-line virtual computing services, possibly hosted in cloud environments but sold differently.

Large corporations will want enterprise cloud solutions with access controls on users and billing by department and project. Small companies will be less concerned with department-level billing and access; they will probably want multiple virtual PCs that can share data. Individual users will want single PCs with lots of control over them.

The successful cloud vendor will tailor their offerings to the desires of their customers. The vendors who insist on "one size fits all" will probably offer large, enterprise-scale solutions and find little demand from any but the largest companies.