Monday, September 30, 2013

Agile projects cannot overrun the budget

Some recent trade rags published stories about Agile-run projects and how they overran their budgets. This seems wrong, based on what I know about Agile methods.

First, a little history. Before "agile" projects, the dominant project method was "waterfall". All projects need a number of tasks: analysis, design, development, and testing. The waterfall process puts these in strict sequence, with each phase completed before the next begins. Thus, all analysis is performed, then all design, then all development, and finally all testing.

The beauty of waterfall is the schedule. The waterfall method promises a specific deliverable with a specific level of quality on a specific date.

The horror of waterfall is that it rarely delivers on schedule. It is quite common for waterfall projects to overrun their schedules. (And their cost estimates, too.)

Agile is not waterfall. The agile method takes a different approach and makes a different promise. It does not promise a specific deliverable on a specific date. Instead, it makes a promise of deliverability.

An agile project consists of a number of iterations. (Some people call these iterations "sprints". I'm not too attached to either name.) Each iteration adds a small set of features. Each iteration also uses automated tests to ensure that previous features still work. The project starts with a small set of initial features and adds features on each iteration. But the important idea is that the end of each iteration is a working product. It may not have everything you want, but everything that it has does work.
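As a small illustration of those automated tests, here is a minimal sketch in Python (written in pytest style); the invoice function and its behavior are invented for the example, not taken from any particular project:

```python
# Hypothetical regression tests: features from earlier iterations stay covered
# even as later iterations add code. (All names here are invented.)

def calculate_invoice_total(line_items, tax_rate=0.0):
    """Feature from iteration 1: sum line items, apply an optional tax rate."""
    subtotal = sum(quantity * price for quantity, price in line_items)
    return round(subtotal * (1.0 + tax_rate), 2)

def test_total_without_tax():
    assert calculate_invoice_total([(2, 10.00), (1, 5.50)]) == 25.50

def test_total_with_tax():
    # Added in iteration 2; the iteration-1 test above still runs on every build.
    assert calculate_invoice_total([(2, 10.00)], tax_rate=0.06) == 21.20
```

Running the whole suite at the end of each iteration is what backs the claim that everything the product has does work.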

The promise of agile is that you can always deliver your product.

With the agile process, one cannot overrun a budget. (You can always stop when you want.) You can, instead, underdeliver. You may get to the end of your allotted time. You may use all of your allotted funds. But you always have something to deliver. If you run over your budget or schedule, it's because you chose to spend more time or money.

Sunday, September 22, 2013

The microcomputers of today

The microcomputer revolution started with the MITS Altair 8800, the IMSAI 8080, and smaller computers such as the COSMAC ELF. They were machines made for tinkerers, less polished than the Apple II or Radio Shack TRS-80. They included the bare elements needed for a computer, often omitting the case and power supply. (Tinkerers were expected to supply their own cases and power supplies.)

While less polished, they showed that there was a market for microcomputers, and they inspired Apple and Radio Shack (and lots of other vendors) to make and sell microcomputers.

Today sees a resurgence of small, "unpolished" computers designed for tinkerers. They include the Arduino, the Raspberry Pi, the Beaglebone, and Intel's Minnowboard system. Like the early, pre-Apple microcomputers, these small systems are the bare essentials (often omitting the power supply and case).

And like the earlier microcomputer craze, they are popular.

What's interesting is that there are no major players in this space. There are no big software vendors supplying software for these new microcomputers.

There were no major software vendors in the early microcomputer space, either. Those systems were offered for sale with minimal (or perhaps zero) software. The CP/M operating system was adopted by users and adapted to their systems. CP/M's appeal was that it could be modified (relatively easily, at least for tinkerers) for specific systems.

The second generation of microcomputers, the Apple II and TRS-80 and their contemporaries, had a number of differences from the first generation. They were polished: they were complete systems with cases, power supplies, and software.

The second generation of microcomputers had a significant market for software. There were lots of vendors, the largest being Digital Research and Microsoft. Microsoft made its fortune by supplying its BASIC interpreter to just about every hardware vendor.

That market did not include the major players from the mainframe or minicomputer markets. Perhaps they thought that the market was not profitable -- they had been selling software for thousands of dollars (or tens of thousands), while packages in the microcomputer market sold for hundreds (or sometimes tens).

It strikes me that Microsoft is not supplying software to these new microcomputers.

Perhaps they think that the market dynamics are not profitable.

But these are the first generation of new microcomputers, the "unpolished" systems, made for tinkerers. Perhaps Microsoft will make another fortune in the second generation, as they did with the first microcomputer revolution.

Or perhaps another vendor will.

Thursday, September 19, 2013

Nirvanix collapse is not a cloud failure

The cloud service Nirvanix announced this week that it was closing its doors, and directing its customers to take their data elsewhere. Now, people are claiming this is a failure of cloud technology.

Let's be clear: the problems caused by Nirvanix are not a failure of the cloud. They are a business failure for Nirvanix and a supplier failure for its clients.

Businesses rely on suppliers for many items. No business is totally independent; businesses rely on suppliers for office space, computers, printers and paper, electricity, accounting and payroll services, and many other goods and services.

Suppliers can fail. Failure can be small (a delayed delivery, or the wrong item) or large (going out of business). Businesses must evaluate their suppliers and the risk of failure. Most supplies are commodities and can be easily found from competing suppliers. (Paper, for example.)

Some suppliers are "single-source". Apple, for instance, is the only supplier for its products. IBM PC compatibles are available from a number of sources (Dell, Lenovo, and HP) but MacBooks and iPads are available only from Apple.

Some suppliers are monopolies, and therefore also single sources. Utility companies are often local monopolies; you have exactly one choice for electric power, water, and usually cable TV.

A single-source supplier is a higher risk than a commodity supplier. This is obvious; when a commodity supplier fails you can go to another supplier for the same (or equivalent) item, and when a single-source supplier fails you cannot. It is common for businesses to look for multiple suppliers for the items they purchase.

Cloud services are, for the most part, incompatible, and therefore cloud suppliers are single-source. I cannot easily move my application from Amazon's cloud to Microsoft's cloud, for example. Being single-source, there is a higher risk involved with using them.

Yet many clients of cloud services have bought the argument "when you put something into the cloud, you don't have to worry about administration or back-up". This is false. Of course you have to worry about administration and back-up. You may have less involvement, but the work is still there.

And you also have the risk of supplier failure.

Our society chooses to regulate some suppliers. Utility companies are granted monopolies for efficiency (it makes little sense to run multiple water or power distribution networks) and are regulated to prevent failures. Some non-monopoly companies, such as banks and electricians, are regulated for the safety of the economy or of people.

Other companies, such as payroll companies, are not regulated, and clients must examine the health of a company before committing to them.

I expect that cloud services will be viewed much as accounting services are: important, but not so important as to need regulation. It will be up to clients to choose appropriate suppliers and make contingency plans for failures.

Wednesday, September 18, 2013

Big Data proves the value of open source

Something significant happened with open source software in the past two years. An event that future historians may point to and say "this is when open source software became a force".

That event is Big Data.

Open source has been with us for decades. Yet for all the technologies we have, from the first plug-board computers to smart phones, from the earliest assemblers to the latest language compilers, from the first IDE to Visual Studio, open source software has always copied the proprietary tools. Open source tools have always been implementations of existing ideas. Linux is a functional copy of Unix. The open source compilers and interpreters are for existing languages (C, C++, Fortran, Java). LibreOffice and Open Office are clones of Microsoft Office. Eclipse is an open source IDE, an idea that predates the IBM PC.

Yes, the open source versions of these tools have their own features and advantages. But the ideas behind these tools, the big concepts, are not new.

Big Data is different. Big Data is a new concept, a new entry in the technology toolkit, and its tools are (just about) all open source. Hadoop, NoSQL databases, and many analytics tools are open source. Commercial entities like Oracle and SAS may claim to support Big Data, but their support seems less "Big Data" and more "our product can do that too".

A few technologies came close to being completely open source. Web servers are mostly open source, with stiff competition from Microsoft's (closed source) IIS. The scripting languages (Perl, Python, and Ruby) are all open source, but they are extensions of languages like AWK and the C Shell, which were not initially open source.

Big Data, from what I can see, is the first "new concept" technology that has a clear origin in open source. It is the proof that open source can not only copy existing concepts, but introduce new ideas to the world.

And that is a milestone that deserves recognition.

Tuesday, September 17, 2013

When programming, think like a computer

When programming, it is best to think like a computer. It is tempting to think like a human. But humans think very differently than computers (if we allow that computers think), and thinking like a human leads to complex programs.

This was brought home to me while reading William Conley's "Computer Optimization Techniques" which discusses the solutions to Integer Programming problems and related problems. Many of these problems can be solved with brute-force calculations, evaluating every possible solution and identifying the most profitable (or least expensive).

The programs for these brute-force methods are short and simple. Even in FORTRAN, they run to fewer than fifty lines. Their brevity is due to their simplicity. There is no clever coding, no attempt to optimize the algorithm. The programs take advantage of the computer's strength: fast computation.
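Here is a sketch of that brute-force style, in Python rather than FORTRAN. The toy product-mix problem (the profits, weights, and capacity) is invented for illustration and is not one of Conley's examples; the program simply evaluates every candidate plan and keeps the most profitable feasible one:

```python
# Brute-force integer programming, sketched in Python. The numbers are made up;
# the point is the shape of the program: enumerate everything, keep the best.
from itertools import product

profits = [12, 9, 7, 15]     # profit per unit of each product
weights = [4, 3, 2, 5]       # resource consumed per unit of each product
capacity = 20                # total resource available
max_units = 5                # consider 0..5 units of each product

best_profit, best_plan = 0, None
for plan in product(range(max_units + 1), repeat=len(profits)):
    used = sum(u * w for u, w in zip(plan, weights))
    if used > capacity:
        continue                      # infeasible plan: skip it
    profit = sum(u * p for u, p in zip(plan, profits))
    if profit > best_profit:
        best_profit, best_plan = profit, plan

print(best_plan, best_profit)         # evaluates all 6**4 = 1296 candidates
```

There is nothing clever here, and that is the point: the computer grinds through 1296 candidates faster than we could think of a shortcut.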

Humans think very differently. They tire quickly of routine calculations. They can identify patterns and have insights into shortcuts for algorithms. They can take creative leaps to solutions. These are all survival skills, useful for dealing with an uncertain environment and capable predators. But they are quite difficult to encode into a computer program. So hard that it is often more efficient to use brute-force calculations without insights and creative leaps. The time spent making the program "smart" is larger than the time saved by the improved program.

Brute-force is not always the best method for calculations. Sometimes you need a smart program, because the number of computations is staggering. In those cases, it is better to invest the time in improvements. (To his credit, Conley shows techniques to reduce the computations, sometimes by increasing the complexity of the code.)

Computing efficiency (that is, "smart" programs) has been a concern since the first computing machines were made. The need for efficiency, critical at first, drops over time. Mainframe computers became faster, which allowed for "sloppy" programs ("sloppy" meaning "anything less than maximum efficiency").

Minicomputers were slower than mainframes, significantly less expensive, and another step away from the need for optimized, "smart" programs. PCs were another step. Today, smart phones have more computing power than PCs of a few years ago, at a fraction of the price. Cloud computing, a separate branch in the evolution of computing, offers cheap, readily-available computing power.

I won't claim that computing power is (or will ever be) "too cheap to meter". But it is cheap, and it is plentiful. And with cheap and plentiful computing power, we can build programs that use simple methods.

When writing a computer program, think like a computer. Start with a simple algorithm, one that is not clever. Chances are, it will be good enough.

Monday, September 16, 2013

Software recalls

The IEEE Spectrum web site reports that a software package was recalled. United Healthcare recalled something called "Picis ED Pulsecheck"; the problem relates to notes made by physicians (who would be the users).

I recognize that software for medical records is important. Defects in "normal" software can lose information, but defects in medical records can lead to incorrect diagnoses, incorrect treatments, and complications for the patient -- even death. Software in the medical domain must be correct.

Yet the idea of a "recall" for software seems primitive. Unusual, also; this is the first time I have heard of a recall for software.

Recalls make sense for physical products: products that pose some danger to the owner, like automobiles with faulty brake systems or lamps with incorrect wiring.

A recall forces the owner to return the product, or bring the product to a service center where it can be repaired.

Software is different from a physical product: it doesn't exist in a tangible form. For a recall, a manufacturer must keep careful records of the repairs made to each unit. But I can install software on several computers; do I bring in each copy?

But more than the number of copies, the basic idea of a recall for software seems... wrong. Why force someone to remove software from their location and "bring it in for repair"?

Why not send an update?

In software, we don't think of recalls. Instead we think of updates. (Or "patches", depending on our software subculture.)

All of the major software manufacturers (Microsoft, Apple, Adobe) send updates for their software. Today, those updates are delivered through the internet. Now, perhaps the medical software is on systems that are not connected to the internet (a reasonable security precaution) but updates can be delivered through means other than the internet.

Now, maybe United Healthcare has a good reason for issuing a recall and not sending out updates. Maybe their product is covered by federal or state laws that mandate recalls. Or maybe their corporate mindset is one of products and liability, and they choose to issue recalls. I don't know. (United Healthcare chose not to consult with me before issuing the recall.)

It's not important what United Healthcare does, or why. It's important what you do with your software and your customers. You can issue recalls (if you want) or updates (if you want) or both -- or neither. I encourage you to think about the choices you make. That's the important item here.

Sunday, September 15, 2013

Virtualization and small processors

From the beginning of time (for electronic data processing) we have desired bigger processors. We have wanted shorter clock cycles, more bits, more addressable memory, and more powerful instruction sets, all for processing data faster and more efficiently. With time-sharing we wanted additional controls to separate programs, which led to more complex processors. With networks and malware we added more complexity to monitor processes.

The history of processors has been a (mostly) steady upwards ramp. I say "mostly" because the minicomputer revolution (ca. 1965) and the microcomputer revolution (1977) saw the adoption of smaller, simpler processors. Yet these smaller processors also increased in complexity over time. (Microprocessors started with the humble 8080 and advanced to the Z-80, the 8086, and the 80286, eventually leading to today's Pentium-derived processors.)

I think that virtualization gives us an opportunity for smaller, simpler processors.

Virtualization creates a world of two levels: the physical and the virtual. The physical processor has to keep the virtual processes running, and keep them isolated. The physical processor is a traditional processor and follows traditional rules: more is better, and keep users out of each others' hair.

But the virtual processors, they can be different. Where is it written that the virtual processor must be the same as the host processor? We've built our systems that way, but is it necessary?

The virtualized machine can be smaller than the physical host, and frequently is. It has less memory, smaller disks, and in general a slower (and usually simpler) processor. Yet a virtual machine is still a full PC.

We understand the computing unit known as a "PC". We've been virtualizing machines in these PC-sized units because it has been easy.

A lot of that "standard PC" contains complexity to handle multiple users.

For cheap, easily created virtual machines, is that complexity really necessary?

It is if we use the virtual PC as we use a physical PC, with multiple users and multiple processes. If we run a web server, then we need that complexity.

But suppose we take a different approach to our use of virtual machines. Suppose that, instead of running a complex program like a web server or a database manager, we handle simple tasks. Let's go further and suppose that we create a virtual machine designed to handle only one specific task, and that the task is trivial in comparison to our normal workload.

Let's go even further and say that when the task is done, we destroy the virtual machine. Should we need it again, we can create another one to perform the task. Or another five. Or another five hundred. That's the beauty of virtual machines.
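Here is a runnable sketch of that instantiate-execute-discard idea in Python. The MiniVM class below is a stand-in, not a real virtualization API; it only models the lifecycle this post describes: create a tiny single-purpose machine, run one task, throw the machine away.

```python
# A toy model of disposable, single-task virtual machines. MiniVM is invented
# for illustration; real virtualization managers expose very different APIs.

class MiniVM:
    def __init__(self, memory_mb, task):
        self.memory_mb = memory_mb      # just enough memory for the one task
        self.task = task                # the single program this VM will run
        self.alive = True

    def run(self, payload):
        if not self.alive:
            raise RuntimeError("VM already destroyed")
        return self.task(payload)       # no users, no scheduler, one process

    def destroy(self):
        self.alive = False              # the Dixie cup goes in the trash


def run_once(task, payload, memory_mb=64):
    vm = MiniVM(memory_mb, task)        # instantiate
    try:
        return vm.run(payload)          # execute
    finally:
        vm.destroy()                    # discard


# Need five hundred of them? Create five hundred, run them, discard them.
results = [run_once(lambda n: n * n, n) for n in range(500)]
```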

Such a machine would need less "baggage" in its operating system. It would need, at the very least, some code to communicate with the outside world (to get instructions and report results), the code to perform the work, and... perhaps nothing else. All of the user permissions and memory management "stuff" becomes superfluous.

This virtual machine is something that exists between our current virtual PC and an object in a program. This new thing is an entity of the virtualization manager, yet simpler (much simpler) than a PC with an operating system and an application program.

Being much simpler than a PC, this small, specialized virtual machine can use a much simpler processor design. It doesn't need virtual memory management -- we give the virtual processor enough memory. It doesn't need to worry about multiple user processes -- there is only one user process. The processor has to be capable of running the desired program, of course, but that is a lot simpler than running a whole operating system.

A regular PC is "complexity in a box". The designers of virtualization software (VMware, VirtualPC, VirtualBox, etc.) expend large efforts at duplicating PC hardware in the virtual world, and synchronizing that virtual hardware with the underlying physical hardware.

I suspect that in many cases, we don't want virtual PCs. We want virtual machines that can perform some computation and talk to other processors (database servers, web servers, queue servers, etc.).

Small, disposable, virtual machines can operate as one-time use machines. We can instantiate them, execute them, and then discard them. These small virtual machines become the Dixie cups of the processing world. And small virtual machines can use small virtual processors.

I think we may see a renewed interest in small processor design. For virtual processors, "small" means simple: a simple instruction set, a simple memory architecture, a simple system design.

Wednesday, September 11, 2013

Specialization can be good or bad

Technologies have targets. Some technologies, over time, narrow their targets. Two examples are Windows and .NET.

Windows was, at first, designed to run on multiple hardware platforms. The objective was an "operating environment" that would give Microsoft an opportunity to sell software for multiple hardware platforms. There were versions of Windows for the Zenith Z-100 and the DEC Rainbow; these computers had Intel processors and ran MS-DOS but used architectures different from the IBM PC. Later versions of Windows ran on PowerPC, DEC Alpha, and MIPS processors. Those variants have all ceased; Microsoft supports only Intel PC architecture for "real" Windows and the new Windows RT variant for ARM processors, and both of these run on well-defined hardware.

The .NET platform has also narrowed. Instead of machine architectures, the narrowing has been with programming languages. When Microsoft released .NET, it supplied compilers for four languages: C++, C#, Visual Basic, and Visual J#. Microsoft also made bold proclamations about the .NET platform supporting multiple languages; the implications were that other vendors would build compilers and that Java was a "one-trick pony".

Yet the broad support for languages has narrowed. It was clear from the start that Microsoft was supporting C# as "first among equals". The documentation for C# was more complete and better organized than the documentation for other languages. Other vendors did provide compilers for other languages (and some still do), but the .NET world is pretty much C# with a small set of VB fans. Microsoft's forays into Python and Ruby (the IronPython and IronRuby engines) have been spun off as separate projects; the only "expansion" language from Microsoft is F#, used for functional programming.

Another word for this narrowing of technology is "specialization". Microsoft focused Windows on the PC platform; the code became specialized. The .NET ecosystem is narrowing to C#; our code is becoming specialized.

Specialization has its advantages. Limiting Windows to the PC architecture reduced Microsoft's costs and enabled them to optimize Windows for the platform. (Later, Microsoft would become strong enough to specify the hardware platform, and they made sure that advances in PC hardware meshed with improvements in Windows.)

Yet specialization is not without risk. When one is optimized for an environment (such as PC hardware or a language), it is hard to move to another environment. Thus, Windows is a great operating system for desktop PCs but a poor fit on tablets. Windows 8 shows that significant changes are needed to move to tablets.

Similarly, specializing in C# may lead to significant hurdles when new programming paradigms emerge. The .NET platform is optimized for C# and its object-oriented roots. Moving to another programming paradigm (such as functional programming) may prove difficult. The IronPython and IronRuby projects may provide some leverage, as may the F# language, but these are quite small compared to C# in the .NET ecosystem.

Interestingly, the "one-trick pony" environment for Java has expanded to include Clojure, Groovy, and Scala, as well as Jython and JRuby. So not all technologies narrow, and Sun's (now Oracle's) Java may avoid the trap of over-specialization.

Picking the target for your technology is a delicate balance. A broad set of targets leads to performance issues and markets with little return. A narrow set of targets reduces costs but foregoes market penetration (and revenue) and may leave you ill-prepared for a paradigm shift. You have to chart your way between the two.

I didn't say it would be easy.

Monday, September 9, 2013

Microsoft is not DEC

Some have drawn comparisons between Microsoft and DEC, the long-ago champion of minicomputers.

The commonalities seem to be:

  • DEC and Microsoft were both large
  • DEC and Microsoft had strong cultures
  • DEC missed the PC market; Microsoft is missing the mobile market
  • DEC and Microsoft changed their CEOs

Yet there are differences:

DEC was a major player; Microsoft set the standard
DEC had a successful business in minicomputers but was not a standard-setter (except perhaps for terminals). There were significant competitors in the minicomputer market, including Data General, HP, and even IBM. Microsoft, on the other hand, has set the standard for desktop computing for the past two decades. It has an established customer base that remains loyal to and locked into the Windows ecosystem.

DEC moved slowly; Microsoft is moving quickly
DEC made cautious steps towards microcomputers, introducing the PRO-325 and PRO-350 computers, which were small versions of PDP-11 processors running a variant of RT-11, a proprietary and (more importantly) non-PC-DOS operating system. DEC also offered the Rainbow, which ran MS-DOS but did not offer the "100 percent PC compatibility" required for most software. Neither the PRO nor the Rainbow computers saw much popularity. Microsoft, in contrast, is offering cloud services with Azure and seeing market acceptance. Microsoft's Surface tablets and Windows Phones (considered quite good by those who use them, and quite bad by those who don't) do parallel DEC's offerings in their lack of popularity, and this will be a problem for Microsoft if they choose to keep offering hardware.

The IBM PC set a new standard; mobile/cloud has no standard
The IBM PC defined a new standard for microcomputers (the new market). Overnight, businesses settled on the PC as the unit of computing, with PC-DOS as the operating system and Lotus 1-2-3 as the spreadsheet. The mobile/cloud environment has no comparable standard hardware or software. Apple and Android are competing for hardware (Apple has higher revenue while Android has higher unit sales), and Amazon.com is dominant in the cloud services space but not a standards-setter. (The industry is not cloning the AWS interface.)

PCs replaced minicomputers; mobile/cloud complements PCs
Minicomputers were expensive, and PCs (except for the very early microcomputers) were able to perform the same functions as minicomputers. PCs could perform word processing, numerical analysis with spreadsheets (a bonus, actually), data storage and reporting, and development in common languages such as BASIC, FORTRAN, Pascal, C, and even COBOL. Tablets do not replace PCs; data entry, numeric analysis, and software development remain on the PC platform. The mobile/cloud technology expands the set of solutions, offering new possibilities.

Comparing Microsoft to DEC is a nice thought experiment, but the situations are different. Was DEC under stress, and is Microsoft under stress? Undoubtedly. Can Microsoft learn from DEC's demise? Possibly. But Microsoft's situation is not identical to DEC's, and the lessons from the former must be read with care.

Sunday, September 8, 2013

The coming problem of legacy Big Data

With all the fuss about Big Data, we seem to have forgotten about the problems of legacy Big Data.

You may think that Big Data is too new to have legacy problems. Legacy problems affect old systems, systems that were designed and built by Those Who Came Before And Did Not Know How To Plan For The Future. Big Data cannot possibly have those kinds of problems, because 1) the systems are new, and 2) they have been built by us.

Big Data systems are new, which is why I say that the problems are coming. The problems are not here now. But they will arrive, in a few years.

What kind of problems? I can think of several.

Data formats
Newer tools (or newer versions of existing tools) change the formats of data and cannot read old formats. (For example, Microsoft Excel, which cannot read Lotus 1-2-3 files.)

Data value codes
Values used in data to encode specific ideas change over time. These might be account codes, or product categories, or status codes. The problem is not that you cannot read the files, but that the values mean things other than what you think.

Missing or lost data
Non-Big Data (should that be "Small Data"?) can be easily stored in version control systems or other archiving systems. Big Data, by its nature, doesn't fit well in these systems. Without an easy way to back up or archive Big Data, many shops will take the easy way and simply not make copies.

Inconsistent data
Data sets of any size can hold inconsistencies. Keeping traditional data sets consistent requires discipline and proper tools. Finding inconsistencies in larger data sets is a larger problem, requiring the same discipline and mindset but perhaps more capable tools.
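The checks themselves can be simple; the discipline is in running them routinely and scaling them up. Here is a minimal Python sketch of a value-code consistency check; the status codes and records are invented for the example:

```python
# A minimal consistency check of the kind that works on Small Data and must
# be scaled up (and automated) for Big Data. All codes and records are invented.

VALID_STATUS_CODES = {"A", "C", "P"}   # active, closed, pending (hypothetical)

records = [
    {"account": "1001", "status": "A"},
    {"account": "1002", "status": "X"},   # a legacy code nobody remembers
    {"account": "1003", "status": "C"},
]

bad = [r for r in records if r["status"] not in VALID_STATUS_CODES]
for r in bad:
    print(f"account {r['account']}: unknown status code {r['status']!r}")
```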

In short, the problems of legacy Big Data are the same problems as legacy Small Data.

The savvy shops will be prepared for these problems. They will put the proper checks in place to identify inconsistencies. They will plan for changes to formats. They will ensure that data is protected with backup and archive copies.

In short, the solutions to the problems of legacy Big Data are the same solutions to the problems of legacy Small Data.

Thursday, September 5, 2013

Measure code complexity

We measure many things on development projects, from the cost to the time to user satisfaction. Yet we do not measure the complexity of our code.

One might find this surprising. After all, complexity of code is closely tied to quality (or so I like to believe) and also an indication of future effort (simple code is easier to change than complicated code).

The problem is not in the measurement of complexity. We have numerous techniques and tools, spanning the range from "lines of code" to function points. There are commercial tools and open source tools that measure complexity.
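To show how simple the mechanics can be, here is a crude, cyclomatic-style count of decision points using Python's standard ast module. Real tools are far more sophisticated, and the sample function being measured is invented; the sketch only illustrates that measurement itself is not the obstacle:

```python
# A crude complexity measure: count decision points in Python source.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.Try,
                  ast.With, ast.BoolOp, ast.comprehension)

def complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""
print(complexity(sample))   # one comparable number per piece of code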

No, the problem is not in techniques or tools.

It is a matter of will. We don't measure complexity because, in short, we don't want to.

I can think of a few reasons that discourage the measurement of source code complexity.

- The measurement of complexity is a negative one. That is, more complexity is worse. A result of 170 is better than a result of 270, and this inverted scale is awkward. We are trained to like positive measurements, like baseball scores. (Perhaps the golf enthusiasts would see more interest if they changed their scoring system.)

- There is no direct way to connect complexity to cost. While we understand that a complicated code base is harder to maintain than a simple one, we have no way of converting that extra complexity into dollars. If we reduce our complexity from 270 to 170 (a 37 percent reduction), do we reduce the cost of development by the same percentage? Why or why not? (I suspect that there is a lot to be learned in this area. Perhaps several Masters theses can be derived from it.)

- Not knowing the complexity shifts risk from managers to developers. In organizations with antagonistic relations between managers and developers, a willful ignorance of code complexity pushes risk onto developers. Estimates, if made by managers, will ignore complexity. Estimates made by developers may be optimistic (or pessimistic) but may be adjusted by managers. In either case, schedule delays will be the fault of the developer, not the manager.

- Developers (in shops with poor management relations) may avoid the use of any metrics, fearing that they will be used for performance evaluations.

Looking forward, I can see a time when we do measure code complexity.

- A company considering the acquisition of software (including the source code), may want an unbiased opinion of the code. They may not completely trust the seller (who is biased towards the sale) and they may not trust their own people (who may be biased against 'outside' software).

- A project team may want to identify complex areas of their code, to identify high-risk areas.

- A development team may wish to estimate the effort for maintaining code, and may include the complexity as a factor in that effort.

The tools are available.

I believe that we will, eventually, consider complexity analysis a regular part of software development. Perhaps it will start small, like the adoption of version control and automated testing. Both of those techniques were at one time considered new and unproven. Today, they are considered 'best practices'.

Wednesday, September 4, 2013

Javascript is the new BASIC

The idea that Javascript is the new BASIC is not unique; others have made the same observation. I will add my humble thoughts.

BASIC, not Microsoft's Visual Basic but the elder brother with line numbers, was the popular language at the beginning of the personal computer era.

The popularity of BASIC is not surprising. BASIC was easy to learn and just about every microcomputer had it, from the Apple II to the Commodore PET to the Radio Shack TRS-80. Books and magazine articles discussed it.

Alternate languages were available, for some computers. The system I used (a Heathkit H-89) ran the HDOS and CP/M operating systems, and there were compilers for FORTRAN, COBOL, C, and Pascal. But these other languages were expensive: a FORTRAN compiler cost $150 and COBOL $395 (in 1980 dollars).

The biggest competitor to BASIC was assembly language. Assemblers were modestly priced, but the work necessary for the simplest of tasks was large.

BASIC was available, and we used it.

BASIC wasn't perfect. It was designed as a teaching language and had limited programming constructs. While it had an 'IF' statement, most variants had no 'IF/ELSE' and none had a 'WHILE' loop. Variable names were a single letter and an optional digit. It had no support for object-oriented programming. It was interpreted, which carried the double drawbacks of poor performance and visible source code. Your programs were slow, and the only way to distribute them was by giving away the source (which, given BASIC's limitations, was unreadable for a program of any significant size).

BASIC was popular and reviled at the same time. Dijkstra famously declared "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration." But it wasn't just him; we all, deep down, knew that BASIC was broken.

We were forced to program around BASIC's limitations. We learned some good habits and lots of bad ones, some of which haunt us to this day. Yet its limitations forced us to think about the storage of data, the updating of files, and the complexity of calculations.

We also looked forward to new versions of BASIC. While some computers had BASIC baked into ROM (the Commodore C-64 and the IBM PC), other computers had ways of using new versions (the IBM PC had a 'BASICA' that came with PC-DOS).

BASIC was not just the language of the day but the language of the future.

Today, Javascript is the language that is available and easy to learn. It is not baked into ROMs (well, not usually) but it is baked into browsers. Information about Javascript is available: there are lots of web pages and books.

Like BASIC, Javascript is not perfect. No one (to my knowledge) has claimed that learning Javascript will permanently stunt your programming skills, but the feeling I get from Javascript programmers is similar to the feeling I remember about BASIC programmers: They use the language and constantly hope for something better. And they are working around Javascript's limitations.

BASIC was the language that launched the PC revolution. What will Javascript bring?