Thursday, July 29, 2010

Clouds in my coffee, and my future

Just what is cloud computing? It's a little early to say, but here's my prediction:

Cloud computing is a configuration of hardware and software that offers new possibilities for applications.

We've been here before. In the early 1980s, the IBM PC and its clones offered a new combination of hardware and software. That combination opened the way for new applications, from word processing and spreadsheets to personal information managers. The model was personal computing, with programs and data for an individual. The computers were isolated (no networks) and considered "islands of automation".

The second wave was client/server applications. Made possible by network hardware and software, these applications served businesses. They let multiple people share a common set of data. (In limited ways, perhaps, but the sharing was the key to client/server.)

The third wave was web applications. These applications were made possible by the internet, connecting corporate and personal networks. (Most personal networks were a single computer, but work with me on this.) Hardware and software for servers, combined with hardware and software for the remote computers, made Yahoo and web shopping possible.

Cloud computing is the fourth wave. It follows the same pattern as previous waves: new hardware and software that enables new types of applications. For cloud computing, the new hardware and software is:

- Pervasive network (including mobile communications)
- Inexpensive servers (processors, disks, memory, operating systems, and web servers)
- Inexpensive and mobile clients
- Virtualization technologies
- Programming languages that support the scaling of applications

The pervasive network is clearly needed. You need access to apps not just from your desktop, but from your phone.

You need the ability to scale up an application. Facebook has 500 million users, a number that was unthinkable for the previous generations of technologies.

The programming languages and the constructs they offer will support the cloud. The scale issues of the cloud require that processes move from one server to another (or be capable of moving) to handle load distribution and hardware failures.

The big difference between a web app and a cloud app is that the processing for a cloud app can move from one processor to another. Traditional web apps are limited to a single server.
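Here is a minimal sketch of that difference, in Python. The SessionStore is a stand-in for a shared, external state service (my own illustration, not any particular product): because the handler keeps nothing in server memory between requests, the work can move to whichever server is available.

# A stateless handler: all working state lives in an external store, so any
# server (or a replacement server) can process the next request.

class SessionStore:
    """Stand-in for a shared, external state service."""
    def __init__(self):
        self._data = {}

    def load(self, session_id):
        return self._data.get(session_id, {})

    def save(self, session_id, state):
        self._data[session_id] = state


def handle_request(store, session_id, item):
    # Pull state from the shared store, do the work, push the state back.
    state = store.load(session_id)
    cart = state.get("cart", [])
    cart.append(item)
    state["cart"] = cart
    store.save(session_id, state)
    return state


if __name__ == "__main__":
    store = SessionStore()
    handle_request(store, "alice", "book")        # could run on server A
    print(handle_request(store, "alice", "pen"))  # could run on server B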

And we have seen this transition before. Unless you are new to the industry, you've been through the mainframe-to-PC transition, the PC-to-client/server change, and the web revolution.

Each change brings new types of applications and uses without killing the previous set. PCs grew but did not kill off mainframes. (Eventually they did take some "business" from the mainframes, but not so much as to drive them to extinction.) The web did not kill off client/server apps (client/server vendors did a pretty good job of killing themselves with their arrogance and expensive support services), and cloud computing will not kill off PC apps or web apps.

After the cloud revolution, we will still use Microsoft Office and Google search.

And we'll use a bunch of new applications too. I don't know what they will be. Facebook and Twitter are the pioneer apps, just as Visicalc was the pioneer PC app. Facebook and Twitter may thrive in the Cloud Age or they may fade. That's what makes it exciting!


Wednesday, July 28, 2010

Waiting for the ATM

The Harvard Business Review has an article on customers and their preference for self-serve kiosks and web pages over personal interactions with real people.

The popularity of self-service kiosks must pose a dilemma for business marketers. They want to reduce costs, and self-service kiosks are definitely a way to do it.

Yet I suspect that the self-service kiosks were a compromise, a business Faustian bargain to reduce costs. Marketers also want the "personal touch" -- to sell us more stuff.

We customers have had self-service things crammed down our throats since the 1980s with ATMs and telephone torture menus. We were skeptical at first, and sometimes even hostile. But after a while, we accepted them. Now it seems we like them.

The HBR article talks about a few of the customers' motivations. The authors mention the efficiency of self-service kiosks (a valid point) and the control aspect (also a valid point), but they miss what I consider the big reasons.

I speculate that customers avoid personal service representatives for two reasons, beyond the efficiency and control aspects.

Reason one: Customers are afraid that the service rep will push some other product or service on them. This puts the customer in an awkward situation: either they say "yes" and regret the purchase, or they say "no" and regret rejecting another human being. (I suppose a few folks can say "no" with no regrets at all. But the number is small -- and I suspect they have issues.) Either way, the customer walks away with a sour feeling.

Reason two: Customers are afraid that the service will be poor. Not just slow, but agonizingly slow, or perhaps even incompetent. A good way to ruin my bright sunny morning is to stop by the Post Office and partake in their ruthless drive for complacency. Transactions that (in my mind) should take one minute end up taking several. Repeat that for the ten people in front of me, and the recipe yields frustration. Yet slow is a small problem compared to wrong information or the inability to deliver service. Businesses have trimmed their workforce and trained their customer-facing personnel to handle the typical cases -- and only the typical cases. The unusual case stymies the average customer service representative, with more delays as they consult with a manager. In some cases, I don't get answers. (And this is service?)

Given the state of personal, face-to-face service, it is much better to deal with a machine.

Of course, the marketers have not asked us if we even want a personal relationship. They take it on faith that the personal touch will make us feel appreciated. (And it does, when it is done well. When it is clumsy, or forced, the effect is insincerity and the awareness of the theater of sales.)

What does this mean for software? Well, an obvious outcome is the continued use of self-service kiosks and web pages. A not-so-obvious conclusion is the use of better support software for the poorly trained sales droids.

But perhaps the biggest lesson is to understand your customers and deliver quality service, regardless of your industry.


Sunday, July 25, 2010

Three big ideas from OSCON 2010

I attended OSCON 2010, and turned my brain to mush with all of the information available. Here are the three big ideas I took away from the conference:

Cloud computing is different from web apps: Just as web apps were different from PC apps, cloud apps are different from web apps. Web apps addressed a different need in the computing world and needed different programming patterns. New languages like Perl and Python addressed the web app world better than the PC app languages of C, C++, and Pascal. With cloud computing, new patterns are needed, and new languages like Ruby (and Python) handle these patterns better than the old languages.

Infrastructure is important: The server room is not to be ignored. You may think that you can forget about servers and datacenters with the cloud, but you are wrong. Just because someone else manages them does not mean you don't have to be aware of them.

Expect the unexpected: Of the web technologies that fit cloud computing, don't ignore PHP. There is a lot of interest in PHP; many people are using it for web apps and cloud apps.

The web has served us well. The age of web apps has survived for a decade. Now the cloud apps will have their turn.

Friday, July 23, 2010

Mainframes in your pocket

Some time ago, I worked at a bank. This bank (a small one, with only fifteen branch offices and assets of less than a billion dollars) ran its business on an IBM 4331 processor with some disc drives, a few tape drives, and a printer. The processor was a refrigerator-sized box that required special power and a temperature-controlled environment.

From what I can tell, the processing power, memory, and disk capacities of their computer room were smaller than those of today's typical smartphone. (Although perhaps not including the printer.) By today's standards, the bank's computing power is minuscule, less than a college student uses for talking to friends and chatting on Facebook.

Yet even with such an increase in computing power, we are still stingy with it. Or more specifically, large organizations have processes and mindsets that are still stingy with it.

Take, for example, the process to set up a centralized processing point for some data. And assume that the remote points (either external customers or internal teams) have data not on paper forms but on computers of their own. In other words, you are building a service.

The typical process (for a large organization) is: collect requirements, design the system, specify the transfer formats, build the service (and test it), and have each "user" change their system to use the approved data format.

A recent O'Reilly Radar article noted that some large data projects accept and process data in just about any format. That is, they do not force clients to use a specific format. I find this approach refreshing and efficient.

Long-time project managers and system designers might find that statement odd. After all, we all know that the most efficient way to build a system is to use a standard input format. A standard format reduces the work for the development team and allows for easier testing.

But there are other costs to consider.

Within the organization, such a process pushes the cost to each of the consumer teams. Each and every consumer team must convert their data into the standard format. Thus you have not saved costs but multiplied them.

If your clients are all external customers, then you have transferred the cost of formatting to your customers. Such a process is considered the norm, for some reason. Pushing the task of formatting data onto internal teams (or external customers) is given a free pass. If you were to set up similar hoops for other business tasks (such as demanding invoices be in German with amounts in Renminbi, and all other correspondence in Brazilian Portuguese) the other business users would revolt. (Or worse, quietly take their business elsewhere.) Yet the same customers accept the cost of formatting data.

If you have a single client and it is an internal one, then you have transferred the cost of formatting to that team. With only one user, there is no net cost to the organization -- either the service team incurs the cost or the user team incurs the cost. (I'm assuming that the cost is about the same for both teams.)

But if you have multiple user teams, you have pushed the cost onto all of the user teams. Each team must expend some effort to convert their data into the approved format.

This approach is not minimizing cost to the organization; it is distributing costs to different teams (and probably different budgets).

The counter argument runs thus: forcing the serving team to accept all formats increases their development costs, possibly more than the savings in the consumer teams since the serving team must learn the nuances of the different formats. And this argument is also correct -- when the formats are not commonly accepted formats. When the data is sent in proprietary formats, this argument holds.

The argument fails when the data is in common formats (XML, YAML, Microsoft Office formats, OpenOffice formats, and so on). Here the formats are well understood.
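As a rough sketch of what accepting "any common format" can look like, here is a small Python function that normalizes JSON, CSV, and simple XML payloads into one internal structure. The record layout and function name are my own illustration, and only standard-library parsers are used:

# Accept client data in several well-understood formats and normalize it to
# one internal form (a list of dictionaries). Illustrative, not exhaustive.
import csv
import io
import json
import xml.etree.ElementTree as ET


def parse_payload(payload, content_type):
    if content_type == "application/json":
        data = json.loads(payload)
        return data if isinstance(data, list) else [data]
    if content_type == "text/csv":
        return list(csv.DictReader(io.StringIO(payload)))
    if content_type == "application/xml":
        root = ET.fromstring(payload)
        return [{field.tag: field.text for field in record} for record in root]
    raise ValueError("unsupported format: " + content_type)


if __name__ == "__main__":
    print(parse_payload('[{"account": "123", "amount": "9.95"}]', "application/json"))
    print(parse_payload("account,amount\n123,9.95\n", "text/csv"))
    print(parse_payload(
        "<records><record><account>123</account><amount>9.95</amount></record></records>",
        "application/xml"))

The consuming teams keep their existing formats; the one serving team pays the (modest) conversion cost once.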

At this point, the argument becomes: We cannot use those formats; we don't have the processing power to convert our data into them. We need custom formats for efficiency.

Efficiency?

Really?

When college students walk around with more processing power in their pocket than several IBM 370 mainframes?

I find that hard to believe. We have the processing power. It's time to leverage that power to make business processes more efficient.

Tuesday, July 20, 2010

Effectiveness and efficiency

A certain large company is converting its IT shop from a development organization to a project management organization. It has succumbed to the siren call of outsourcing, and has changed its development process to impose standard processes across all projects. The goals are reduced costs, faster development, and better prediction of project completion (in terms of time and quality).

But it is not admitting that it is changing from a development shop to a project management shop. The transition is obvious to anyone paying attention: centralization of power in the PMO, creation of architect positions, outsourcing of coding, testing, and support, and -- most telling -- the emphasis on training courses for project management. By reading the actions, the transition is clear. But no one on the management team has stepped forward and stated the transition as an objective. (Possibly to avoid the departure of their best programmers.)

The project managers are now "in charge" of the project. They call the shots, make the schedules, and coordinate the activities across teams and within teams. They define the process for all of the teams.

The managers are using the "meta-programmer" model, the one advocated by Watts Humphrey (although he did not use that name). They are putting their trust in the process (and a few key architects) to produce the software with the right functionality, at the projected cost, and within the projected schedule. A few key people will make the design decisions for the software, and the rest of the team will be cogs in the development machine. (Interchangeable, outsourced cogs.)

They think that the process will narrow the gap between the top performers and the bottom performers. That is, they will raise the floor of performance.

The fault is assuming that they can raise the floor without lowering the ceiling. They think that the gains will all be upwards. But the process they impose will narrow the range, not by raising the floor but by lowering the ceiling.

The poor performers on the team will not be helped by the process. They will continue to make mistakes and use poor coding techniques. (What the process may do is identify the poor performers. Their ejection from the project may improve the team's performance.)

The best performers on the team will be hindered by the process. The process fills a technical person's schedule, allocating all time to specific tasks. This leaves no time for creativity and experimentation. At best, the high performers will be frustrated; at worst, they will deliver results of lower quality.

In other words, the ceiling is absolute but the floor is full of holes.

The managers have confused effectiveness with efficiency. They will get an efficient process. But while they will get their desired level of efficiency (so many requirements implemented per release, so many tests run per day, so many customer calls handled per hour), they will lose the creative contributions of team members. The success of the project will rest on the creativity of the key architects, folks who are removed from the code, the testing, and the customer support tasks.

Software development is still an art. As much as we want to think of it as engineering, as a discipline, as something that can be analyzed and improved, it is not a science. It is not completely predictable. There remains an element of necessary inspiration. Any process that removes that inspiration will result in software that meets specifications but is unusable.

Thursday, July 15, 2010

The temptation of full utilization

People are still talking about "full utilization", meaning the full use of hardware. Even today I see articles on the full utilization of server equipment.

Full utilization is the complete use of the equipment in your organization. Or, taking a different view, purchasing no more than what you need. If you purchase twenty servers but use them at only fifty percent utilization, then you could have purchased only ten servers. Instant savings!

My thoughts on the notion of "full utilization" are: How quaint! How so nineteen-seventies! While you're at it, break out the disco music and wide lapels!

The notion of full utilization made sense when hardware was expensive. Back in the day, when companies leased time on mainframes with less capacity and power than today's typical smartphone, the idea made sense. Computing power was expensive, and you wanted to use what you paid for and no more. Companies even hired system administrators to tune their systems to keep them running at 95 to 98 percent utilization.

In today's world, we don't have to worry about utilization. Not in the nineteen-seventies sense of the word, at least. Hardware is cheap. I suspect that the purchasing process is more expensive than the server itself, with multiple levels of review and approval in most organizations.

Instead of targeting hardware, look at the overall costs. If you find that the bulk of your costs go to internal processes, then look there for improvements. Go for the biggest bang, the biggest savings.

Cloud computing is your friend in this effort. The cloud vendors are very good at billing for resources, and perhaps not so helpful at tuning your environment. They have little incentive to reduce the monthly bill. But they provide a scalable environment, one that can expand and shrink as you need. With the proper diligence, reporting, and planning, you can manage your hardware costs, yet still scale up when you need. That's what the cloud buys you.

Focus on the true expenses of your organization: people. People are the most expensive (and precious) of your resources -- if you stoop so low as to consider people "resources". Grow your people's skills and help them build their talents. That's where you will find the payback.


Wednesday, July 14, 2010

All the lonely people, where do they all come from?

I've ranted that Microsoft products are designed for big shops. It seems that other folks agree -- although they don't think it's a problem.

The July edition of DevProConnections magazine has a guest editorial by Juval Lowy. (Mr. Lowy is a software architect and a principal of IDesign, specializing in .NET architecture consulting and advanced training. He is also Microsoft's regional director for Silicon Valley.)

His editorial advocates for the positions of architect (an expert in Microsoft's Windows Communication Foundation), UI designer (an expert in Windows Presentation Foundation), and technical business analyst (in reality a Windows Workflow Foundation expert) for each and every development project. In his words: "until industry adopts the design role ... the technologies introduced with .NET 3.0 won't deliver on their potential" (emphasis added). He compares the need for these distinct roles to the need for separate QA members of the development team.

My impression is that this position is at best justification for his consultancy, and at worst a symptom of technology run rampant. In his view, a development team must have the three specialists for WCF, WPF, and WF, in addition to a dedicated test team, before you hire a single programmer. That's a lot of people (and a lot of salary dollars) for a project. (And remember, Lowy recommends this configuration for each project!)

Microsoft systems and tools are designed for big systems and big customers. The large, dozen-programmer projects can afford these distinct roles, and probably should have them. But lots of projects are smaller, with perhaps one or two developers who also run the tests and support the customers. Those projects would be hard pressed to hire additional staff for specialists.

If the technology is so complex that you need multiple people to make it work, even for small projects, then you have failed. At least, you have failed for a large number of potential customers. You succeed for the largest projects, where work is outsourced to multiple teams and you have a big consulting shop on site to manage the project. Big consulting shops make money by supplying resources, and the logic of distinct roles means they can supply the bodies ... and bill for them.

I recognize that the views are Mr. Lowy's and not those of Microsoft. Microsoft may agree or not. Their recent advertisements have proclaimed "designed for big", so they quite possibly may agree.

But in my mind it is the wrong approach. It is chasing the big money and ignoring the little guys. In the short run, yes, it works. But things change in the long run. In the long run, the little guys get bigger, and they will remember the tech that made them successful. And they will most likely stick with the tech that they know -- which won't be the big-team tech from Microsoft.


Tuesday, July 13, 2010

When good habits turn bad

Programming is a complex task, and we develop habits to make things easier. Habits let us "forget" certain things, by making them second nature. Usually this is a good thing, since we cannot concentrate on every aspect of the programming task.

For example, the keyboard. I have been using keyboards for many years, and my fingers "know" the location of the keys. I do not have to hunt for specific keys. (I do remember the days when I was unfamiliar with the keyboard and had to search for unfamiliar characters such as the "at" sign.)

Some habits are passed from one generation to the next. If you ask a modern-day programmer to write a "for" loop, they will probably write this:

for (int i(0); i < 10; i++) {
    ...
}

Modern day languages such as C#, Perl, Python, and Ruby allow for more elegant constructs, but let's assume we want an "old style" "for" loop.
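(As a quick aside, here is what those more elegant constructs look like in Python; we will come back to the old-style loop in a moment.)

# The same counting loop, with the index bookkeeping handled by the language.
for i in range(10):
    print(i)

# More often there is no counter at all; we iterate over the items directly.
names = ["alpha", "beta", "gamma"]
for name in names:
    print(name)

# And when the index really is needed, enumerate supplies it.
for index, name in enumerate(names):
    print(index, name)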

Programmers have the habit of using the variable i for simple loops. Why this variable? I've seen lots of code, and almost all of it uses the variable i. There is no reason to use the letter i for a loop counter (other than it being short for the word "index"). If programmers picked letters at random, we should see a wide distribution of letters (and possibly longer names) for loop counters. Yet the vast majority use i.

My conjecture is that current-day C# programmers were taught by C++ programmers, who used i. The C++ programmers were taught by C programmers, who were taught by FORTRAN programmers. Here we find something interesting.

In FORTRAN, the name of a variable was significant: it denoted the type of the variable. Variables with names beginning with the letters I, J, K, L, M, or N were integers; the rest were "reals" (floats). Variables did not have to be declared; the type was implied by the name. "DO" loops (the FORTRAN equivalent of "for" loops) required an integer index variable.

Since FORTRAN syntax required specific variable names, programmers developed habits for variable names. The letters I, J, and K were used for loop counters.

The FORTRAN programmers developed the habits and later trained the C programmers. The C programmers adopted the habits and trained the C++ programmers ... and you know the rest. Modern-day programmers use the habits of FORTRAN, without thinking.

Variable names for loops are fairly harmless. But habits extend beyond names, and we carry them in our heads without questioning them. Here are some habits that we should re-think (a short sketch after the list shows a couple of them in code):

- Memory is expensive. Use the smallest memory structure possible.
- Re-use variables for different purposes.
- I/O is expensive. Cache values when possible.
- Use short names for variables. Shorter names mean less to type! (Keypunch?)
- Changing a function is risky. Better to copy it and change the copy.
- Object-oriented programs are slow. Use procedural code.
- Java programs are slow. Stick with C or C++.
- Hardware is expensive and therefore will remain in place for a long time. Don't code for changes.
- Programmers are cheap. Save money by not spending on tools.
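Here is a small illustration (in Python, purely hypothetical code) of two of those habits -- reused variables and terse names -- next to the same logic written without them:

# Old habit: one short variable, reused for unrelated purposes.
def old_style(prices):
    x = 0
    for p in prices:
        x = x + p          # here x is a running total...
    x = x / len(prices)    # ...and here it becomes an average
    return x

# Without the habit: each value gets its own descriptive name.
def new_style(prices):
    total = sum(prices)
    average = total / len(prices)
    return average

print(old_style([2.0, 4.0, 6.0]))  # 4.0
print(new_style([2.0, 4.0, 6.0]))  # 4.0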

Habits can be helpful, but outdated habits can be harmful. It ain't what you don't know that hurts you, it's what you know that ain't so!

Don't get hurt!


Monday, July 12, 2010

Going rogue

The May/June issue of CODE magazine (it used to stand for "COmponent DEveloper") breaks a precedent -- non-Microsoft technology. This is a big deal for the magazine.

CODE magazine started in 2000. It covered Microsoft and .NET technologies (also COM, DCOM, and the like), and in those first ten years it stayed neatly within the Microsoft world. A review of articles shows that it stayed completely within the Microsoft sphere, with a side trip to Crystal Reports. (Crystal Reports is bound tightly to Windows and .NET, so I consider it well within the Microsoft environment.)

Things changed in 2009, with articles on Twitter and open source. I did not catch the articles; they could have been Microsoft-centered.

This issue (the May/June issue) has two articles on Ruby: one on Ruby on Rails and the other about Ruby, rake, and other Ruby tech. These articles are not specific to Microsoft and Windows and .NET; they apply to anyone using Ruby and Rails.

This is a big shift. CODE magazine has gone from a Microsoft cheerleader to a pragmatist, writing articles that it thinks will attract readers.

I've noticed one other change. I'm not sure when it happened, but Microsoft has stopped advertising in CODE magazine. I know of no relationship between EPS (the publisher behind CODE) and Microsoft, other than the arm's-length advertiser/publisher relationship. Yet perhaps there was pressure (or perceived pressure) to keep content in the Microsoft world while Microsoft was paying some of the bills. Of course, when Microsoft stopped advertising, the pressure (real or perceived) would dissipate.

Is the appearance of tech-neutral articles driven by the absence of Microsoft advertising? I don't know. But I am glad to see them. The folks at CODE do a decent job of writing. There are too few resources for good technical information, and we should encourage the folks to do good work. CODE magazine is on that list.


Thursday, July 8, 2010

The Tyranny of Dates

The biggest problem I have with project planning software -- especially Microsoft Project -- is the date fields.

Microsoft Project (and other products) allow you to define a task, giving it a name and other attributes. You can specify precedent and dependent tasks, you can specify a duration, and you can assign people.

Project "helpfully" fills in the start date and end date for the task. For any task, there will be defined start and end dates. In this there is no question, no choice, no alternative.

My problem is that I don't work that way. Here's how I put together a project plan:

First I agree with the team on the objective. Then we list the major tasks -- the ones that we can list. We recognize that we will miss a few, depending on the project.

After agreeing on the tasks, we put them in sequence. We identify the dependencies.

Then we look at the effort for each one, estimating the duration of each task.

Finally we put dates in, which gives us our first draft of the project plan.

Sometimes this process is fast, and a single meeting can suffice. Sometimes this process takes weeks.

But until we get to the last phase, I don't want to see dates. Yet Microsoft Project insists on them. If we don't specify a date, it puts in some arbitrary value like January 1, 1980.

Microsoft Project has the notion of dates, but not null dates. It's as if the project manager had configured the tool with an 'ADD COLUMN START_DATE DATE NOT NULL, END_DATE DATE NOT NULL' script.

Worse, Project insists on precise dates. It accepts a specific date (and time, which can be deduced by scheduling some tasks for less than a full 8-hour day). This is not how we handle our planning.

We need a date field that accepts the value "no date specified". We need a duration field that accepts the value "10 days, with a 20 percent chance of over or under by 2 days".
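Here is a minimal sketch, in Python, of what such a task record might look like. The field names and the spread-based duration are my own illustration of the idea, not anything Project offers:

# A task whose dates may be absent and whose duration carries an explicit
# uncertainty, instead of a falsely precise single value.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class FuzzyDuration:
    expected_days: int
    spread_days: int   # e.g., 10 days, give or take 2

    def best_case(self):
        return self.expected_days - self.spread_days

    def worst_case(self):
        return self.expected_days + self.spread_days


@dataclass
class Task:
    name: str
    duration: FuzzyDuration
    start: Optional[date] = None   # "no date specified" is a legitimate value
    end: Optional[date] = None


task = Task("Build data import", FuzzyDuration(expected_days=10, spread_days=2))
print(task.start)                                      # None, not January 1, 1980
print(task.duration.best_case(), task.duration.worst_case())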

Using an arbitrary precise date or duration is not good enough. In fact, I consider it dangerous. Why not just plug in the value of 10 days? Because Project renders it as 10 days, and I have no way to indicate that it is a fuzzy value. I also want to look at the best case and worst case for all of the tasks, something that requires regeneration of the project schedule.

But the most dangerous thing about this behavior is that we have to bend to the software. We have to change our process to conform to software designed by someone else, probably someone who does not run projects.

I want software and computers to work for us, not the other way around.


Monday, July 5, 2010

Types of requirements

I am amused by the teams that divide their requirements into "functional" and "non-functional" groups.

Invariably it is in shops that used to have only functional requirements, and now have a second category.

The first group of requirements contains those specifications that are derived from business operations. They include things such as inventory levels, pricing functions, and billing terms.  The natural name for these specifications is "functional requirements" because, in the minds of the business analysts, there is a direct mapping from the software back to the business world.

The second group of requirements refers to aspects of the system that are not tied to business operations. These can include things such as platforms, upgrades and updates, response times, system availability, and GUI design.

Frequently a development team starts with a single list of requirements, and then starts creating subgroups within the list. The non-functional requirements are usually grouped together, since they don't fit in with any business initiatives. Sometimes the non-functional requirements have no official sponsors, or no sponsors from within the business.

The action of dividing requirements is a "lumping and splitting" problem. We humans associate like things and group them. We also split out unlike things. It's part of our psychology, and we should leverage it with software development and the management of requirements.

But the simple division of "functional" and "non-functional" is wrong. Even the term "non-functional" is a clue that one has given little thought to the problem.

The term "non-functional" is an artificial one, akin to "manual transmission". (Prior to the introduction of the automatic transmission, a manual transmission was simply a transmission. No adjective was needed.) The term "non-functional" is needed to differentiate some requirements from the group of functional requirements.

The term "non-functional" is a negative term. It tells us what the requirement is not, but it does not state what the requirement *is*.

If you are using the term "non-functional requirements", then you are giving insufficient attention to those requirements. In addition, you implicitly make them second-class citizens in the requirements world.

Here is a list of terms that can replace "non-functional requirements":

- Performance requirements
- Maintainability requirements
- Availability requirements
- Upgrade requirements
- User interface requirements
- Business continuity requirements

The list is incomplete and perhaps not exactly what you need. Take the ones that make sense, and add others as you need.

Many folks will avoid using the specific terms and keep to "non-functional". The term is in place, everyone on the team agrees with it and understands it, and there is no clear benefit to these new terms. They would say that it makes no business sense to change.

And they will be right... until the list of non-functional requirements grows... and someone wants to split it.


Sunday, July 4, 2010

The question of value

Does source code have value? (And if it does, can we assign a monetary value to it?)

I'm talking about source code, not executable code.

Source code is not sold in stores or traded in a market. Without the market to guide us, we must devise other means to measure the value.

A simple measure is the total value of construction. Sum the costs of development: programmer salaries, testing, workstations, and compilers. While a simple measure, it is wrong. Our history of projects shows us that many projects, some funded with millions of dollars, result in useless systems. The value of the result is not the value of the inputs.

The open source folks believe that source code has no value. Or rather, there is value in source code, and there is more value in sharing the source code than keeping the source code "under wraps".

I think that the value of source code can be related to the value of the executable. Yet that may not help. It's hard to assign a value to the executable. Is Microsoft Word really worth the sticker price? (And what about Microsoft Internet Explorer? Since it is included in Windows, the price is effectively zero. And the price for Firefox is clearly zero.)

An economist might say that the value of the software is its capabilities. In this sense, the value of Microsoft Word is not intrinsic to the code but in the number of .DOC files in your collection.

This leads to an interesting effect: When Word was the only word processor that could read .DOC files, it had a monopoly on the market, and a certain value. But when OpenOffice was released, with its abilities to read and write .DOC files, Word lost its monopoly and some of its value. (One no longer needed Word.) When Google Docs was released (and it can read and write .DOC files), the value of Word dropped again.

That analysis does not help us get closer to a monetary value, but it does show that the value changes. And if the value of the executable changes in response to external events, then I assert that the value of source code changes in response to external events.

Perhaps this idea is not so strange. Other things in life have values that respond to external events. Houses, cars, even tomatoes. Why not source code?

Such an idea may be received uncomfortably by some managers and accountants. The folks who assign value based on the input effort (and US tax law and accounting practices encourage this thinking) are poorly equipped to evaluate changes to the value based on external events.

Yet I think we must think of the value of source code, for the very simple reason that we must know when to abandon it.