Do we need a language to kick around?
It seems that we do. From the earliest days of computing, people have been critical of specific programming languages.
Those who had learned machine language and assembly code were skeptical of FORTRAN and horrified at the output of the COBOL compiler.
When I joined the field, BASIC, Pascal, and C were in the ascendant, and each had its fans. In the microcomputer arena, BASIC was dominant and thus admired by many and despised by many (with some folks living in both camps). Pascal and C had their followers, and there were other languages for the explorers (CBASIC and Forth). The clear winner in the "most despised" race was BASIC.
In the golden age of Microsoft Windows, the dominant languages were Pascal (briefly), followed by C, and then a tussle between Visual Basic and C++. Both Visual Basic and C++ were liked and disliked, with strong loyalties.
Sun and Microsoft introduced Java and C#, which pulled people away from the Visual Basic and C++ arena and into a new, complex dispute. The argument of language superiority was clouded by the assets of the run-time system and the backing vendor. To this day, people have strong preferences for one over the other.
Today we see discussions, with new languages Scala, Clojure, and Lua compared to Python, Ruby, and Java. But these discussions are less heated and more educational. They are civilized discourse.
My theory is that we use languages as a proxy for independence, and our arguments are not about language or compiler but about our ability to survive. Using FORTRAN or COBOL meant committing to IBM (despite the portability of the languages), and people feared IBM.
In the microcomputer age, programming in BASIC meant committing to Microsoft, but the relationship was complex. Microsoft owned the language, but Digital Research owned CP/M (the de facto standard operating system), so we had two brutes to fear.
Now that Oracle has purchased Sun and acquired Java, I expect the Java/C# disputes to increase. Sun was the rebel alliance against the imperial Microsoft, but Oracle is a second empire. Both can threaten the independence of smaller organizations.
I also expect that more people will be kicking Java. Those who want independence will look at newer languages; those who want security will look to Java or C#. It may be an imagined security, since the vendor can pull the syntactical rug out from under your project at any time; consider the changes in Visual Basic over time.
The new languages have no large empire behind them. (Yes, Scala and Clojure live in the Java run-time, but they are not viewed as tools of the empire.) With no bogeyman behind them, there is little reason to castigate them. They have little power over us.
It is the power of large vendors that we fear. So yes, as long as we have large vendors backing languages, there will be languages to kick around.
Saturday, July 9, 2011
Wednesday, July 6, 2011
The increasing cost of C++ and other old tech
If you are running a project that uses C++, you may want to think about its costs. As I see it, the cost of C++ is increasing.
The cost of compilers and libraries is remaining constant. (Whether you are using Microsoft's Visual Studio, the open source gcc, or a different toolset.)
The increasing cost is not in the tools, but in the people.
First and most obvious: the time to build applications in C++ is longer than the time to build applications in modern languages. (Notice how I have not-so-subtly omitted C++ from the set of "modern" languages. But modern or not, programming in C++ takes more time.) This drives up your costs.
Recent college graduates don't learn C++; they learn Java or Python. You can hire recent graduates and ask them to learn C++, but I suspect few will want to work on C++ to any large degree. The programmers who know C++ are the more senior folks, programmers with more experience and higher salary expectations. This drives up your costs.
Not all senior folks admit to knowing C++. Some of my colleagues have removed C++ from their resumes, because they want to work on projects with different languages. This removes them from the pool of talent, reducing the number of available C++ programmers. Finding programmers is harder and takes longer. This drives up your costs.
This effect is not limited to C++. Other "old tech" suffers the same fate. Think about COBOL, Visual Basic, Windows API programming, the Sybase database, Powerbuilder, and any of a number of older technologies. Each was popular in its heyday; each is still around, but with a much-diminished pool of talent.
When technologies become "old", they become expensive. Eventually, they become so expensive that the product must either change to a different technology set or be discontinued.
As a product manager, how do you project your development and maintenance costs? Do you assume a flat cost model (perhaps with modest increases to match inflation)? Or do you project increasing costs as labor becomes scarce? Do you assume that a single technology will last the life of the product, or do you plan for a migration to a new technology?
Technologies become less popular over time. The assumption that a set of unchanging technology will carry a product over its entire life is naive -- unless your product life is shorter than the technology cycle. Effective project managers will plan for change.
Friday, July 1, 2011
Looking out of the Windows
According to this article in Computerworld, Microsoft is claiming success with Internet Explorer 9, in the form "the most popular modern browser on Windows 7", a phrase that excludes IE6 and IE7.
This is an awkward phrase to use to describe success. It's akin to the phrase "we're the best widget producer except for those other guys who produce widgets offshore".
I can think of two reasons for Microsoft to use such a phrase:
1) They want a phrase that shows that they are winning. Their phrasing, with all of the implied conditions, does indeed show that Microsoft is winning. But it is less than the simple "we're number one!".
2) Microsoft views success through Windows 7, or perhaps through currently marketed products. In this view, older products (Windows XP, IE6) don't count. And there may be some sense in that view, since Microsoft is in the business of selling software, and previously sold packages are nice but not part of this month's "P&L" statement.
The former, despite the dripping residue of marketing, is, I think, the healthier attitude. It is a focused and specific measurement, sounds good, and pushes us to the line of deception without crossing it.
The latter is the more harmful. It is a self-deception, a measurement of what is selling today with no regard for the damage (or good) done yesterday, a willful ignorance of the effects of the company on the ecosystem.
The bottom line is that Internet Explorer is losing market share. This may not be the doom that it once was, with apps for iPhone and Android phones increasing in popularity. Indeed, the true danger may be in Microsoft's inability to build a market for Windows Phone 7 and their unwillingness to build apps for iOS and Android devices.
Monday, June 27, 2011
Run your business like an open source project
Businesses must make decisions about the different activities that they perform. They often classify activities as "drivers" and "supporting tasks" (or as "profit centers" and "cost centers"). Profit centers get funding; they are essential to the business and usually have a measurable return on investment (ROI). Cost centers get funding, grudgingly. They are considered a necessary evil and by definition they cannot provide ROI.
If you want to improve your ability to classify drivers and supporting tasks, look at open source projects. Not the software, but the projects that build the software. These projects often run with little or no funding, yet they succeed in delivering usable, quality software. They must be doing something right.
Any project that delivers usable, quality software is doing several things right. From setting objectives to managing resources, from marketing to coordinating activities, successful projects are... well, successful.
Open source projects are very good at identifying the drivers in their project and focusing on them. They are also good at separating the supporting tasks, and either outsourcing them or automating them. Through outsourcing and automation, the project can drive the cost of supporting tasks to zero. Here, "outsourcing" does not mean "hire someone else" but "use external resources", which could be open web sites like Google Docs.
Open source projects use metrics to help them make decisions. They automate testing, which allows them to run tests often (and inexpensively). The results of the tests inform them of the quality of their product.
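The pattern described above can be sketched in a few lines. This is a hypothetical example (the function under test and its numbers are invented for illustration): a test suite that is cheap to run often, with the pass/fail result serving as the quality metric.

```python
import unittest

def apply_discount(price, rate):
    """The function under test: apply a percentage discount to a price."""
    return round(price * (1 - rate), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent(self):
        self.assertEqual(apply_discount(100.0, 0.10), 90.0)

# Run the suite programmatically; the result object is the metric --
# it reports how many tests ran and whether any failed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "failures:", len(result.failures))
```

Because the whole run is scripted, it costs nothing to repeat after every change, which is exactly what makes the metric informative.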
Open source projects don't think that they need special versions of everything. If a plain version control system is usable and lets them achieve their goals, they use it. If a plain compiler gets the job done, they use it. If allowing people to work around the world on schedules of their own choosing gets the job done, the project allows contributors to work around the world on their own schedules.
Very few open source projects -- successful open source projects -- can afford egos and custom stationery.
So are they all that different from a business?
Friday, June 24, 2011
The other disappearing papers
The internet age has brought about the demise of several newspapers, and threatens the entire industry. Many folks have blogged about this and even newspaper reporters have written stories about it.
But there is another set of papers that is threatened by the internet, a set of papers about which no one (to my knowledge) has worried, blogged, or reported.
That set of papers is the rental apartment advertising books.
You know these books. They live in flimsy (but apparently indestructibly flimsy) boxes on street corners in downtown neighborhoods. They list page after page of beautiful apartment buildings, each nicer than the last.
At some point, the internet is going to eat these advertising books. As with newspapers, they are expensive to produce and distribute. Their demise is inevitable; it will be a simple business decision to ax them when they produce less revenue than cost.
Their disappearance will not cause hue and cry. There will be no period of mourning, no ceremony for their passing. They are, unlike newspapers, artifacts of pure advertising. They have no "content", in the internet/web sense.
These little booklets of advertisements are enablers of commerce, nothing more.
And when they are gone, it means that we have built other mechanisms to enable commerce, and people have accepted those other mechanisms. Those mechanisms might be web pages, or e-mail lists, or spiffy iPhone apps, or something else. But they will exist, for commerce will exist.
The demise of the paper advertisement will signal the acceptance of the on-line advertisement.
Monday, June 20, 2011
The parts are worth more than the whole
Some number of years ago, I would attend computer fairs. There was a particularly nice one near Trenton, NJ. The fair had a large flea market with people selling old, used hardware (and some software). Just about anything would be available, from PC parts to old eight-inch floppy disk drives to telephone equipment to minicomputer line printers... lots of things.
One of the most frustrating things was looking at old hardware that had been built for a specific system configuration. The device (whatever it was) would have various connectors; some for data and some for power. These connectors were not standard but custom, used only by that device (and the thing to which it attached).
Such custom connectors may have made sense -- perhaps there was no standard, or the standard did not meet the needs of the system. Or perhaps the manufacturer used a non-standard connector to lock customers into their hardware. Whatever the reason, the non-standard connectors affected the components when the system was decommissioned: they were unusable.
In contrast, the IBM PC used standard parts (the IBM PC set a new standard, but it was adopted by the industry) and parts, when a system was decommissioned, were still usable. When one upgraded from an IBM PC to an IBM PC-AT, one could swap in the old video and accessory cards. One could keep the display monitor, and the printer. The parts had value after the system lost its value.
Software systems have similar behavior. Software systems are built with components and layers (or so the pretty architecture diagrams lead us to believe) but these "components" often have the equivalent of the custom connectors of the old one-system-only hardware. Such custom components can be used within the system, but when the system is decommissioned, the components cannot be used elsewhere. Thus, when the system is decommissioned, the value of the components drops to zero. A component that can be used by one and only one system has no value outside of that system.
A good system architect will see not only the system but the components, and ensure that the components (as well as the system) have value. That means that the components must use standard APIs and be able to stand alone, so other systems can use them. A re-usable component must accept data in a reasonably neutral format, not one that is specific to a system or one that uses obscure object types. A re-usable component must run with a minimal number of dependencies, not the entire system. A re-usable component must be tested independently of the one system, to ensure that it can be run independently of the one system.
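The "standard connector" idea can be shown in code. This is a minimal, hypothetical sketch (the function and data are invented for illustration): the component accepts plain, neutral data rather than a system-specific object type, so any caller that can produce that data can use it, with no dependency on the host system.

```python
import json

def summarize_orders(orders):
    """Total order amounts per customer.

    Accepts a list of plain dicts (the neutral "standard connector"),
    not a system-specific collection class, and depends only on the
    standard library -- so it can stand alone in any system.
    """
    totals = {}
    for order in orders:
        customer = order["customer"]
        totals[customer] = totals.get(customer, 0) + order["amount"]
    return totals

# Any system that can produce JSON can drive the component.
data = json.loads('[{"customer": "acme", "amount": 10},'
                  ' {"customer": "acme", "amount": 5}]')
print(summarize_orders(data))  # {'acme': 15}
```

Because the input format is neutral and the dependencies are minimal, the component keeps its value when the original system is decommissioned.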
Building re-usable components helps retain value of software, and also retains value of people. Re-usable components can be used in other systems. (By definition.) Using components in other systems leverages the knowledge of the development team across multiple systems. In the long term, this can reduce your staffing costs.
Sunday, June 19, 2011
From the bottom up
Fixing software is hard. Experienced managers of development projects have memories (often painful memories) of projects to redesign a system. Redesign efforts are hard to justify, difficult to plan, and risk breaking code that works. All too often, a redesign project ends late, over budget, and with code that is just as hard to maintain. The result is the trifecta of project failures.
A big part of the failure is the approach to redesign. Projects to fix code often use one of the following approaches:
- Re-write the entire program (or system)
- Improve code along functional components (the "room by room" approach)
- Improve code as you modify it ("fix it as we make other changes")
Re-writing the system is expensive. Often too expensive to justify. ("You're going to spend how many dollars and how many months? And the result will be the same as what we have now?" ask the senior managers. And there is no good answer.) Plus, you have the risk of missing important functionality.
Changing the system in parts, akin to replacing tires on a car, is appealing but never seems to work. Components have too many connections to other components, and to truly fix a component one must fix the other components... and you're back to re-writing the entire system. (Except now you don't have the time or the budget for it.)
Fixing code as you make other changes (as you are implementing new functionality) doesn't work. It also has the "dependency effect" of the component approach, where you cannot fix code in one module without fixing code in a bunch of other modules.
Managers are poor at planning redesign projects, because they manage software like they manage the organization: from the top down. While top-down works for people, it does not work for code.
People organizations are, compared to software, fairly simple. Even a large company such as Wal-mart with its one million employees is small, compared to software projects of many million lines of code. Lines of code are not the same as people, but the measures are reasonable for comparison. Any line of code may "talk" with any other line of code; that is not true for most organizations.
People organizations are, compared to software, fairly smart. People are sentient beings and can act on their own. A corporate manager can issue directives to his reports and expect them to carry out orders. Software, on the other hand, cannot re-organize itself. Except for a few specialized AI systems, software must be guided.
I have worked on a number of projects. In each case, the only method that worked -- really worked -- was the "bottom up" approach.
In the bottom up approach, you start with the smallest components (functions, classes, whatever makes up your system), redesign them to make them "clean", and then move upwards. And to do this, you must know about your code.
You start with a survey of the entire code. You need this survey to build a tree of the dependencies between functions and classes. Use the tree to identify the code at the bottom of the tree -- code that stands alone and does not depend on other code (except for standard libraries). That is the code to redesign first. Once you fix the bottom code, move up to the next layer of code -- code that depends only on the newly-cleaned code. Repeat this process until you reach the top.
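The survey-then-order process can be sketched with a topological sort. This is a hypothetical example (the module names and dependency map are invented for illustration; in practice the map would come from your survey of the code): given which modules each module depends on, it produces the redesign order, bottom first.

```python
# Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

# Map each module to the modules it depends on, as gathered by the survey.
deps = {
    "report":    {"formatter", "db"},
    "formatter": {"strings"},
    "db":        {"strings"},
    "strings":   set(),  # depends only on standard libraries
}

# static_order() yields each module only after all of its dependencies,
# so the list starts at the bottom of the tree and ends at the top.
order = list(TopologicalSorter(deps).static_order())
print(order)  # "strings" comes first, "report" comes last
```

Working through the modules in this order means each redesign touches only code whose own dependencies are already clean, which is the point of the bottom-up approach.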
Why the "bottom up" method for software?
Software is all about dependencies. Fixing software is more about changing dependencies than it is fixing code. Redesigning software changes the dependencies; improving code in the middle of a system will affect code above and below. Changes can affect more areas of the code, like ripples spreading across the surface of a lake. Starting at the bottom lets us change the code and worry about the ripples in only one direction.
Why not start at the top (with ripples in only one direction)? Changes at the top run the risk of leading us in the wrong direction. We think that we know how the software should be organized. But this impression is wrong. If we have learned anything from the past sixty years of software development, it is that we do not know, up front, how to organize software. If we start at the top, we will most likely pick the wrong design.
On the other hand, starting at the bottom lets us redesign the smallest components of the system, and leverage our knowledge of their current uses within the system. That extra knowledge lets us build better components. Once the smallest components are "clean", we can move to the next level of components and redesign them using knowledge of their use and clean subcomponents.
If anything, starting from the bottom reduces risk. By working on small components, a team can improve the code and make definite gains on the redesign of the system. The smaller objectives of bottom-up redesign are easier to accomplish and easier to measure. Once made, the changes are a permanent part of the system, easing future development efforts.
Software is not people; it does not act like people and it cannot be managed like people. Software is its own thing, and must be managed properly. Redesign projects from the top down are large, expensive, and risky. Projects from the bottom up have a better chance of success.