I say "git" in the title of this post, but this is really about distributed version control systems (DVCS).
Git is easy to install and set up. It's easy to learn, and easy to use. (One can make the same claim of other programs, such as Mercurial.)
It's not simply the installation or operation that I find interesting about git. What I find interesting is the organization of the repositories.
Git (and possibly Mercurial and other DVCS packages) allows for a hierarchical collection of repositories. With a hierarchical arrangement, a project starts with a single repository, and then as people join the project they clone the original repository to form their own. They are the committers for their repositories, and the project owner remains the committer for the top-most repository. (This description is a gross over-simplification; there can be multiple committers and more interactions between project members. But bear with me.)
The traditional, "heavyweight" version control systems (PVCS, Visual SourceSafe, TFS) use a single repository. Projects that use these products tend to allow everyone on the project to check in changes -- there are no committers, no one specifically assigned to review changes and approve them. One can set policies to limit check-in privileges, although the mechanisms are clunky. One can set a policy to manually review all code changes, but the VCS provides no support for this policy -- it is enforced from the outside.
The hierarchical arrangement of multiple repositories aligns "commit" privileges with position in the organization. If you own a repository, you are responsible for changes; you are the committer. (Again, this is a simplification.)
Once you approve your changes, you can "send them up" to the next higher level of the repository hierarchy. Git supports this operation, bundling your changes and sending them automatically.
Git supports the synchronization of your repository with the rest of the organization, so you get changes made by others. You may have to resolve conflicts, but they would exist only in areas of the code in which you work.
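To make this concrete, here is a minimal sketch of the clone, synchronize, and send-up cycle, driven from Python. The repository URL is hypothetical, and the sketch assumes git is installed and that you have push access to the repository one level above yours; treat it as an illustration of the idea, not a prescription.

```python
import subprocess

def git(*args, cwd=None):
    """Run one git command and stop if it fails."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

# Hypothetical URL for the repository one level above yours.
UPSTREAM = "https://example.com/project/original.git"

# Clone the upstream repository; you are the committer for this copy.
git("clone", UPSTREAM, "my-copy")

# Work, then commit your approved changes in your own repository.
git("commit", "--all", "--message", "reviewed change", cwd="my-copy")

# Synchronize: bring in changes made by others (conflicts resolved here).
git("pull", "--rebase", "origin", "master", cwd="my-copy")

# Send your approved changes up to the next level of the hierarchy.
git("push", "origin", "master", cwd="my-copy")
```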
The capabilities of distributed version control systems support your organization. They align responsibility with position, pairing greater authority with greater responsibility. (If you want to manage a large part of the code, you must be prepared to review changes for that code.) In contrast, the older version control systems provide nothing in the way of support, and sometimes require extra effort to manage the project as you would like.
This is a subtle difference, and one that is rarely discussed. I suspect that there will be a quiet revolution as projects move from the old tools to the new.
Saturday, December 17, 2011
The character of programming languages
Many languages use C-style blocks denoted with braces (the characters '{' and '}').
The BCPL programming language was the first language to use braces as part of its syntax. Earlier languages (notably COBOL, FORTRAN, Algol, and LISP) did not use the brace characters.
Earlier languages did not use brace characters because the characters did not exist, at least not as defined characters. There was little in the way of standards for character sets, with each vendor (and sometimes each system) using its own character set. For a language to run on multiple computers, one had to limit the characters used in the language to those available on all planned platforms. Thus, FORTRAN uses uppercase letters and parentheses but not square brackets.
With the introduction of the ASCII and EBCDIC character sets, things changed. A standard character set (well, two standards) let one assume the existence of all of the defined characters.
First published in 1963, the character sets predate the 1966 effort to build BCPL. Thus, when BCPL was designed, the brace characters were present and ready to be used. They also had the virtue of not having been used for anything else.
Our character sets define, to some extent, the syntax of our languages.
Monday, December 12, 2011
In open source, can there be more than one?
In the commercial market, the presence of multiple products is considered a healthy sign. The prevailing logic states that competition is a good thing, giving us the best possible product.
Vendors must position their products with a balance of different features and compatible operations. A word processor must provide some unique set of functions, but must provide some core set of functions to be considered a word processor. It must provide some compatibility to be considered useful to new customers (perhaps to convert their existing files). A bold, new approach to letter-writing, an approach that varies from the conventions of current products, will have a difficult time gaining acceptance. A word processor that performs the basic tasks of existing word processors, that is ten percent better at most things, and that offers a few new ideas, has a better chance of success. The commercial market allows for different, similar products.
The commercial market also has the risk of failure. Building a system on a product (say, a compiler or a version control system) builds in the risk of that product. Companies fail, and products are discontinued (even when the vendor succeeds). The user must choose carefully from the available products.
In the open source ecosystem, the dynamics are different. Multiple products (or projects) are not viewed as a necessity. Consider the popular open source solutions for different tasks: Linux, LibreOffice, GIMP, gcc, SendMail, and NFS. There are competing offerings for these functions, but the "market" has settled on these projects. The chances of a project replacing the Linux kernel, or the GIMP package, are low. (Although not zero, as LibreOffice recently replaced OpenOffice.)
Open source is not monolithic, nor is it limited to single solutions. There are competing ideas for scripting languages (Perl, Python, Ruby) and editors (vi and Emacs). There are competing ideas for databases (MySQL and PostgreSQL, not to mention CouchDB).
I think that it is harder for an open source project to remain independent from the lead project than it is for a commercial product to remain independent from the market leader.
In open source, your ideas (and source code) are available. A small project that is mostly compatible with a large project can be absorbed into the large project. To remain independent, a project must remain different in some core aspect. The languages Perl, Python, and Ruby are all different. The editors vi and Emacs are different. Because of their differences, they can continue to exist as independent projects.
For most software functions, I believe that there is a "Highlander effect": there can be only one. There will be one wildly popular kernel, one wildly popular office suite, one wildly popular C++ compiler.
When there are "competing" open source projects, they will either eventually merge or eventually distance themselves (as with the case of vi and Emacs).
A popular open source project can "absorb" other, similar open source projects.
This effect will give a degree of stability to the ecosystem. One can build systems on top of the popular solutions. A system built with Linux, GNU utilities, gcc, and Python will endure for many years.
Sunday, December 11, 2011
Tradeoffs
It used to be that we had to write small, fast programs. Processors were slow, storage media (punch cards, tape drives, disc drives) were even slower, and memory was limited. In such a world, programmers were rewarded for tight code, and DP managers were rewarded for maintaining systems at utilization rates of ninety to ninety-five percent of machine capacity. The reason was that a higher rate meant that you needed more equipment, and a lower rate meant that you had purchased (or more likely, leased) too much equipment.
In that world, programmers had to make tradeoffs when creating systems. Readable code might not be fast, and fast code might not be readable (and often both were true). Fast code won out over readable (slower) code. Small code that squeezed the most out of the hardware won out over readable (less efficient) code. The tradeoffs were reasonable.
The world has changed. Computers have become more powerful. Networks are faster and more reliable. Databases are faster, and we have multiple choices of database designs -- not everything is a flat file or a set of related tables. Equipment is cheap, almost commodities.
This change shifts the focus of costs. Equipment is not the big cost item. CPU time is not the big cost item. Telecommunications is not the big cost item.
The big problem of application development, the big expense that concerns managers, the thing that will get attention, will be maintenance: the time and cost to modify or enhance an existing system.
The biggest factor in maintenance costs, in my mind, is the readability of the code. Readable code is easy to change (possibly). Opaque code is impossible to change (certainly).
Some folks look to documentation, such as design or architecture documents. I put little value in documentation; I have always found the code to be the final and most accurate description of the system. Documents suffer from aging: they were correct at some point, but the system has since been modified. Documents suffer from imprecision: they specify some but not all of the details. Documents suffer from inaccuracy: they specify what the author thought the system was doing, not what the system actually does.
Sometimes documentation can be useful. The business requirements of a system can be useful. But I find "System architecture" and "Design overview" documents useless.
If the code is to be the documentation for itself, then it must be readable.
Readability is a slippery concept. Different programmers have different ideas about "readability". What is readable to me may not be readable to you. Over my career, my ideas of readability have changed, as I learned new programming techniques (structured programming, object-oriented programming, functional programming), and even as I learned more about a language (my current ideas of "readable" C++ code are very different from my early ideas of "readable" C++ code).
I won't define readability. I will let each project decide on a meaningful definition of readability. I will list a few ideas that will let teams improve the readability of their code (however they define it).
Version control for source code: A shop that is not using version control is not serious about software development. There are several reliable, well-documented, well-supported, popular systems for version control. Version control lets multiple team members work together and coordinate their changes.
Automated builds: An automated build lets you build the system reliably, consistently, and at low effort. You want the product for the customer to be built with a reliable and consistent method.
Any developer can build the system: Developers need to build the system to run their tests. They need a reliable, consistent, low-effort method to do that. And it has to work with their development environment, allowing them to change code and debug the system.
Automated testing: Like version control, automated testing is necessary for a modern shop. You want to test the product before you send it to your customers, and you want the testing to be consistent and reliable. (You also want it easy to run.)
Any developer can test the system: Developers need to know that their changes affect only the behaviors that they intend, and no other parts of the system. They need to use the tests to ensure that their changes have no unintended side-effects. Low-effort automated tests let them run the tests often.
Acceptance of refactoring: To improve code, complicated classes and modules must be changed into sets of smaller, simpler classes and modules. Refactoring changes the code without changing the external behavior of the code. If I start with a system that passes its tests (automated tests, right?) and I refactor it, it should pass the same tests. When I can rearrange code without changing the behavior, I can make the code more readable. (A small sketch of this follows the list.)
Incentives for developers to use all of the above: Any project that discourages developers from using automated builds or automated tests, either explicitly or implicitly, will see little or no improvement in readability.
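As promised above, here is a small sketch of a behavior-preserving refactoring, in Python (my choice of language for the example); the function names and data are invented. A complicated function is split into smaller, better-named pieces, and an automated test confirms that the external behavior has not changed.

```python
import unittest

# Before refactoring: one function that mixes parsing and arithmetic.
def total_owed_v1(lines):
    total = 0.0
    for line in lines:
        name, amount = line.split(",")
        total = total + float(amount)
    return total

# After refactoring: smaller, simpler pieces with names that explain themselves.
def parse_amount(line):
    _name, amount = line.split(",")
    return float(amount)

def total_owed_v2(lines):
    return sum(parse_amount(line) for line in lines)

class TotalOwedTests(unittest.TestCase):
    """The same tests pass before and after the refactoring."""
    LINES = ["alice,10.00", "bob,2.50"]

    def test_old_and_new_agree(self):
        self.assertEqual(total_owed_v1(self.LINES), total_owed_v2(self.LINES))

    def test_total(self):
        self.assertEqual(total_owed_v2(self.LINES), 12.50)

if __name__ == "__main__":
    unittest.main()
```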
But the biggest technique for readable code is that the organization -- its developers and managers -- must want readable code. If the organization is more concerned with "delivering a quality product" or "meeting the quarterly numbers", then they will trade off readability for those goals.
Wednesday, November 30, 2011
Is "cheap and easy" a good thing?
In the IT industry, we are all about developing (and adopting) new techniques. The techniques often start as manual processes: slow, expensive, and unreliable. We automate these processes, and eventually the processes become cheap and easy. One would think that this path is a good thing.
But there is a dark spot.
Consider two aspects of software development: backups and version control.
More often than I like, I encounter projects that do not use a version control system. And many times, I encounter shops that have no process for creating backup copies of data.
In the early days of PCs, backups were expensive and consumed time and resources. The history of version control systems is similar. The earliest (primitive) systems were followed by (expensive) commercial solutions (that also consumed time and resources).
But the early objections to backups and version control no longer hold. There are solutions that are freely available, easy to use, easy to administer, and mostly automatic. Disk space and network connections are plentiful.
These solutions do require some effort and some administration. Nothing is completely free, or completely automatic. But the costs are significantly less than they were.
The resistance to version control is, then, only in the mindset of the project manager (or chief programmer, or architect, or whoever is running the show). If a project is not using version control, it's because the project manager thinks that not using version control will be faster (or cheaper, or better) than using version control. If a shop is not making backup copies of important data, it's because the manager thinks that not making backups is cheaper than making backups.
It is not enough for a solution to be cheap and easy. A solution has to be recognized as cheap and easy, and recognized as the right thing to do. The problem facing "infrastructure" items like backups and version control is that as they become cheap and easy, they also fade into the background. Solutions that "run themselves" require little in the way of attention from managers, who rightfully focus their efforts on running the business.
When solutions become cheap and easy (and reliable), they fall off of managers' radar. I suspect that few magazine articles talk about backup systems. (The ones that do probably discuss compliance with regulations for specific environments.) Today's articles on version control talk about the benefits of the new technologies (distributed version control systems), not the necessity of version control.
So here is the fading pain effect: We start with a need. We develop solutions, and make those tasks easier and more reliable, and we reduce the pain. As the pain is reduced, the visibility of the tasks drops. As the visibility drops, the importance assigned by managers drops. As the importance drops, fewer resources are assigned to the task. Resources are allocated to other, bigger pains. ("The squeaky wheel gets the grease.")
Beyond that, there seems to be a "window of awareness" for technical infrastructure solutions. When we invent techniques (version control, for example), there is a certain level of discussion and awareness of the techniques. As we improve the tools, the discussions become fewer, and at some point they live only in obscure corners of web forums. Shops that have adopted the techniques continue to use them, but shops that did not adopt the techniques have little impetus to adopt them, since they (the solutions) are no longer discussed.
So if you're a shop and you're "muddling through" with a manual solution (or no solution), you eventually stop getting the message that there are automated solutions. At this point, it is likely that you will never adopt the technology.
And this is why I think that "cheap and easy" may not always be a good thing.
Saturday, November 19, 2011
Programming and the fullness of time
Sometimes when writing code, the right thing to do is to wait for the technology to improve.
On a previous project, we faced the challenge of making the programs run faster after the system had been constructed. Once the user saw the performance of the system, they wanted a faster version. It was a reasonable request; the performance was sluggish, though not horrible. The system was usable; it just wasn't "snappy".
So we set about devising a solution. We knew the code, and we knew that making the system faster would not be easy. Improved performance would require changing much of the code. Complicating the issue was the tool that we had used: a code-generation package that created a lot of the code for us. Once we started modifying the generated code, we could no longer use the generator. Or we could track all changes and apply them to later generated versions of the system. Neither path was appealing.
We debated various approaches, and the project management bureaucracy was such that we *could* debate various approaches without showing progress in code changes. That is, we could stall or "run the clock out".
It turned out that doing nothing was the right thing to do. We made no changes to the code; we simply waited for PCs to become faster, and the problem solved itself.
So now we come to Google's Chromebook, the portable PC with only a browser.
One of the criticisms against the Chromebook is the lack of off-line capabilities for Google Docs. This is a fair criticism; the Chromebook is useful only when connected to the internet, and internet connections are not available everywhere.
Yet an off-line mode for Google Docs may be the wrong solution to the problem. The cost of developing such a solution is not trivial. Google might invest several months (with multiple people) developing and testing the off-line mode.
But what if networking becomes ubiquitous? Or at least more available? If that were to happen, then the need for off-line processing is reduced (if not eliminated). The solution to "how do I process documents when I am not connected" is solved not by creating a new solution, but by waiting for the surrounding technology to improve.
Google has an interesting decision ahead of them. They can build the off-line capabilities into their Docs applications. (I suspect it would require a fair amount of Javascript and hacks for storing large sets of data.) Or they can do nothing and hope that network coverage improves. (By "do nothing", I mean work on other projects.)
These decisions are easy to review in hindsight; they are cloudy on the front end. If I were Google, I would be looking at the effort for off-line processing, the possible side benefits from that solution, and the rate at which network coverage is improving. Right now, I see no clear "winning" choice, no obvious solution that is significantly better than the others. Which doesn't mean that Google should simply wait for network coverage to get better -- but it also means that Google shouldn't count that idea out.
Sunday, November 13, 2011
Programming languages exist to make programming easy
We create programming languages to make programming easy.
After the invention of the electronic computer, we invented FORTRAN and COBOL. Both languages made the act of programming easy. (Easier than assembly language, the only other game in town.) FORTRAN made it easy to perform numeric computations, and despite the horror of its input/output methods, it also made it easier to read and write numerical values. COBOL also made it easy to perform computations and input/output operations; it was slanted towards structured data (records containing fields) and readability (longer variable names, and verbose language keywords).
After the invention of time-sharing (and a shortage of skilled programmers), we invented BASIC, a teaching language that linguistically sat between FORTRAN and COBOL.
After the invention of minicomputers (and the ability for schools and research groups to purchase them), we invented the C language, which combined structured programming concepts from Algol and Pascal with the low-level access of assembly language. The combination allowed researchers to connect computers to laboratory equipment and write efficient programs for processing data.
After the invention of graphics terminals and PCs, we invented Microsoft Windows and the Visual Basic language to program applications in Windows. The earlier languages of C and C++ made programming in Windows possible, but Visual Basic was the language that made it easy.
After PCs became powerful enough, we invented Java, which leveraged that power to run interpreted byte-code programs and, more significantly, to handle threaded applications. Support for threading was built into the Java language.
With the invention of networking, we created HTML and web browsers and Javascript.
I have left out equipment (microcomputers with CP/M, the Xerox Alto, the Apple Macintosh) and languages (LISP, RPG, C#, and others). I'm looking at the large trend using a few data points. If your favorite computer or language is missing, please forgive my arbitrary selections.
We create languages to make tasks easier for ourselves. As we develop new hardware, larger data sets, and new ways of connecting data and devices, we need new languages to handle the capabilities of these new inventions.
Looking forward, what can we see? What new hardware will stimulate the creation of new languages?
Cloud computing is big, and will lead to creative solutions. We're already seeing new languages that have increased rigor in the form of functional programming. We moved from non-structured programming to structured programming to object-oriented programming; in the future I expect us to move to functional programming. Functional programming is a good fit for cloud computing, with its immutable objects and no-side-effect functions.
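As a small illustration of that style, here is a Python sketch; the prices and tax rate are invented for the example. The data is immutable, the function has no side effects, and the same inputs always produce the same outputs, which is what makes this kind of work easy to spread across many machines.

```python
from functools import reduce

# A pure function: the result depends only on the inputs,
# and nothing outside the function is modified.
def add_tax(price, rate):
    return price * (1 + rate)

prices = (10.0, 20.0, 40.0)   # an immutable tuple of inputs

# Transform and combine without mutating anything.
taxed = tuple(add_tax(p, 0.05) for p in prices)
total = reduce(lambda a, b: a + b, taxed, 0.0)

print(taxed)   # (10.5, 21.0, 42.0)
print(total)   # 73.5
```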
Mobile programming is popular, but I don't expect a language for mobile apps. Instead, I expect new languages for mobile devices. The Java, C#, and Objective-C languages (from Google, Microsoft, and Apple, respectively) will mutate into languages better suited to small, mobile devices that must run applications in a secure manner. I expect that security, not performance, will be the driver for change.
Big data is on the rise. We'll see new languages to handle the collection, synchronization, querying, and analysis of large data sets. The language 'Processing' is a start in that direction, letting us render data in a visual form. The invention of NoSQL databases is also a start; look for a 'NoSQL standard' language (or possibly several).
The new languages will allow us to handle new challenges. But that doesn't mean that the old languages will go away. Those languages were designed to handle specific challenges, and they handle them well. So well that new languages have not displaced them. (Billing systems are still in COBOL, scientists still use Fortran, and lots of Microsoft Windows applications are still running in Visual Basic.) New languages are optimized for different criteria and cannot always handle the older tasks; I would not want to write a billing system in C, for example.
As the 'space' of our challenges expands, we invent languages to fill that space. Let's invent some languages and meet some new challenges!
Thursday, November 10, 2011
The insignificance of significant figures in programming languages
If a city with a population figure of 500,000 gets three more residents, the population figure is... 500,000, not 500,003. The reasoning is this: the original figure was accurate only to the first digit (the hundred-thousands digit). It has a finite precision, and adding a number that is smaller than the precision has no effect on the original number.
Significant figures are not the same as "number of decimal places", although many people do confuse the two.
Significant figures are needed for calculations with measured quantities. Measurements will have some degree of imprecision, and the rigor of significant figures keeps our calculations honest. The rules for significant figures are more complex (and subtle) than a simple "use 3 decimal places". The number of decimal places will vary, and some calculations may affect positions to the left of the decimal point. (As in our "city with 500,000 residents" example.)
For a better description of significant figures, see the wikipedia page.
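To make the idea concrete, here is a minimal sketch of what such support might look like, in Python. The helper round_to_sig_figs is hypothetical -- it is not part of any standard library -- but it is enough to reproduce the population example above.

```python
import math

def round_to_sig_figs(x, sig_figs):
    """Round x to the given number of significant figures (hypothetical helper)."""
    if x == 0:
        return 0
    exponent = math.floor(math.log10(abs(x)))
    factor = 10 ** (exponent - sig_figs + 1)
    return round(x / factor) * factor

# The city example: 500,000 is accurate to one significant figure,
# so adding three residents does not change the reported figure.
print(round_to_sig_figs(500_000 + 3, 1))   # 500000
```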
Applications such as Microsoft Excel (or LibreOffice Calc) have no built-in support for significant figures. Nor, to my knowledge, are there plug-ins or extensions to support calculations with significant figures.
Perhaps the lack of support for significant figures is caused by a lack of demand. Most spreadsheets are built to handle money, which is counted (not measured) and therefore does not fall under the domain of significant figures. (Monetary figures are considered to be exact, in most applications.)
Perhaps the lack of support is driven by the confusion between "decimal places" and "significant figures".
But perhaps the biggest reason for a lack of support for significant figures in applications is this: There is no support for significant figures in popular languages.
A quick search for C++, Java, Python, and Ruby yields no corresponding packages. Interestingly, the only language with a package for significant figures was Perl: CPAN has the Math::SigFigs package.
So the better question is: Why do programming languages have no support for significant figures?
Tuesday, November 8, 2011
Advertisements in the brave new world of Swipeville
We've seen the different types of ads in Webland: banner ads, side-bar ads, in-text ads, pop-up ads, pop-under ads... all of the types.
My favorites are the "your page is loading" ads which block the (completely loaded, who do you think you are kidding) content page. I like them because, with multi-tab browsers, I can view a different page while the ad-covered page is "loading" and the advertisement times out. With multiple tabs, I can avoid the delay and essentially skip the advertisement.
This all changes in the world of phones and tablets. (I call this new world "Swipeville".) Classic desktop browsers gave us tabbed windows; the new platforms do not. The phone and tablet browsers have one window and one "tab", much like the early desktop browsers.
In this world, we cannot escape advertisements by using multiple tabs. Nor can we look at another window (such as another application) while our page "loads". Since apps own the entire screen, they are either running or not -- switching to another app means that the browser stops, and switching back re-runs the page load/draw operation.
Which means that advertisements will be less avoidable, and therefore (possibly) more effective.
Or they may be less effective; the psychology of cell phone ads is, I think, poorly understood. Regardless of effectiveness, we will be seeing more of them.
Sunday, November 6, 2011
Picking a programming language
In the great, ongoing debate of language superiority, many factors are considered ... and brandished. The discussions of languages are sometimes heated. My purpose here is to provide some musings in a cool light.
The popular languages of the day are (in order provided by Tiobe Software): Java, C, C++, PHP, C#, Objective C, Visual Basic, Python, Perl, and Javascript.
But instead of arguing about the sequence of these languages (or even other candidates for inclusion), let's look at the attributes that make languages popular. Here's a list of some considerations:
- Platform: which platforms (Windows, OSX, iOS, Android, Linux) support the language
- Performance: how well the programs perform at run-time (whether compiled or interpreted)
- Readability: how well programs written by programmers can be read by other programmers
- Reliability: how consistently the written programs perform
- Cost: here I mean direct costs: the cost of the compiler and tools (and ongoing costs for support and licenses)
- Market support: how much support is available from vendors, groups, and developers
How well do languages match these criteria? Let's try some free association.
For performance, the language of choice is C++. Some might argue that Objective-C provides better performance, but I think the argument would come only from developers on the OSX and iOS platforms.
Readability is a difficult notion, and subject to a lot of, well, subjectivity. My impression is that most programmers will claim that their favorite language is eminently readable, if only one takes the time to learn it. To get around this bias, I propose asking programmers which language is second-best in readability (after their favorite); I suspect most would pick Python, so I choose it as the most readable language.
I submit that reliability among languages is a neutral item. Compilers and interpreters for all of these languages are quite good, and programs perform -- for the most part -- consistently.
For cost, all of these languages are available in no-cost options. There are commercial versions for C# (Microsoft's Visual Studio) and Objective-C (Apple's developer kit), and one would think that such costs would give a boost to the other languages. They do, but cost alone is not enough to "make" or "break" a language. Which brings us to market support.
The support of Microsoft and Apple for C# and Objective-C makes those languages appealing. The Microsoft tools have a lot of followers: companies that specify them as standards and developers who know and keep active in the C# language.
Peering into the future, what can we see?
I think that the Perl/Python tussle will end up going to Python. Right now, Perl has better market support: the CPAN libraries and lots of developers. These factors can change, and are changing. O'Reilly has been printing (and selling) lots of books on Python. People have been starting projects in Python. In contrast, Perl loses on readability, something that is hard to change.
The Java/C# tussle is more about market support and less about readability and performance. These languages are about the same in readability, performance, and reliability. Microsoft has made C# the prince of languages for development in Windows; we need to see what Oracle will do with Java.
Apple had designated Objective-C, C, and C++ as the only languages suitable for iOS, but is relaxing their rules. I expect some change in the popularity of iOS programming languages.
But what about those other popular languages, the ones I have not mentioned? What about C, Visual Basic, PHP, and Javascript? Each has its fan base (companies and developers), and each has a fair rating in performance, reliability, and market support.
I expect that Javascript will become more popular, continuing the current trend. The others I think will fade gradually. Expect to see less market support (fewer books, fewer updates to tools) and more conversion projects (from Visual Basic to C#, for example). But also expect a long life from these languages. The old languages of Fortran and COBOL are still with us.
Which language you pick for your project is a choice that you should make consciously. You must weigh many factors -- more than are listed here -- and live with the consequences of that decision. I encourage you to think of these factors, think of other factors, and discuss them with your colleagues.
The popular languages of the day are (in order provided by Tiobe Software): Java, C, C++, PHP, C#, Objective C, Visual Basic, Python, Perl, and Javascript.
But instead of arguing about the sequence of these languages (or even other candidates for inclusion), let's look at the attributes that make languages popular. Here's a list of some considerations:
- Platform: which platforms (Windows, OSX, iOS, Android, Linux) support the language
- Performance: how well the programs perform at run-time (whether compiled or interpreted)
- Readability: how well programs written by programmers can be read by other programmers
- Reliability: how consistently the written programs perform
- Cost: here I mean direct costs: the cost of the compiler and tools (and ongoing costs for support and licenses)
- Market support: how much support is available from vendors, groups, and developers
How well do languages match these criteria? Let's try some free association.
For performance, the language of choice is C++. Some might argue that Objective-C provides better performance, but I think the argument would come only from developers in the OSX and iOS platforms.
Readability is a difficult notion, and subject to a lot of, well, subjectivity. My impression is that most programmers will claim that their favorite language is eminently readable, if only one takes the time to learn it. To get around this bias, I propose that people will pick as second-best in readability the language Python, and I choose that as the most readable language.
I submit that reliability among languages is a neutral item. Compilers and interpreters for all of these languages are quite good, and programs perform -- for the most part -- consistently.
For cost, all of these languages are available in no-cost options. There are commercial versions for C# (Microsoft's Visual Studio) and Objective-C (Apple's developer kit), and one would think that such costs would give boosts to the other languages. And it does, but cost alone is not enough to "make" or "break" a language. Which brings us to market support.
The support of Microsoft and Apple for C# and Objective-C make those languages appealing. The Microsoft tools have a lot of followers: companies that specify them as standards and developers who know and keep active in the C# language.
Peering into the future, what can we see?
I think that the Perl/Python tussle will end up going to Python. Right now, Perl has better market support: the CPAN libraries and lots of developers. These factors can change, and are changing. O'Reilly has been printing (and selling) lots of books on Python. People have been starting projects in Python. In contrast, Perl loses on readability, something that is hard to change.
The Java/C# tussle is more about market support and less about readability and performance. These languages are about the same in readability, performance, and reliability. Microsoft has made C# the prince of languages for development in Windows; we need to see what Oracle will do with Java.
Apple had designated Objective-C, C, and C++ as the only languages suitable for iOS, but is relaxing its rules. I expect some change in the popularity of iOS programming languages.
But what about those other popular languages, the ones I have not mentioned? What about C, Visual Basic, PHP, and Javascript? Each have their fanbase (companies and developers) and each have a fair rating in performance, reliability, and market support.
I expect that Javascript will become more popular, continuing the current trend. The others I think will fade gradually. Expect to see less market support (fewer books, fewer updates to tools) and more conversion projects (from Visual Basic to C#, for example). But also expect a long life from these languages. The old languages of Fortran and COBOL are still with us.
Which language you pick for your project is a choice that you should make consciously. You must weigh many factors -- more than are listed here -- and live with the consequences of that decision. I encourage you to think of these factors, think of other factors, and discuss them with your colleagues.
Tuesday, November 1, 2011
Mobile first, desktop second (maybe)
Mobile computing has arrived, and is no longer a second-class citizen. In fact, it is the desktop that may be the second-class application.
A long time ago, desktop applications were the only game in town. Then mobile arrived, and it was granted a small presence: usually m.whatever.com, with some custom scripts to generate a limited set of web pages.
Now, the mobile app is the leader. If you are starting a project, start with mobile, and if you have time, build the "plain" desktop version later. Focus on your customers; for new apps, customers are on mobile devices: iPhones, iPads, Android phones, and tablets. You can add the desktop browser version later, after you get the core running.
Monday, October 31, 2011
Bring your own device
The typical policy for corporate networks is simple: corporation-supplied equipment is allowed, and everything else is forbidden. Do not attach your own computers or cell phones, do not connect your own tablet computers, do not plug in your own thumb drives. Only corporate-approved (and corporate-supplied) equipment is allowed, because that enables security.
The typical policy for corporate networks is changing.
This change has been brought about by reality. Corporations cannot keep up with the plethora of devices available (iPods, iPads, Android phones, tablets, ... what have you) but must improve the efficiency of their employees. New devices improve that efficiency.
In the struggle between security and efficiency... the winner is efficiency.
IBM is allowing employees to attach their own equipment to the corporate network. This makes sense for IBM, since they advise other companies in the effective use of resources. IBM *has* to make this work in order to retain credibility. After all, if IBM cannot make this work, they cannot counsel other companies and advise that those companies open their networks to employee-owned equipment.
Non-consulting corporations (that is, most corporations) don't have the pressure to make this change. They can choose to keep their networks "pure" and free from non-approved equipment.
For a while.
Instead of marketing pressure, companies will face pressure from within. It will come from new hires, who expect to use their smartphones and tablets. It will come from "average" employees, who want to use readily-available equipment to get the job done.
More and more, people within the company will question the rules put in place by the IT group, rules that limit their choices of hardware.
And once "alien" hardware is approved, software will follow. At first, the software will be the operating systems and closely-bound utilities (Mac OSX and iTunes, for example). Eventually, the demand for other utilities (Google Docs, Google App Engine, Python) will overwhelm the IT forces holding back the tide.
IT can approach this change with grace, or with resistance. But face it they will, and adjust to it they must.
Wednesday, October 26, 2011
Small is the new big thing
Applications are big, out of necessity. Apps are small, and should be.
Applications are programs that do everything you need. Microsoft Word and Microsoft Excel are applications: They let you compose documents (or spreadsheets), manipulate them, and store them. Visual Studio is an application: It lets you compose programs, compile them, and test them. Everything you need is baked into the application, except for the low-level functionality provided by the operating system.
Apps, in contrast, contain just enough logic to get the desired data and present it to the user.
A smartphone app is not a complete application; except for the most trivial of programs, it is the user interface to an application.
The Facebook app is a small program that talks to Facebook servers and presents data. Twitter apps talk to the Twitter servers. The New York Times talks to their servers. Simple apps such as a calculator app or rudimentary games can run without back-ends, but I suspect that popular games like "Angry Birds" store data on servers.
Applications contained everything: core logic, user interface, and data storage. Apps are components in a larger system.
We've seen distributed systems before: client-server systems and web applications divide data storage and core logic from user interface and validation logic. These application designs allowed for a single front-end; current system design allows for multiple user interfaces: iPhone, iPad, Android, and web. Multiple front ends are necessary; there is no clear leader, no "IBM PC" standard.
To omit a popular platform is to walk away from business.
Small front ends are better than large front ends. A small, simple front end can be ported quickly to new platforms. It can be updated more rapidly, to stay competitive. Large, complex apps can be ported to new platforms, but as with everything else, a large program requires more effort to port.
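To make the idea concrete, here is a minimal sketch in Python; the service URL and the payload are invented for illustration. The core that talks to the back end is shared, and each "front end" is little more than a presentation function.
    # Core logic, shared by every front end; the endpoint is hypothetical.
    import json
    import urllib.request

    SERVICE_URL = "https://api.example.com/v1/headlines"

    def fetch_headlines():
        """The one place that talks to the back end."""
        with urllib.request.urlopen(SERVICE_URL) as response:
            return json.loads(response.read().decode("utf-8"))

    # One small front end: presentation only.
    def show_on_console():
        for item in fetch_headlines():
            print("* " + item["title"])

    # Porting to a new platform means rewriting only a function like
    # show_on_console() (a list view on Android, a table view on iOS);
    # fetch_headlines() and the servers behind it do not change.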
Small apps allow a company to move quickly to new platforms.
With a dynamic market of user interface devices, an effective company must adopt new platforms or face reduced revenue. Small user interfaces (apps) allow a company to quickly adopt new platforms.
If you want to succeed, think small.
Monday, October 24, 2011
Steve Jobs, Dennis Ritchie, John McCarthy, and Daniel McCracken
We lost four significant people from the computing world this year.
Steve Jobs needed no introduction. Everyone knew him as that slightly crazy guy from Apple, the one who would show off new products while always wearing a black mock-turtleneck shirt.
Dennis Ritchie was well-known by the geeks. Articles comparing him to Steve Jobs were wrong: Ritchie co-created Unix and C somewhat before Steve Jobs founded Apple. Many languages (C++, Java, C#) are descendants of C. Linux, Android, Apple iOS, and Apple OSX are descendants of Unix.
John McCarthy was known by the true geeks. He did much of the foundational work in artificial intelligence, and created a language called LISP. Modern languages (Python, Ruby, Scala, and even C# and C++) are beginning to incorporate ideas from LISP.
Daniel McCracken is the unsung hero of the group. He is unknown even among true geeks. His work predates that of the others (except McCarthy), and arguably had a greater influence on the industry than any of theirs. McCracken wrote books on FORTRAN and COBOL, books that were understandable and comprehensive. He made it possible for the very early programmers to learn their craft -- not just the syntax but the craft of programming.
The next time you write a "for" loop with the control variable named "i", or see a "for" loop with the control variable named "i", you can thank Daniel McCracken. It was his work that set that convention and taught the first set of programmers.
Sunday, October 23, 2011
Functional programming pays off (part 2)
We continue to gain from our use of functional programming techniques.
Using just the "immutable object" technique, we've improved our code and made our programming lives easier. Immutable objects have given us two benefits this week.
The first benefit: less code. We revised our test framework to use immutable objects. Rather than instantiating a test object (which exercises the true object under test) and asking it to run the tests, we now instantiate the test object and it runs the tests immediately. We then simply ask it for the results. Our new code is simpler than before, and contains fewer lines of code.
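As a rough illustration of the shape this takes, here is a minimal sketch in Python; the round-trip check and all of the names are invented for this example, not taken from our code.
    class RoundTripTest:
        """Immutable test object: it runs its check at construction and afterwards only reports."""

        def __init__(self, value):
            # Exercise the (stand-in) code under test immediately...
            restored = int(str(value))
            # ...then freeze the outcome; nothing ever changes these fields later.
            self._passed = (restored == value)
            self._message = "%r -> %r -> %r" % (value, str(value), restored)

        @property
        def passed(self):
            return self._passed

        @property
        def message(self):
            return self._message

    results = [RoundTripTest(v) for v in (0, 42, -7)]
    print(all(r.passed for r in results))   # True
    print(results[0].message)               # 0 -> '0' -> 0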
The second benefit: we can extract classes from one program and add them to another -- and do it easily. This is a big win. Often (too often), extracting a class from one program is difficult, because of dependencies and side effects. The one class requires other classes, not just direct dependencies but classes "to the side" and "above" in order to function. In the end, one must import most of the original system!
With immutable objects, we have eliminated side effects. Our code has no "side" or "above" dependencies, and has fewer direct dependencies. Thus, it is much easier for us to move a class from one program into another.
We took advantage of both of these effects this week, re-organizing our code. We were productive because our code used immutable objects.
Using just the "immutable object" technique, we've improved our code and made our programming lives easier. Immutable objects have given us two benefits this week.
The first benefit: less code. We revised our test framework to use immutable objects. Rather than instantiating a test object (which exercises the true object under test) and asking it to run the tests, we now instantiate the test object and it runs the tests immediately. We then simply ask it for the results. Our new code is simpler than before, and contains fewer lines of code.
The second benefit: we can extract classes from one program and add them to another -- and do it easily. This is a big win. Often (too often), extracting a class from one program is difficult, because of dependencies and side effects. The one class requires other classes, not just direct dependencies but classes "to the side" and "above" in order to function. In the end, one must import most of the original system!
With immutable objects, we have eliminated side effects. Our code has no "side" or "above" dependencies, and has fewer direct dependencies. Thus, it is much easier for us to move a class from one program into another.
We took advantage of both of these effects this week, re-organizing our code. We were productive because our code used immutable objects.
Wednesday, October 19, 2011
Engineering vs. craft
Some folks consider the development of software to be a craft; others claim that it is engineering.
As much as I would like for software development to be engineering, I consider it a craft.
Engineering is a craft that must work within measurable constraints, and must optimize some measurable attributes. For example, bridges must support a specific, measurable load, and minimize the materials used in construction (again, measurable quantities).
We do not do this for software.
We manage not software but software development. That is, we measure the cost and time of the development effort, but we do not measure the software itself. (The one exception is measuring the quality of the software, but that is a difficult measurement and we usually measure the number and severity of defects, which is a negative measure.)
If we are to engineer software, then we must measure the software. (We can -- and should -- measure the development effort. Those are necessary measurements. But they are not, by themselves, sufficient for engineering.)
What can we measure in software? Here are some suggestions:
- Lines of code
- Number of classes
- Number of methods
- Average size of classes
- Complexity (cyclomatic, McCabe, or whatever metric you like)
- "Boolean complexity" (the number of boolean constants used within code that are not part of initialization)
- The fraction of classes that are immutable
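None of these measurements require exotic tooling. As a rough illustration, here is a minimal sketch that gathers a few of them for Python source files; the choice of measures and their names are mine, for illustration only.
    import ast
    import sys

    def measure(path):
        """Return a handful of simple metrics for one Python source file."""
        with open(path, encoding="utf-8") as handle:
            source = handle.read()
        tree = ast.parse(source)

        lines = [line for line in source.splitlines() if line.strip()]
        classes = [n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
        functions = [n for n in ast.walk(tree)
                     if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
        booleans = [n for n in ast.walk(tree)
                    if isinstance(n, ast.Constant) and isinstance(n.value, bool)]

        return {
            "non-blank lines": len(lines),
            "classes": len(classes),
            "functions and methods": len(functions),
            "boolean constants": len(booleans),  # crude stand-in for "boolean complexity"
        }

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path)
            for name, value in sorted(measure(path).items()):
                print("  %s: %s" % (name, value))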
Some might find the notion of measuring lines of code abhorrent. I will argue that it is not the metric that is evil, it is the use of it to rank and rate programmers. The misuse of metrics is all too easy and can lead to poor code. (You get what you measure and reward.)
Why do we not measure these things? (Or any other aspect of code?)
Probably because there is no way to connect these metrics to project cost. In the end, project cost is what matters. Without a translation from lines of code (or any other metric) to cost, the metrics are meaningless. The code may be one class of ten thousand lines, or one hundred classes of one hundred lines each; without a conversion factor, the cost of each design is the same. (And the cost of each design is effectively zero, since we cannot convert design decisions into costs.)
Our current capabilities do not allow us to assign cost to design, or code size, or code complexity. The only costs we can measure are the development costs: number of programmers, time for testing, and number of defects.
One day in the future we will be able to convert complexity to cost. When we do, we will move from craft to engineering.
Tuesday, October 11, 2011
SOA is not DOA
SOA (service oriented architecture) is not dead. It is alive and well.
Mobile apps use it. iPhone apps that get data from a server (e-mail or Twitter, for example) use web services -- a service oriented architecture.
SOA was the big thing back in 2006. So why do we not hear about it today?
I suspect the silence has nothing to do with SOA's merits.
I suspect that no one talks about SOA because no one makes money from it.
Object oriented programming was an opportunity to make money. Programmers had to learn new techniques and new languages; tool vendors had to provide new compilers, debuggers, and IDEs.
Java was a new programming language. Programmers had to learn it. Vendors provided new compilers and IDEs.
UML was big, for a while. Vendors provided tools; architects, programmers, and analysts learned it.
The "retraining load" for SOA is smaller, limited mostly to the architects of systems. (And there are far fewer architects than programmers or analysts.) SOA has no direct affect on programmers.
With no large-scale training programs for SOA (and no large-scale training budgets for SOA), vendors had no incentive to advertise it. They were better off hawking new versions of compilers.
Thus, SOA quietly faded into the background.
But it's not dead.
Mobile apps use SOA to get work done. iPhones and Android phones talk to servers, using web services. This design is SOA. We may not call it that, but that's what it is.
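For the sake of illustration, here is a minimal sketch of such a web service in Python, using only the standard library; the resource, the payload, and the port are all assumptions, not any particular product's API.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    MESSAGES = [
        {"id": 1, "text": "Hello from the server"},
        {"id": 2, "text": "The phone app is only the front end"},
    ]

    class MessageService(BaseHTTPRequestHandler):
        def do_GET(self):
            # One resource, one representation: JSON over HTTP.
            if self.path == "/messages":
                body = json.dumps(MESSAGES).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("", 8080), MessageService).serve_forever()
Any client that speaks HTTP and JSON -- an iPhone app, an Android app, a web page -- can consume it; that is the whole of the architecture.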
When the hype of SOA vanished, lots of companies dropped interest in SOA. Now, to move their applications to the mobile world, they will have to learn SOA.
So don't count SOA among the dead.
On the other hand, don't count on it for your profits. You need it, but it is infrastructure, like electricity and running water. I know of few companies that count on those utilities as a competitive advantage.
Monday, October 10, 2011
Talent is not a commodity
Some companies treat their staff as a commodity. You know the symptoms: rigid job titles, detailed (although often inaccurate) job descriptions, and a bureaucracy for hiring people. The underlying idea is that people, like any other commodity, can be hired for specific tasks at specific times. The management-speak for this idea is "just in time provisioning of staff".
Unfortunately for the managers, talented individuals are not stocked on shelves. They must be found and recruited. While companies (hiring companies and staffing companies) have built an infrastructure of resumes and keyword searches, the selection of candidates is lengthy and unpredictable. Hiring a good programmer is different from ordering a box of paper.
The "talent is a commodity" mindset leads to the "exact match" mindset. The "exact match" mindset leads hiring managers (and Human Resource managers) to the conclusion that the only person for the job is the "right fit" with the "complete set of skills". It is an approach that avoids mistakes, turning away candidates for the smallest of reasons. ("We listed eight skills for this position, and you have only seven. Sorry, you're not the person for us!")
Biasing your hiring decisions against mistakes means that you lose out on opportunities. It also means that you delay bringing a person on board. You can wait until you find a person with the exact right skills. Depending on the package (and its popularity), it may take some time before you find the person.
I once had a recruiter from half-way across the country call me, because my resume listed the package GraphViz. GraphViz generates and manipulates network graphs, and while used by lots of people, it is rarely listed on resumes. Therefore, recruiters cannot find people with an exact match to the desired skills -- the keyword match fails.
It might take you six months -- or longer -- to find an exact match. And you may never find one. Instead, with a deadline looming, you compromise on a candidate whose skills are "close enough".
Of course, when you bring this person on board, you are under a tight schedule. You need the person to perform immediately. They do their best, but even that may be insufficient to learn the technologies and your current system. (Not to mention the corporate culture.) The approach has a high risk of mistakes (a low quality deliverable), slow performance (again, a low quality deliverable), cost overruns from overtime (high expenses), and possibly a late delivery.
Let's consider an alternative sequence.
Instead of looking for an exact match, you find a bright programmer who has the ability to learn the specialized skill. Pay that person for a week (or three) to learn the package. Then have them start on integrating the package into your system.
You should be able to find someone in a few weeks, much less than the six months or more for the exact match. (If you cannot find a bright programmer in a week or two, you have other problems.)
Compromising on specific skills (while keeping excellence in general skills) provides some advantages.
You start earlier, which means you can identify problems earlier.
Your costs may be slightly higher, since you're paying for more time. On the other hand, you may be able to find a person at a lower rate. And even at the higher rate, a few months over a long term of employment is not that significant.
You invest in the person (by paying him to learn something new), and the person will recognize that. (You're hiring a clever person, remember?)
You can consider talent as an "off-the-shelf" commodity, something that can be hired on demand. For commonly used skills, this is a workable model. But for obscure skills, or a long list of skills, the model works poorly. Good managers know how and when to compromise on small objectives to meet the larger goals.
Saturday, October 8, 2011
What Microsoft's past can tell us about Windows 8
Microsoft Windows 8 changes a lot of assumptions about Windows. It especially affects developers. The familiar Windows API has been deprecated, and Microsoft now offers WinRT (the "Windows Runtime").
What will it be like? What will it offer?
I have a guess.
This is a guess. As such, I could be right or wrong. I have seen none of Microsoft's announcements or documentation for Windows 8, so I might be wrong at this very moment.
Microsoft is good at building better versions of competitors' products.
Let's look at Microsoft products and see how they compare to the competition.
MS-DOS was a bigger, better CP/M.
Windows was a better (although perhaps not bigger) version of IBM's OS/2 Presentation Manager.
Windows 3.1 included a better version of Novell's Netware.
Word was a bigger version of Wordstar and Wordperfect.
Excel was a bigger, better version of Lotus 1-2-3.
Visual Studio was a bigger, better version of Borland's TurboPascal IDE.
C# was a better version of Java.
Microsoft is not so much an innovator as it is an "improver", one who refines an idea.
It might just be that Windows 8 will be not an Innovative New Thing, but instead a Bigger Better Version of Some Existing Thing -- and not a bigger, better version of Windows 7, but a bigger, better version of someone else's operating system.
That operating system may just be Unix, or Linux, or NetBSD.
Microsoft can't simply take the code to Linux and "improve" it into WinRT; doing so would violate the Linux license.
But Microsoft has an agreement with Novell (yes, the same Novell that saw its Netware product killed by Windows 3.1), and Novell has the copyright to Unix. That may give Microsoft a way to use Unix code.
It just may be that Microsoft's WinRT will be very Unix-like, with a kernel and a separate graphics layer, modules and drivers, and an efficient set of system calls. WinRT may be nothing more than a bigger, better version of Unix.
And that may be a good thing.
Tuesday, October 4, 2011
What have you done for you lately?
The cynical question that one asks of another is "What have you done for me lately?".
A better question to ask of oneself is: "What have I done for me lately?".
We should each be learning new things: new technologies, new languages, new business skills... something.
Companies provide employees with performance reviews (or assessments, or evaluations, or some such thing). One item (often given a low weighting factor) is "training". (Personally, I think it should be considered "education"... but that is another issue.)
I like to give myself an assessment each year, and look at education. I expect to learn something new each year.
I start each year with a list of cool stuff that sounds interesting. The items could be new programming languages, different technologies, or interpersonal skills. I refer to that list during the year; sometimes I add or change things. (I don't hold myself to the original list -- technology changes too quickly.)
Employers and companies all too often take little action to help their employees improve. That doesn't mean that you get a free pass -- it means that you must be proactive. Don't wait for someone to tell you to learn a new skill; by then it will be too late. Look around, pick some skills, and start learning.
What are you doing for you?
Sunday, October 2, 2011
The end of the PC age?
Are we approaching the end of the PC age? It seems odd to see the end of the age, as I was there at the beginning. The idea that a technology should have a shorter lifespan than a human leads one to various contemplations.
But perhaps the idea is not so strange. Other technologies have come and gone: videotape recorders, hand-held calculators, Firewire, and the space shuttle come to mind. (And by "gone", I mean "used in limited quantities, if at all". The space shuttles are gone; VCRs and calculators are still in use but considered curiosities.)
Personal computers are still around, of course. People use them in the office and at home. They are entrenched in the office, and I think that they will remain present for at least a decade. Home use, in contrast, will decline quickly, with personal computers replaced by game consoles, cell phones, and tablets. Computing will remain in the office and in the home.
But here's the thing: People do not think of cell phones and tablets as personal computers.
Cell phones and tablets are cool computing devices, but they are not "personal computers". Even Macbooks and iMac computers are not "personal computers". The term "PC" was strongly associated with IBM (with "clone" for other brands) and Microsoft DOS (and later, Windows).
People have come to associate the term "personal computer" with a desktop or laptop computer of a certain size and weight, of any brand, running Microsoft Windows. Computing devices in other forms, or running other operating systems, are not "personal computers": they are something else: a Macbook, a cell phone, an iPad... something. But not a PC.
Microsoft's Windows 8 offers a very different experience from the "classic Windows". I believe that this difference is enough to break the idea of a "personal computer". That is, a tablet running Windows 8 will be considered a "tablet" and not a "PC". New desktop computers with touchscreens will be considered computers, but probably not "PCs". Only the older computers with keyboards and mice (and no touchscreen) will be considered "personal computers".
Microsoft has the opportunity to brand these new touchscreen computers. I suggest that they take advantage of this opportunity. I recognize that their track record with product names has been poor ("Zune", "Kin", and the ever-awful "Bob") but they must do something.
The term "personal computer" is becoming a reference to a legacy device, to our father's computing equipment. Personal computers were once the Cool New Thing, but no more.
Friday, September 30, 2011
Functional programming pays off
I've been using the "immutable objects" technique from functional programming. This technique starts with object-oriented programming and constrains objects to immutables, objects that cannot change once constructed. This is quite different from the traditional object-oriented programming approach, in which objects can change their state.
With the immutable object style, objects can be constructed but not modified. This is constraining, yet it is also freeing. Once we have an object, we know that we can re-arrange code and not affect the object -- it is immutable, after all. Re-arranging code lets us simplify the higher-level functions and make the code more readable.
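For concreteness, here is a minimal sketch of the style in Python; the Money type is made up for this post, and a namedtuple is just one way to get immutability, not necessarily the mechanism we used.
    from collections import namedtuple

    class Money(namedtuple("Money", "amount currency")):
        """Immutable value object: construct it once; operations return new objects."""

        def add(self, other):
            # No mutation: return a new Money instead of changing this one.
            if other.currency != self.currency:
                raise ValueError("currency mismatch")
            return Money(self.amount + other.amount, self.currency)

    price = Money(1500, "USD")            # amount in cents, to keep the example simple
    total = price.add(Money(250, "USD"))
    print(total)                          # Money(amount=1750, currency='USD')
    # Assigning price.amount = 0 raises AttributeError: the object cannot change.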
The new technique was not "natural" -- no change in programming techniques ever is -- and it took some effort to change. I started at the bottom of the object hierarchy, which let me modify objects with no dependencies. This approach was important. I could change the bottom-most objects and not affect the other (high-level) objects. It let me introduce the concept gradually, and with minimal ripples.
Over the past few weeks, I have extended the immutable style upwards, and now most classes are immutable. This change has already yielded results; we can debug problems faster and change the system design quickly, and each time we know that we are introducing no new defects. (A comprehensive set of tests helps, too.)
We now have code that can be read by our (non-programming) subject matter experts, code that works, and code that can be easily changed. This is a win for all of us.
I expect immutable object programming to become popular, and soon.
Monday, September 26, 2011
The end of the word processor?
Do word processors have a future?
Desktop PCs (and laptop PCs) were built for composers, people who assembled information. The earliest applications for PCs were word processors (Wordstar, Wordperfect, and others), spreadsheets (Visicalc, Supercalc, Lotus 1-2-3), databases (dBase II), and development tools (BASIC, assembly language, Pascal, C, and others).
Cell phones and tablets are built for consumers, people who view information but do not create information. The early apps were games, instant messaging, music players, cameras, Twitter, and perhaps the New York Times.
One does not compose documents on a cell phone. Nor does one compose spreadsheets, or presentations. It is a platform for consuming content, not creating it.
Which leads to the question: in the new world of phones and tablets, how do we create content?
I think that the answer is: we create it on a different platform.
Perhaps we create it on servers. Or perhaps we create it on the few desktops that still exist. (I suspect that desktop PCs will not go away, but like mainframes will exist in small numbers.)
But not everyone will be composing. I expect that only a small percentage of computer users will be creating content. The typical home user will not need Microsoft Word. One does not need a word processor to run a household today -- and I suspect one never really needed one.
This shrinks the market for word processors and spreadsheets. More specifically, this shrinks the market for Microsoft Office. And I think Microsoft knows this. It knows that the home market for Microsoft Office is evaporating. (The business side is a different case. More on that in another post.)
Word processors replaced typewriters. They were a convenient way of collecting text for printing on paper. With the advent of the internet and e-mail, we eliminated the step of printing text on paper. That leaves us with the idea of collecting text and sharing it with people -- and one does not need a word processor for that.
Sunday, September 25, 2011
Does Windows 8 make Windows our new legacy?
Windows 8 is a big change from all previous versions of Windows. Not only is the user interface significantly different (tiled Windows are a thing of the past), the underlying APIs are different. The Win32 and Win APIs that we have used for years (decades?) have been replaced by WinRT.
Windows 8 does not completely drop support for WinAPI and tiled windows. They are supported in the Desktop app, an app available under Metro and WinAPI.
But the position of WinAPI has moved from premier to second. It is now "the old thing". And by extension, applications that use WinAPI are now "the old thing". In other words, legacy applications.
If the platform on which we build our application becomes obsolete, so do our applications. It matters not what we think of our applications. We may love them or despise them, respect them for their profitability or fear them for the maintenance headaches. But the ability to support them rests on the platform; without it, our applications vanish.
Microsoft has signalled that they are removing the WinAPI platform. To live, our applications must move to the new platform.
Monday, September 19, 2011
Microsoft closes the door to Windows
Microsoft's primer for Metro yields lots of interesting information. What I find most interesting is that Microsoft is "closing the door" for Windows development. And by that, I mean that Microsoft is exerting more control over the market. I see two limitations:
- Microsoft becomes a gatekeeper, evaluating and approving those apps that become available in the Windows App Store
- Microsoft limits the languages for development
Gating apps for distribution is possibly a mistake. The great strength of Windows was its open market. Anyone could develop and sell (or give away) a Windows application. The entry price was not free (one needed the tools and the knowledge) but the market was open to all comers. Microsoft made some attempt to ensure quality, with such things as "Ready for Win95" requirements, but always let anyone develop and distribute software.
The approval process is limited to the Metro side of Windows 8; I believe that classic Windows apps which run under the Desktop app will follow the old model. Yet I believe that the Desktop app is a transition device, akin to the "DOS box" that was in Windows 95 and later versions. The Desktop becomes the place for "legacy" apps, apps with a limited life span. Metro is the future, and Microsoft will build tools and support for the brave new Metro world.
Enterprises and developers will be able to create and install their own apps without going through the Windows App Store. I'm guessing that Microsoft will limit this capability, requiring developers to sign their apps and allowing them to install only their own apps -- they won't be able to install an app from a different developer. (I'm guessing the same will hold for enterprises, too.)
So as a Windows developer, I can build and test my app on my hardware -- but I cannot give it (or sell it) to you, unless I go through the Windows App Store.
And I'm guessing that the ability to install self-signed apps will come for a price: a developer license, or maybe a signing license. (Probably multiple levels of license, from developer to enterprise, with different price tags.)
Microsoft also limits the languages for Metro apps to a few: C++, C#, Visual Basic, and Javascript. This is an interesting development, given the genesis of .NET. The announcement of .NET emphasized the plurality of languages (some from Microsoft and some from third parties) which contrasted .NET against Java's "one language for everyone" design.
With these two changes, it becomes clear that Linux is the only open platform, allowing choice for development languages and an open market. Microsoft joins Apple and Google/Android in the closed and controlled market for apps.
If we define the beginning of the PC revolution as the IMSAI 8800 (from 1975) with its totally open architecture, we can mark today as another step towards the end of that revolution.
Thursday, September 15, 2011
Business must be simple enough for a cell phone
Is your business simple enough to serve customers on a cell phone? If not, you may want to think about changing your business.
In the "good old days", companies had large mainframe computers and those computers allowed for complex businesses. Computers allowed banks to pay interest based on the day of deposit and the day of withdrawal. they allowed telephone companies to set rates beyond the simple commercial and residential categories. The best example is airlines, with their complex systems for reservations and fares.
Personal computers, despite their name, were never simple. The applications ranged from simple to complex, and even the set-up of the equipment required some expertise. The typical PC application is a complex beast. Even the simple Windows applications of Notepad and Calculator have nooks and crannies, little features that are unobvious.
The World Wide Web changed the idea of complexity. Web pages can be simple or complicated, but a business on web pages can afford only so much complexity. When customers use "self-service" web sites, complexity is your enemy. A customer will accept only so much complexity; after that they call your help desk.
Cell phones and tablet computers have set a new bar for simplicity in applications. An app on a cell phone must be simple; the size of the display constrains the complexity. Tablet computers are following the cell phone model, not the desktop PC model.
I'm convinced that Microsoft recognizes this; Windows 8 and its "Metro" interface are geared for simpler applications.
Consumers have come to expect a simple experience. Most Microsoft applications are complicated, from their file formats to the installation routines to their "average" use. This complexity is a failure of the operating system that was going to make computers "easy to use" and "intuitive". If you are following the Microsoft model of applications (multiple windows, lots of dialogs), you will have a difficult time living in the brave new world of cell phone and tablet apps.
But it's not just your software. If your software is complicated, it probably means that your business is complicated. (For example, airline reservations.) Complex businesses require complex software, and complex software does not fit in the cell phone interface.
Smart phones have been out for several years. If you cannot offer your business to customers on a cell phone (because it's too complicated), you may want to think about your business. The cell phone and the tablet are the new location for business. If you cannot fit there, you will be unable to survive.
In the "good old days", companies had large mainframe computers and those computers allowed for complex businesses. Computers allowed banks to pay interest based on the day of deposit and the day of withdrawal. they allowed telephone companies to set rates beyond the simple commercial and residential categories. The best example is airlines, with their complex systems for reservations and fares.
Personal computers, despite their name, were never simple. The applications ranged from simple to complex, and even the set-up of the equipment required some expertise. The typical PC application is a complex beast. Even the simple Windows applications of Notepad and Calculator have nooks and crannies, little features that are unobvious.
The World Wide Web changed the idea of complexity. Web pages can be simple or complicated, but a business on web pages can afford only so much complexity. When customers use "self-service" web sites, complexity is your enemy. A customer will accept only so much complexity; after that they call your help desk.
Cell phones and tablet computers have set a new bar for simplicity in applications. An app on a cell phone must be simple; the size of the display constrains the complexity. Tablet computers are following the cell phone model, not the desktop PC model.
I''m convinced that Microsoft recognizes this; their new Windows 8 and its "Metro" interface is geared for simpler applications.
Consumers have come to expect a simple experience. Most Microsoft applications are complicated, from their file formats to the installation routines to their "average" use. This complexity is a failure of the operating system that was going to make computers "easy to use" and "intuitive". If you are following the Microsoft model of applications (multiple windows, lots of dialogs), you will have a difficult time living in the brave new world of cell phone and tablet apps.
But it's not just your software. If your software is complicated, it probably means that your business is complicated. (For example, airline reservations.) Complex businesses require complex software, and complex software does not fit in the cell phone interface.
Smart phones have been out for several years. If you cannot offer your business to customers on a cell phone (because it's too complicated), you may want to think about your business. The cell phone and the tablet are the new location for business. If you cannot fit there, you will be unable to survive.
Tuesday, September 13, 2011
Microsoft Metro is not their PS/2 -- it's worse
Microsoft is releasing more information about Windows 8 and its "Metro" interface. Metro is very different from the "traditional" Windows interface. It is more like a cell phone or tablet interface, displaying one application at a time (no tiled windows) and built to handle touch screen gestures.
IBM introduced the PS/2 (and OS/2) in 1987, six years after the introduction of the original IBM PC. The PS/2 was a step up from the PC/XT/AT product line, addressing multiple problems with that line. The PS/2 had a different floppy drive, a smarter bus, better video, a smaller keyboard connector, and a built-in mouse port, to name the big improvements.
The problem was, no one followed IBM. This was in part due to IBM's licensing arrangements. The original PC was "open", at least in its hardware. It took a while for Compaq and other manufacturers to develop a compatible BIOS, which allowed them to build computers with sufficiently compatible behavior to run popular software.
Instead of following IBM, people followed Compaq, which introduced their "Deskpro" line. The Deskpros were fast PCs that used the traditional connectors and busses, with faster processors and more memory. Compaq also beat IBM with the first 80386-based PC, the "Deskpro 386".
IBM found that it was not the market leader. (You're not a leader when you say, "Let's go this way" and no one follows.)
Now Microsoft is introducing Windows 8 and Metro. Are they pulling a "PS/2"? The answer seems to be "no".
Metro is a big change. Metro is certainly different from the Windows interface introduced in Windows 1.0 and enhanced in Windows 95. The shift from tiled windows to single-app visibility is a large one. This change is as big as the PC-to-PS/2 change.
Microsoft is committing to Metro. Windows 8 and Metro include a "Legacy Windows" app, in which one can run old-style Windows applications. The Microsoft propaganda says that legacy apps will be supported in Windows 8, and I am sure that they will. But let's not fool ourselves: the new stuff will be in Metro.
Metro is not a bold new paradigm. Microsoft is shifting to the paradigm introduced by iOS and Android. In this way, Microsoft is not leading the market but following it.
The Microsoft App Store is also not a bold new paradigm. An app store is, again, following the market. Yet it alienates the software distribution channel: when software is sold on-line and not in boxes, the retailers (Best Buy, etc.) have nothing to sell.
The market will not revert to older designs. The market rejected IBM's PS/2 and selected Compaq (with its old-style designs) as their new leader. With Windows 8 and Metro, Microsoft is validating the existing markets for iOS and Android apps. The only old-style provider would be Linux, which offers multi-tiled desktop applications, and I see few people (and even fewer companies) abandoning Windows for Linux.
The PC did, eventually, mutate to something quite similar to the PS/2 design. The new keyboard and mouse connectors were adopted. The PS/2 Micro-channel bus was not adopted, but the PCI bus was. The VGA standard was adopted and quickly surpassed with Super VGA, WVGA, XVGA, and UVGA and a host of others. Everyone used the new 1.44MB 3.5" floppy disk standard. The only thing lacking from the PS/2 was the orange switch for power.
Will something similar happen with operating systems and software? I think that the answer is "yes". Will Microsoft be the company to lead us? I'm not so sure. Apple and Android have a commanding presence.
So I don't know that "Metro" is Microsoft's "PS/2 event". But I do think that it is a significant change.
Monday, September 12, 2011
Ugly business makes for ugly code
A number of associates bemoan the state of their code. More specifically, they complain of the complexity of the code, and of the time and effort required to implement changes and new features.
The complexity of a program is the sum of two elements: the complexity of the business and the complexity of the source code. Complexity of the business is decided by the management of the business; complexity of the code is a bit more subtle.
A program can be written in multiple ways; some simple and some complex. In my experience, non-trivial programs can be complicated or simple (some might use the word "elegant"), but the time for solutions is different. Complicated solutions can be written quickly; elegant solutions require more time. A "quick and dirty" solution is just what it sounds like: hastily assembled code that gets the job done but is a jumble of logic.
Simple code, code that is easy to maintain, is harder to write than jumbled code. Simple code is the result of jumbled code that is refined and improved through multiple iterations of design. I know of no programmers that write simple, elegant code from the get-go. They all start with messy code, confirm that they have the correct behavior, and then refactor the code. Those refinements take time and skill.
The one truth about simple code is this:
If you want simple code, you have to pay for it.
But there is a limit to the simplicity of your code. That limit is defined by your business. Your business has operational requirements; your programs must meet those requirements.
The one truth about complexity is this:
No amount of programming will simplify complex business rules. If you have a complex business, you will have complex programs.
How you choose to run your business is, well, your business. But consider this: complex programs cost more to maintain than simple programs. The business benefit of complex rules must yield enough additional revenue to cover the cost of the additional maintenance; if not, you are spending money for a net negative return on investment.
Monday, September 5, 2011
Simple is the new black
If you haven't noticed, we've had a paradigm shift.
We've changed our expectations of computer programs, from comprehensive and complex to simple and easy to use.
In the old model, we had large, complicated programs that were accompanied by manuals (installation manuals, reference manuals, operations manuals) and how-to books ("Learn Microsoft Word in 21 Days!"). Recent versions of software had no printed manual but large help files and comprehensive web pages.
The value of the software was based on the weight of the box and accompanying materials. Small, light packages were valued less than heavy packages. An application with lots of documentation was worth more than an application with little documentation. Enterprise applications were worth even more, since they required not only manuals but dedicated administrators and specialists to teach the regular users.
Smartphones and tablets changed that model. They defined and validated a different method to evaluate software.
In the new model, we use software without referring to manuals. In the new model, we expect software to work for us. In the new model, we value results.
This bodes ill for programs such as Microsoft Word and Microsoft Excel. These programs are complex. They offer many features, and lots of control over the document (or spreadsheet), but they require a "ramp-up" time. We're no longer willing to pay that time. Software has been swallowed into the age of "immediate self-gratification".
It also bodes ill for enterprise software. Well, enterprise software that provides no value to the enterprise. Large corporations may put up with complicated software, but it must prove itself. We no longer send people to week-long classes for the use of software. The software has to work immediately, and people have to be productive immediately.
Complex and comprehensive is not sufficient. Software has to offer value, and it must be immediate and recognized. It must provide value to users (and the enterprise). And it must do it from "minute one", from when we start using the software. Forcing people to learn the habits and quirks of the software is "out"; making people effective immediately is "in".
Thursday, September 1, 2011
Microsoft's big opportunity
Apple has had lots of success with iPhones, iPods, and iPads. They have redefined the computer for the consumer market. Microsoft has not kept up with Apple, and one can say that Apple has surpassed Microsoft with its products.
The key word in that paragraph is "consumer". Apple has excellent products for consumers, people who use computers and digital goods. But Apple is not so good at infrastructure (consider the Xserve) and composition tools. Apple has had to go to outside companies for tools to create digital media: Adobe for PDF and Photoshop, Microsoft for its Office suite.
Microsoft has a long history of products that let people create stuff. The earliest was their BASIC interpreter, followed quickly by compilers for COBOL, FORTRAN, and C. Some were developed in-house, others were acquired, but Microsoft was there and ready to supply the builders. Microsoft also has Visual Studio, one of the best environments for developing programs. Beyond programming tools, Microsoft offers its Office suite that allows people to create documents, spreadsheets, presentations, databases, and project plans.
Apple's products are clearly for consumers. The iPad is a wonderful device, but it is not meant for the creator of goods. Its on-screen keyboard is insufficient for true development work. (Yes, you can get a Bluetooth keyboard. But at that point, why not a laptop?)
Apple has shown that it is not interested in the market for builders (at least not beyond OSX and iOS platforms). This is Microsoft's opportunity: They can be the supplier of premier development tools for Windows and OSX.
They need not stop at Apple platforms. The Linux tools for development are good, but they are not as good as Microsoft's tools. Here too, Microsoft has the opportunity to become the leader in composition/creation tools. (I'm including compilers in this list, not simply documents and spreadsheets.)
To win these markets, Microsoft must move away from the "Windows and only Windows" mindset. It is an attitude that forces them to build everything: the operating system, the composition tools, and the consumer products. And they haven't done such a great job at all of that.
There is more to it than simply building tools for other platforms. Microsoft has alienated the users of those other platforms, and must reconcile those bad feelings. Microsoft also has licensing issues to work out -- the Linux crowd expects software for free -- but I think their recent "Express" products may lead the way. The solution is within Microsoft's reach.
Monday, August 29, 2011
Getting old
Predicting the future of technology is difficult. Some technologies endure, others disappear quickly. Linux is twenty years old. Windows XP is ten, although bits of the Windows code base go back to Windows NT (and possibly Windows 3.1 or even MS-DOS). Yet the "CueCat", Microsoft "Bob", and IBM "TopView" all vanished in short order.
One aspect of technology is easy to predict: our systems will be a mix of old and new tech.
Technology has always been a mix of new and old. In the Elder Days of PCs (the era of the Apple II, CP/M, and companies named Cromemco and Northstar), computer systems were blends of technology. Computers were powered by microprocessors that were low-end even in their day, to reduce cost. We stored data on floppy disks (technology from the minicomputer age), on cassette tapes (a new twist on old, cheap hardware), and some folks used paper tape (tech from the 1960s).
Terminals were scrounged keyboards and displays; often the display was a television showing 40 uppercase characters per line. The better-off could afford systems with built-in equipment like the Radio Shack TRS-80 and the Commodore PET.
The software was a mix. Most systems had some form of Microsoft BASIC built into ROM; advanced systems allowed for the operating system to be loaded from disk. CP/M was popular and new to the microcomputer era, but it borrowed from DEC's operating systems. Apple had their DOS; Heathkit had their own HDOS but also supported CP/M and the UCSD p-System.
We all used BASIC. We knew it was from the timesharing era, despite Microsoft's extensions. When not using BASIC, we used assembly language and we knew it was ancient. A few brave souls ventured into Digital Research's CBASIC, FORTH, or a version of Tiny C.
The original IBM PC was a mix of off-the-shelf and new equipment. The keyboard came from the earlier IBM System/23, albeit with different labels on the keys. The motherboard was new. The operating system (MS-DOS) was new, but a clone of CP/M.
Our modern equipment uses mixed-age technologies. Modern PCs have just now lost the old PS/2 keyboard and mouse ports (dating back to the 1987 IBM PS/2) and the serial and parallel ports (dating back to the original 1981 IBM PC, and earlier still in larger, more rugged connectors).
Apple has done a good job at moving technology forward. The iPhone and iPad devices have little in the way of legacy hardware and software. Not bad for a company whose first serious product had a built-in keyboard but needed a television to display 24 rows of 40 uppercase characters.
Sunday, August 28, 2011
A successful project requires diverse skills
My introduction to the Pentaho suite (and specifically the "Spoon" and "Kettle" tools) gave me some insight into necessary skills for a successful project.
For a successful development project, you need several, diverse skills. All are important. Without any of them, the project will suffer and possibly fail.
First is knowledge of the tools: the operations they offer and the widgets they provide -- often more widgets than a single person can master.
Second is knowledge of the business resources. What data is available? How is the data named? Where is it stored? Most importantly, what does it mean? As with widgets, some data domains have "too much" data, such that a single person can never be sure that they have the right data. (Large organizations tend to have analysts dedicated to the storage and retrieval of data.) Beyond data, you need awareness of servers and network resources.
Third is knowledge of the business environment. What external requirements (regulations) affect your solution? What are the self-imposed requirements (policies)? What about authentication and data retention?
Fourth is the ability to interact with business folks and understand the requirements for the specific task or tasks. (What, exactly, should be on the report? What is its purpose? Who will be using it? What decisions do they make?)
Finally, we have programming skills. The notions of iteration, selection, and computation are needed to build a workable solution. You can lump in the skills of iterative development (extreme programming, agile development, or any set you like). Once you understand the tool and the data available, you must compose a solution. It's one thing to know the tools and the problem domain, it's another to assemble a solution. It is these skills that make one a programmer.
Some organizations break the development of a system into multiple tasks, assigning different pieces to different individuals. This division of labor allows for specialization and (so managers hope) efficiency. Yet it also opens the organization to other problems: you need an architect overseeing the entire system to ensure that the pieces fit, and it is easy for an organization to fall into political bickering over the different subtasks.
Regardless of your approach (one person or a team of people), you need all of these skills.
Thursday, August 25, 2011
Farewell Steve Jobs
Jobs was the last of the original titans of microcomputers. There were many folks in the early days, but only a few known by name. Steve Wozniak, Gary Kildall, and Bill Gates were the others.
Those titans (the known-by-name and the unsung) made the microcomputer revolution possible. Companies like Apple, Microsoft, Digital Research, Radio Shack, Commodore, Heathkit, and even TI and Sinclair all made personal computing possible in the late 1970s and early 1980s.
There are few titans today. Yes, we have Steve Ballmer at Microsoft and Larry Ellison at Oracle, but they are business folks, not technologists. The open source community has its set (Linus Torvalds, Eric S Raymond, and others) but the world is different. The later titans are smaller, building on the shoulders of their earlier kin.
Steve Jobs and Apple taught us some valuable lessons:
Design counts: The design of a product is important. People will pay for well-designed products (and avoid other products).
Quality counts: People will pay for quality. Apple products have been priced higher than corresponding PC products, and people buy them.
Try things and learn from mistakes: Apple tried many things. There were several incarnations of the iPod before it became popular.
One can enter an established market: Apple entered the market with its iPod well after MP3 players were established and "the norm". It also entered the market with its iPhone.
One can create new markets: The iPad was a new thing, something previously unseen. Apple made the market for it.
Drop technology when it doesn't help: Apple products have mutated over the years, losing features that most folks would say are required for backwards compatibility. AppleTalk, the PS/2-style keyboard and mouse ports, RS-232 serial ports, Centronics parallel printer ports, even FireWire have all been eliminated from the Apple line.
Use marketing to your advantage: Apple uses marketing strategically, coordinating it with products. It also uses it as a weapon, raising Apple above the level of the average technology companies.
Replace your own products: Apple constantly introduces new products to replace existing Apple products. They don't wait for someone else to challenge them; they constantly raise the bar.
Focus on the customer: Apple has focussed on the customer and their experience with the product. Their customer experience beats any product, commercial or open source.
Apple must now live without Steve Jobs. And not only Apple, but all of us. Steve Jobs' influence was not merely within Apple but extended to the entire computing world.
Monday, August 22, 2011
Immutable Object Programming
I've been working with "Immutable Object Programming" and becoming more impressed with it.
Immutable Object Programming is object-oriented programming with objects that, once created, do not change. It is a technique used in functional programming, and I borrowed it as a transition from traditional object-oriented programming to functional programming.
Immutable Object Programming (IOP) enforces a discipline on the programmer, much like structured programming did. With IOP, one must assemble all components of an object prior to its creation. Traditional object-oriented programming allows objects to change state; with IOP they cannot, and you do not want them to. Instead, you want a new object, often of a different type. When you have new information, you construct a new object from the old one, adding the information and producing an object of a similar but different type. (For example, a Sale object and a payment are used to construct a CompletedSale object.)
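Here is a rough sketch of the idea in C#. The Sale and CompletedSale types come from the example above; the specific members (Item, Price, Amount) are invented for illustration.
public sealed class Payment
{
    public decimal Amount { get; private set; }
    public Payment(decimal amount) { Amount = amount; }
}

public sealed class Sale
{
    public string Item { get; private set; }
    public decimal Price { get; private set; }
    public Sale(string item, decimal price) { Item = item; Price = price; }
}

public sealed class CompletedSale
{
    public Sale Sale { get; private set; }
    public Payment Payment { get; private set; }

    // All components are assembled before construction; nothing changes afterward.
    // The setters are private and used only in the constructor, so the object never changes state.
    public CompletedSale(Sale sale, Payment payment)
    {
        Sale = sale;
        Payment = payment;
    }
}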
IOP yields programs that have lots of classes and mostly linear logic. The majority of statements are assignment statements -- often creating an object -- and the logic for iteration and decisions is contained within the constructor code.
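The calling code then reads as a straight line of constructions -- no mutation, few branches. (Again, the names here are only illustrative.)
// Each statement creates one immutable object from the ones before it.
Sale sale = new Sale("widget", 10.00m);
Payment payment = new Payment(10.00m);
CompletedSale completed = new CompletedSale(sale, payment);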
As a programmer, I have a good feeling about the programs I write using IOP techniques. It is a feeling of certainty, a feeling that the code is correct. It is a good feeling.
I experienced this feeling once before, when I learned structured programming techniques. At the time, my programs were muddled and difficult to follow. With structured programming techniques, my programs became understandable.
I have not had that feeling since. I did not experience it with object-oriented programming; OOP was difficult to learn and not clarifying.
You can use immutable object programming immediately; it requires no new compiler or language. It requires a certain level of discipline, and a willingness to change. I use it with the C# language; it works with any modern language. (For this conversation, C++ is omitted from the set of modern languages.) I started with the bottom layer of our objects, the ones that are self-contained. Once the "elementary" objects were made immutable, I moved up a layer to the next set of objects. Within a few weeks I was at the highest level of objects in our code.
Monday, August 15, 2011
Iterating over a set is better than looping
When coding, I find it better to use the "foreach" iterator than the "for" loop.
The two are similar but not identical. The "for" operation is a loop for a fixed number of times; the "foreach" operation is applied to a set and repeats the contained code once for each member of the set. A "for" loop will often be used to achieve the same goal, but there is no guarantee that the number of iterations will match the size of the set. A "foreach" iteration is guaranteed to match the set.
For example, I was reviewing code with a colleague today. The code was:
for (int i = 0; i < max_size; i++)
{
    for (int j = 0; j < struct_size; j++, i++)
    {
        item[i] = // some value
    }
}
This is an unusual construct. It differs from the normal nested loop:
- The inner loop increments both index values (i and j)
- The inner loop contains assignments based on index i, but not j
What's happening here is that the j loop is used as a counter, and the index i is used as an index into the entire structure.
This is a fragile construct: max_size must hold the size of the entire "item" structure, when normally it would hold the number of larger elements, each containing struct_size items. Changing the size of item requires understanding (and remembering) this bit of code, since it must change as well (or at least the initialization of max_size must change).
Changing this code to "foreach" iterators would make it more robust. It also requires us to think about the structures involved. In the previous code, all we know is that we have "max_size" number of items. If the set is truly a linear set, then a single "foreach" is enough to initialize them. (So is a single "for" loop.) If the set actually consists of a set of items (a set within a set), then we have code that looks like:
foreach (Item_set i in larger_set)
{
    foreach (Item j in i)
    {
        j = // some value
    }
}
Of course, once you make this transformation, you often want to change the variable names. The names "i" and "j" are useful for indices, but with iterators we can use names that represent the actual structures:
foreach (Item_set item_set in larger_set)
{
    foreach (Item item in item_set)
    {
        item = // some value
    }
}
Changing from "for" to "foreach" forces us to think about the true structure of our data and align our code with that structure. It encourages us to pick meaningful names for our iteration operations. Finally, it gives us code that is more robust and resilient to change.
I think that this is a win all the way around.
Sunday, August 14, 2011
The web and recruiting
When times are hard, and lots of people are seeking employment, companies have an easy time of hiring (for those companies that are even hiring). When times are good, more people are employed and fewer people seek employment. Companies wishing to hire folks have a more difficult time, since there are fewer people "in the market". The "market" of available people swings from a "buyer's market" to a "seller's market" as people become more or less available.
The traditional method of finding people is through staffing agencies and recruiting firms. I suspect that the internet and the web will change this. Companies and candidates can use new means of advertising and broadcasting to make information available, either about open positions or available talent. From Facebook and LinkedIn to contributions to open source projects, candidates make a wealth of information available.
Companies will take one of two approaches. Group A companies will use the web to identify candidates and reach out to them. They will use the web as a means of finding the right people. Group A companies use the web-enabled information.
Group B companies will use the web in a passive role. They will post their open positions and wait for candidates to apply. (And they will probably demand that the candidate submit their resume in Word format only. They won't accept a PDF or ODT file.) They may use web sites to check on candidates, probably looking for pictures of the person dressed as a pirate. I expect that they will do little to review contributions to open source projects.
When it comes to finding talent, the group A companies have the advantage. They can evaluate candidates early on and make offers to the people who have the skills that they seek. The group B companies have a harder time, as they have to filter through the applications and review candidates.
I suspect that, between the two strategies, the group A companies will be more effective and have the smaller effort. It's more work for each candidate, but less work overall. The group B companies will spend less time on each applicant, but more time overall. (And by spending less time on each candidate, they make less effective decisions.)
I also suspect that both groups will think that their strategy is the lesser effort and more effective.
So which company do you want to be? And which company would you rather work with?