Sunday, January 2, 2011
Predictions for 2011
The turning of the year provides a time to pause, look back, and look ahead. Looking ahead can be the most fun, since we can make predictions.
Here are my predictions for computing in the coming year:
Tech that is no longer new
Virtualization will drop from the radar. Virtualization for servers is no longer exciting -- some might say that it is "old hat". Virtualization for the desktop is "not quite fully baked". I expect modest interest in virtualization, driven by promises of cost reductions, but no major announcements.
Social networks in the enterprise are also "not quite fully baked", but here the problem is with the enterprise and its ability to use them. Enterprises are built on command-and-control models and don't expect individuals to comment on each other's projects. When enterprises shift to results-oriented models, enterprise social networks will take off. But this is a change in psychology, not technology.
Multiple Ecosystems
The technologies associated with programming are a large set, and not a single bunch. Programmers seem to enjoy "language wars" (C# or C++? Java or Visual Basic? Python or Ruby?) and the heated debates will continue in 2011. But beyond languages, the core technologies are bunched: Microsoft has its "stack" of Windows, .NET, C#, and SQL Server; Oracle with its purchase of Sun has Solaris, JVM, Java, Oracle DB, and MySQL; and so forth.
We'll continue to see the fracturing of the development world. The big camps are Microsoft, Oracle, Apple, Google, and open source. Each has their own technology set and the tools cross camps poorly, and I expect the different ecosystems will continue to diverge. Since each technology set is too large for a single person to learn, individuals must pick a camp as their primary skill area and forgo other camps. Look to see experts in one environment (or possibly two) but not all.
Companies will find that they are consolidating their systems into one of the big five ecosystems. They will build new systems in the new technology, and convert their old systems into the new technology. Microsoft shops will convert Java systems to C#, Oracle shops will convert Visual Basic apps to Java, and everyone will want to convert their old C++ systems to something else. (Interestingly, C++ was the one technology that spanned all camps, and it is being abandoned or at least deprecated by employers and practitioners.)
Microsoft will keep .NET and C#, and continue to "netify" its offerings. Learning from Apple, it will shift away from web applications in a browser to internet applications that run locally and connect through the network. Look for more "native apps" and fewer "web apps".
Apple will continue to thrive in the consumer space, with new versions of iPads, iPods, and iPhones. The big hole in their tech stack is the development platform, which compiles to the bare processor and not to a virtual machine. Microsoft uses .NET, Oracle uses JVM, and the open source favorites Perl, Python, and Ruby also use interpreters and virtual machines.
Virtual processors provide three advantages: 1) superior development tools, 2) improved security, and 3) independence from physical processors. Apple needs this independence; look for a new development platform for all of their devices (iPhone, iPad, iPod, and Mac). This new platform will require a relearning of development techniques, and may possibly use a new programming language.
Google has always lived in the net and does not need to "netify" its offerings. Unlike Microsoft, it will stay in the web world (that is, inside a browser). I expect modest improvements to things such as Google Search, Google Docs, and Google Chrome, and major improvements to Google cloud services such as the Google App Engine.
The open source "rebel alliance" will continue to be a gadfly with lots of followers but little commercial clout. Linux will be useful in the data center but it will not take over the desktop. Perl will continue its slow decline; Python, Ruby, and PHP will gain. Open source products such as Open Office may get a boost from the current difficult economic times.
Staffing
Companies will have a difficult time finding the right people. They will find lots of the "not quite right" people. When they find "the right person", that right person may want to work from home. Companies will have four options:
1) Adjust policies and allow employees to work from home or alternate locations. This will require revision to management practices, since one must evaluate on delivered goods and not on appearance and arrival time.
2) Keep the traditional management policies and practices and accept "not quite right" folks for the job.
3) Expand the role of off-site contractors. Companies that use off-site contractors but insist that employees show up to the office every day and attend Weekly Status Meetings of Doom will be in a difficult situation: How to justify the "work in the office every day" policy when contractors are not in the office?
4) Defer hiring.
How companies deal with staffing in an up market, after so many years of down markets, will be interesting and possibly entertaining.
New tech
Cloud computing will receive modest interest from established shops, but it will take a while longer for those shops to figure out how to use it. More interest will come from startups. The benefits of cloud computing, much like the PC revolution of the early 1980s, will be in new applications, not in improving existing applications.
We will see an interest in functional programming languages. I dislike the term "functional" since all programming languages let you define functions and are functional in the sense that they perform, but the language geeks have their reason for the term and we're stuck with it. The bottom line: Languages such as Haskell, Erlang, and even Microsoft's F# will tick up on the radar, modestly. The lead geeks are looking into these languages, just as they looked into C++ in the mid 1980s.
The cloud suppliers will be interested in functional programming. Functional languages are a better fit in the cloud, where processes can be shuffled from one processor to another. C# and Java can be used for cloud applications, but such efforts require a lot more discipline and careful planning.
Just as C++ was a big jump up from C and Pascal, functional languages are a big jump up from C++, Java, and C#. Programming in a functional language (Haskell, Erlang, or F#) requires a lot of up-front analysis and thought.
The transition from C to C++ was driven by Windows and its event-driven model. The transition from object-oriented to functional programming will be driven by the cloud and its new model. The shift to functional languages will take time, possibly decades. Complicating the transition will be the poor state of object-oriented programming. Functional programming assumes good-to-excellent knowledge of object-oriented programming, and a lot of shops use object-oriented languages but not rigorously. These shops will have to improve their skills in object-oriented programming before attempting the move to functional programming.
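The stylistic gap between the two models can be sketched in a few lines. Here is a toy comparison, written in Python since it supports both styles (the functions and names are my own invention, not from any particular framework). The functional version avoids mutable state, which is part of why such code can be shuffled between processors in a cloud environment:

```python
# Procedural style: build up a result by mutating an accumulator.
def total_procedural(prices, tax_rate):
    total = 0.0
    for p in prices:
        total += p
    total *= (1 + tax_rate)
    return total

# Functional style: no mutation, just composed expressions.
# A pure function like this depends only on its inputs, so a
# scheduler can run it anywhere without worrying about shared state.
def total_functional(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)
```

Both return the same answer; the difference is discipline, not capability -- which is the point about needing more up-front analysis.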
These are my predictions for the coming year. I've left out quite a few technologies, including:
Ruby on Rails
Silverlight
Windows Phone 7
NoSQL databases
Perl 6
Microsoft's platforms (WPF, WWF, WCF, and whatever they have introduced since I started writing this post)
Google's "Go" language
Android phones
Salesforce.com's cloud platform
There is a lot of technology out there! Too much to cover in a single post. I've picked those items that I think will be the big shakers. Let's see how well I do! We can check in twelve months.
Sunday, December 19, 2010
Are you ready for another step?
The step to high-level languages was perhaps the most traumatic, since it was the first. It saw the deprecation of assembly language and its fine control of the program, in exchange for the ability to write programs with little concern for memory layout and register assignment.
The step to structured programming was also controversial. The programming police took away our GOTO statement, that workhorse of flow control, and limited us to sequential statements, if/then/else blocks, and while loops.
When we moved from structured programming to object-oriented programming, we had to learn a whole new "paradigm" (and how to spell and pronounce the word "paradigm"). We learned to organize our code into classes, to encapsulate our data, to build a class hierarchy, and to polymorphize our programs.
Funny thing about each of these steps: they built on the previous steps. Structured programming assumed the existence of high level languages. There was no (noticeable) movement for structured assembly language. Object-oriented programming assumed the tenets of structured programming. There was no GOTO in object-oriented programming languages, except for the C++ "goto" keyword which was offered up on the altar of backwards compatibility and only then with restrictions.
And now we are about to move from object-oriented programming to functional programming. Once again, the "old school" programmers will gnash their teeth and complain. (Of course, it will be a different bunch than the teeth-gnashers of the golden age of assembly language. The modern teeth-gnashers will be those who advocated for object-oriented programming two decades ago.) And once again we will move to the new thing and accept the new paradigm of programming.
Yet those shops which attempt to move up to the step of functional programming will find a challenge.
Here's the problem:
Many shops, and I suspect most shops, use only a small fraction of object-oriented programming concepts in their code. They have not learned object-oriented programming.
Big shops (and medium-size shops, and small shops) have adopted object-oriented languages (C++, Java, C#) but not adopted the mindset of object-oriented programming. Much of the code is procedural code, sliced into classes and methods. It is structured code, but not really object-oriented code.
Most programmers are writing not clean, object-oriented code, but FORTRAN. The old saying "I can write FORTRAN in any language" is true because the object-oriented languages allowed for the procedural constructs. (Mind you, the code is FORTRAN-77 and not FORTRAN IV or FORTRAN II, but FORTRAN and procedural it is.)
This model breaks when we move to functional languages. I suspect that one can write object-oriented code (and not pure functional code) in functional languages, but one cannot write procedural code in functional languages, just as one cannot write unstructured code in object-oriented languages. The syntax does not allow for it.
The shops that move to functional languages will find that their programmers have a very hard transition. They have been writing procedural code, and that technique will no longer work.
What is an IT shop to do? My recommendations:
First, develop better skills at object-oriented programming. This requires two levels of learning: one for individuals, and a second for the organization. Individuals must learn to use the full range of object-oriented programming techniques. Organizations must learn to encourage object-oriented programming and must actively discourage the older, structured programming techniques.
Second, start developing skills in functional programming. If using a "plain" object-oriented programming language such as C++, build discipline in techniques of functional programming. My favorite is what I call "constructor-oriented" programming, in which you use immutable objects. All member variables are set in the constructor and do not allow methods to change any values. This exercise gives you experience with some of the notions in functional programming.
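A rough illustration of this "constructor-oriented" discipline, sketched here in Python rather than C++ for brevity (the class and its fields are invented for the example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Account:
    """All state is fixed in the constructor; no method mutates it."""
    owner: str
    balance: float

    def deposit(self, amount: float) -> "Account":
        # Instead of changing this object, return a new one --
        # the same pattern functional languages enforce everywhere.
        return Account(self.owner, self.balance + amount)

a = Account("alice", 100.0)
b = a.deposit(25.0)
# a is unchanged; b carries the new state.
```

Working this way in an object-oriented language builds exactly the habits -- immutability, values over state changes -- that a functional language will demand.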
The transition to functional languages will occur, just as the transition to object-oriented languages occurred. The question is, do you want your shop to move to functional programming on your schedule, or on someone else's schedule? For if you make no plans and take no action, it will occur as the external world dictates.
Friday, December 17, 2010
Think like a programmer
This past week, I've been working on a legacy application. It consists of 20,000 lines of C++ code, written and modified by a number of people of varying talent. The code works -- mostly -- and it certainly compiles and runs. But the code is poorly organized, uses unclear names for functions and variables, and relies on the C preprocessor. Identifying the source of problems is a challenge. As I work on the code, I look at the coding style and the architecture. Both make me wince. I think to myself: "Programmers -- real programmers -- don't think this way."
It's an arrogant position, and I'm not proud of the arrogance. The code was put together by people, some of whom are not professional programmers, doing their best. But the thought crosses my mind.
Cut to my other activity this week: reading about the Haskell programming language and the concepts behind functional programming. These concepts are as different from object-oriented programming as object-oriented programming was from procedural programming. In functional programming, the design is much more rigorous. The central organizational concepts are recursion and sets. The programs are hard to read -- for one who is accustomed to object-oriented code or the procedural-laden code that exists in many shops. Yet despite the strangeness, functional programming has elegance and a feeling of durability. As I look at programs written in Haskell, I think to myself: "Programmers -- real programmers -- don't think this way."
This is not the arrogant position I have with poorly structured code, but one of humility and frustration. I am disappointed with the programmers and the state of the craft. I covet the attributes of functional programming. I want programs to be elegant, readable, and reliable. I want programmers to have a high degree of rigor in their thinking.
In short, I want programmers to think not like programmers, but like mathematicians.
Monday, December 13, 2010
The importance of being significant
If I tell you that I have 100 dollars in my pocket, you will assume that I have *about* 100 dollars. I may have exactly 100 dollars, or I may have 95 or 102 or maybe even 120. My answer provides information to a nice round number, which is convenient for our conversation. (If I actually have $190, or anything more than $150, I should say "about 200 dollars", since that is the closer round number.) The phrase "100 dollars" is precise to the first digit (the '1' in '100') but not down to the last zero.
On the other hand, if I tell you that I have 100 dollars and 12 cents, then you can assume that I indeed have $100.12 and not something like $120 or $95. By specifying the 12 cents, I have provided an answer with more significant figures: five in the latter case, one in the former.
The number of significant figures is, well, significant. Or at least important. It's a factor in calculations that must be included for reliable results. There are rules for performing arithmetic with numbers, and significant figures tell us when we must stop adding digits of precision.
For example, the hypothetical town of Springfield has a population of 15,000. That number has two significant figures. If one person moves into Springfield, is the population now 15,001? The arithmetic we learned in elementary school says that it is, but that math assumes that the 15,000 population figure is precise to all places (five significant figures). In the real world, town populations are estimates (mostly because they change, but slowly enough that the estimate is still usable). The 15,000 figure is precise to two figures; it has limited precision.
When performing calculations with estimates or other numbers with limited precision, the rule is: you cannot increase precision. You have to keep to the original level of precision, or lose precision. (You cannot make numbers more precise than the original measurements, because that is creating fictional information.)
With a town estimate of 15,000 (two "sig-figs"), adding a person to the town yields an estimate of... 15,000. It's as if I told you that I had $100 in my pocket, and then I found a quarter and tucked it into my pocket. How much do I now have in my pocket? It's not $100.25, because that would increase the number of significant figures from one to five, and you cannot increase precision. We have to stick with one digit of precision, so I *still* have to report $100 in my pocket, despite my windfall.
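The rule can be expressed in code. Here is a toy helper -- my own sketch, not a feature of any standard library -- that rounds a value to a given number of significant figures, reproducing both examples above:

```python
import math

def round_sig(x, figures):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0
    # Position of the leading digit: 4 for 15001, meaning 10^4.
    magnitude = math.floor(math.log10(abs(x)))
    # Power of ten of the last significant digit to keep.
    exp = magnitude - figures + 1
    return round(x / 10 ** exp) * 10 ** exp

# A town of 15,000 (two sig-figs) gains one resident:
print(round_sig(15000 + 1, 2))   # prints 15000
# $100 (one sig-fig) plus a found quarter:
print(round_sig(100 + 0.25, 1))  # prints 100
```

A full treatment would also track the sig-fig count through each operation (taking the minimum across operands), but even this helper shows why 15,000 plus one person is still 15,000.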
In the engineering world, respecting the precision of the initial estimates is important for accurate estimates later in the calculations.
I haven't seen this concept carried over to the computer programming world. In programming languages, we have the ability to read and write integers and floating point numbers (and other data types). With integers, we often have the ability to specify the number of character positions for the number; for floating point, we can specify the number of digits and the number of decimal places. But the number of decimal places is not the same as the number of significant figures.
In my experience, I have seen no programming language or class library address this concept. (Perhaps someone has, if so please send info.) Knuth covers the concept in great detail in "The Art of Computer Programming" and explains how precision can be lost during computations. (If you want a scary read, go read that section of his work.)
There may be several reasons for our avoidance of significant figures:
It takes effort to compute. Using significant figures in calculations requires that we drag around additional information and perform additional adjustments on the raw results. This is a problem of computational power.
It requires additional I/O. There is more effort to specify the significant figures on input (and, to a lesser extent, output). This is an argument of language specification, numeric representation, and input/output capacity.
It reduces the image of authority associated with the computer. In Western culture, the computer holds a place of authority of information. Studies have shown that people believe the data on computer printouts more readily than data on hand-written documents. This is an issue of psychology.
Some domains don't need it. The banking industry, for example, uses numbers that are precise to a fixed decimal place. When you ask a bank for your balance, it responds with a number precise to the penny, not "about $100". This is an issue of the domain.
My thinking is that all of these arguments made sense in their day, but should be re-examined. We have the computational power and the parsing capabilities for accepting, tracking, and using significant figure information. While banking may be immune to significant figures (and perhaps that is only the accounting side of banking), many other domains need to track the precision of their data.
As for the psychological argument, there is no amount of technology, hardware, or language features that will change our thinking. It is up to us to think about our thinking and change it for the better.
Sunday, December 12, 2010
Simple or complex
We can add new features by adding a new layer onto an existing system, or expanding an existing layer within a system. Modifying an existing system can be difficult; adding a new layer can be fast and cheap. For example, the original Microsoft Windows was a DOS program that ran on PCs. Morphing DOS into Windows would have been a large effort (not just for development but also for sales and support to users who at the time were not convinced of the whole GUI idea) and a separate layer was the more effective path for Microsoft.
But adding layers is not without cost. The layers may not always mesh, with portions of lower layers bleeding through the upper layers. Each layer adds complexity to the system. Add enough layers, and the complexity starts to affect performance and the ability to make other changes.
The opposite of "complex" is "simple"; the opposite of "complexify" (if I may coin the word) is "simplify". But the two actions do not have equivalent effort. Where adding complexity is fast and cheap, simplifying a system is hard. One can add new features to a system; if users don't want them, they can ignore them. One has a harder time removing features from a system; if users want them they cannot ignore that the features are gone.
Complexity is not limited to PCs. Consumer goods, such as radios and televisions, were at one point complex devices. Radios had tubes that had to be replaced. TVs had tubes also, and lots of knobs for things like "horizontal control", "color", "hue", and "fine tuning". But those exposed elements of radio and TV internals were not benefits and not part of the desired product; they were artifacts of utility. We needed them to make the device work and give us our programming. They disappeared as soon as technically possible.
Automobiles had their share of complexity, with things like a "choke" and a pedal for the transmission. Some features have been removed, and others have been added. Cars are gaining complexity in the form of bluetooth interfaces to cell phones and built-in GPS systems.
Software is not immune to the effect of layers and complexity. Microsoft Windows was one example, but most systems expand through added layers. The trick to managing software is to manage not just the growth of features but to manage the reduction in complexity. Microsoft eventually merged Windows and DOS and Windows became an operating system in its own right. Microsoft continues to revise Windows, but they do it in both directions: they add features and expand capabilities with new layers, but they also remove complexity and the equivalent of the "fine tuning" knob.
Google's Cr-48 laptop is a step in simplifying the PC. It removes lots of knobs (no local programs, and even no Caps Lock key) and pushes work onto the internet. I predict that this will be a big seller, with simplicity being the sales driver.
Friday, December 10, 2010
Goodbye, Capslock!
Thursday, December 9, 2010
WebMatrix rides again -- but does it matter?
Microsoft has re-introduced its "WebMatrix" package for entry-level web developers. Not merely an incremental upgrade, the new WebMatrix is a totally new product with the old branding.
WebMatrix is its own thing, living in the space below Visual Studio (even the "Express" version). Yet it still uses .NET and C#. It has a neat little syntax, with things like:
value = @Request["username"];
It can use the type-inference constructs of the modern C# language: you can declare variables with "var" and let the compiler figure out the type. Microsoft has taken the unusual step of supporting an alien technology (PHP) in the package. They've worked hard on this product.
The real question is: does this new incarnation of WebMatrix have a future? It's a neat tool for web development. Some might say that it is what ASP.NET should have been. If ASP was "version 1", and ASP.NET was "version 2", then this is the proverbial "version 3" with which Microsoft finally succeeds.
But the time for web development has come and gone. WebMatrix is aimed at hobbyists, people working with a lighter tool than even the free Express version of Visual Studio. I don't know how many people are in that situation.
The WebMatrix product fills in a box on Microsoft's "we have solutions for all markets" diagram, but I have to wonder how many people live in that particular corner. WebMatrix is for hobbyists, and hobbyists have little money. The money is clearly in the enterprise packages, and enterprises won't be using WebMatrix. If Microsoft wants to grow an "ecosystem" of developers, their tools for hobbyists will have to offer something more than web apps. Hobbyists have moved on to mobile apps, mainly the iPhone and Android. The idea of WebMatrix strikes me as too little, too late.