A recent visit to the dentist got me observing and thinking. My thought was "Wow, look at all the stuff!"
The difference between the practice of dentistry and the practice of programming can be measured by the difference in stuff.
Compared to a developer (programmer, geek, whatever title you want), a dentist is burdened with a large quantity of stuff, much of it expensive.
First, the dentist (and this was a single practitioner, not a collection of dentists) has an office. That is, a physical space in a building. Not a randomly-assigned space as in a co-working office, but a permanently assigned space. A dentist needs a permanently assigned space, because the practice of dentistry requires a bunch of things.
Those things include an operating room (one can argue that this is an office within the office) with the specialized dentist chair, the specialized lamp, X-ray emitter, tools, tray for tools, mobile stand to hold the tray for the tools, sink and counter-top, hose with suction, water squirter, and other items I cannot readily identify.
The larger office has a receptionist area with receptionists (two of them!), patient files, folders (to hold the files), cabinets (to hold the folders), computers, printers, phones, chairs and desks, and general office supplies. It also has a waiting room with chairs, tables, lamps, potted plants, a television, magazines, wastebaskets, and Muzak.
Programming, on the other hand, needs the following items: a laptop computer with a certain amount of processing power and storage, an internet connection, and the rest of the internet. And a place to sit with a connection for power. Maybe a cell phone.
Now, the "rest of the internet" can hold a lot of stuff. Probably more than the few items in the dentist's office. And even if we limit that set to the things a programmer needs (editor, compiler, a few other tools, Twitter and Skype, and a browser), the number of items for each practice may be about the same.
But the programmer, in my mind (and I will admit that I am biased), has the more convenient set of stuff. It all fits in the laptop, and can be packed up and moved at a moment's notice. And programmers do not need permanent offices.
I suspect that we will achieve the "officeless office" before we achieve the "paperless office", and the move to the officeless office will occur profession by profession. Certain professions (probably the newer ones) will move to the officeless office first. Brand-new professions may start that way. Some professions may lag, and some may never move out of their permanent offices.
Thursday, April 26, 2012
Sunday, April 22, 2012
The big bucket at the center of all things
A lot of organizations have a central database.
The database is the equivalent of a large bucket into which all information is poured. The database is the center of the processing universe for these companies, with application programs relegated to the role of satellites orbiting the database.
The problem with this approach is that the applications are tied to the database schema. When you design applications around a central database, you bind them to that database's schema.
The result is a large collection of applications that all depend on the schema of the database. If you change the schema, you run the risk of breaking some or all of your applications. At minimum you must recompile and redeploy your applications; it may be necessary to redesign some of them.
Notice that this approach scales poorly. As you create new applications, your ability to change the database schema declines. (More specifically, the cost of changing the database schema increases.)
You can make some types of changes to the schema without affecting applications. You can add columns to tables, and you can add tables and views. You can add new entities without affecting existing applications, since they will not be using the new entities. But you cannot rename a table or column, or change the parameters to a stored procedure, without breaking the applications that use those elements. Your ability to change the schema depends on your knowledge of specific dependencies of applications on the database.
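The difference between additive and breaking schema changes can be demonstrated with a small example (hypothetical table and column names, using in-memory SQLite as a stand-in for the corporate database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Alice')")

legacy_query = "SELECT id, name FROM users"  # what an existing application runs

# Additive change: the old query is unaffected by the new column.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")
rows = conn.execute(legacy_query).fetchall()
assert rows == [(1, 'Alice')]

# Rename (simulated by rebuilding the table): the old query now breaks.
conn.executescript("""
    CREATE TABLE users_new (id INTEGER, full_name TEXT, email TEXT);
    INSERT INTO users_new SELECT id, name, email FROM users;
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;
""")
try:
    conn.execute(legacy_query)
    broke = False
except sqlite3.OperationalError:
    broke = True
assert broke  # "no such column: name"
```

The additive change costs the existing applications nothing; the rename costs every one of them a recompile, at minimum.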
Cloud computing may help this problem. Not because cloud computing has scalable processing or scalable database storage. Not because cloud computing uses virtualized servers. And not because cloud computing has neat brand names like "Elastic Compute Cloud" and "Azure".
Cloud computing helps the "database at the center of the universe" by changing the way people think of systems. With cloud computing, designers think in terms of services rather than physical entities. Instead of thinking of a single processor, cloud designers think of processing farms. Instead of thinking of a web server, designers think of web services. Instead of thinking of files, cloud designers think of message queues.
Cloud computing can help solve the problem of the central database by getting people to think of the database as data provided by services, not data defined by a schema. By thinking in terms of data services, application designers then build their applications to consume services. A service layer can map the exposed service to the private database schema. When the schema changes, the service layer can absorb the changes and the applications can remain unchanged.
Some changes to the database schema may bleed through the service layer. Some changes are too large to be absorbed. For those cases, the challenge becomes identifying the applications that use specific data services, a task that I think will be easier than identifying applications that use specific database tables.
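As a sketch of the service-layer idea: a thin mapping translates the exposed field names to the private column names, so when the schema changes, only the mapping changes and the consuming applications do not. All names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, full_name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")

# Mapping from exposed service fields to private schema columns.
# A schema rename means editing this dict, not every application.
FIELD_MAP = {"id": "id", "name": "full_name"}

def get_customer(customer_id):
    # Build "private_col AS exposed_field" pairs from the mapping.
    cols = ", ".join(f"{col} AS {field}" for field, col in FIELD_MAP.items())
    row = conn.execute(
        f"SELECT {cols} FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    return dict(zip(FIELD_MAP.keys(), row))

print(get_customer(1))  # {'id': 1, 'name': 'Alice'}
```

Consumers see a stable `name` field even though the private column is `full_name`; the mapping is the absorption layer described above.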
Sunday, April 15, 2012
Forced transitions can work
Technology changes over time. Manufacturers introduce new versions of products, and sometimes introduce radically new products. When a manufacturer introduces a radically new product and discontinues the old product, its customers must make a decision: do they move to the new product or do they stay with the old? This is a forced transition, as it is often impractical to stay with the old product. (New copies or licenses are not available, replacement parts are not available, and support is not available.)
Forced transitions can sometimes succeed:
- IBM transitioned customers from their early 704 and 1401 processors to the System/360 processors, and later the System/370 processors.
- DEC transitioned customers from the PDP-11 line to the VAX processor line.
- Microsoft transitioned customers from DOS to Windows, then to Windows NT, and then to .NET.
- Apple transitioned customers from the Macintosh computers with Motorola processors to PowerPC processors, then to Intel processors.
- Apple transitioned the Mac operating system from the original version to OSX.
Forced transitions do not always succeed:
- IBM failed to convince customers to move from the IBM PC to the IBM PS/2.
- DEC failed to convince customers to move from the VAX to the Alpha processor.
Now, Microsoft is looking to transition its desktop to the new model used by tablets and smartphones. (I call it "tap and swipe", since many of the actions are initiated by taps or swipes of the touchscreen.) Microsoft's vision is present in the Windows 8 "Metro" interface. The computing experience is quite different from classic Windows.
Will they succeed?
Microsoft has a lot going for it. They are big and have a commanding presence in the software market. Switching from Windows-based products to alternatives on other platforms is expensive, involving the acquisition of the software, conversion of data, and training of users. Specialized software may be unavailable on platforms other than Windows.
Microsoft also has a lot working against the transition. Users are familiar with the current Windows interface and the current tools. The Metro UI brings a very different experience to the desktop and to computing (well, it moves Windows into the realm of iPhones and Android tablets). There will be a lot of resistance to change.
I think Microsoft will succeed, because users have nowhere else to go. When IBM introduced the PS/2, users had the option of buying IBM PC clones -- and they exercised that option. When DEC introduced the Alpha processor, users had the option of moving to workstations from other vendors -- and they did.
The transition to Windows 8 and Metro forces people to adopt the new interface, and this time there is no comparable replacement for Windows. Changing to Mac OSX will lead to a similar GUI change (I expect future versions of OSX to look more and more like iOS). Changing to Linux creates significant challenges for education and software replacement.
I *do* expect that some shops will move away from Windows. If they have no software that is specific to Windows, if their software is readily available on other platforms, they could move to those other platforms. Some will move to Linux and the LibreOffice suite of tools. Others will move to web-based and cloud-based services like Google Docs and Zoho documents. But I expect these to be a small number of customers. The majority of customers will shift, perhaps unwillingly, to Windows 8.
Thursday, April 12, 2012
Scalable computing changes more than you may think
Cloud computing offers scalable processing and scalable data stores, and these are revolutionary changes in computing.
Prior to cloud computing, processing and data storage in a data processing facility were known, fixed entities. You knew how much CPU you had. You knew how much storage you had. (You had paid for them, or leased them, and the invoice clearly listed the provided equipment.)
The earliest mainframe computers were (physically) large, expensive, and slow. Due to their expense, people strove to extract the most value from them. They did this by scheduling jobs in such a way as to keep the processor busy most of the time. The thinking was that an idle processor was an indication of over-paying for computing equipment. (And that thinking was correct. An idle processor was returning no value on the investment.)
Jobs were designed to fit into the schedule. Rather than assuming unlimited processing and storage and designing your system to meet business needs, you assumed a finite amount of available processing and storage, and you designed your system first to fit within the constraints of the equipment and then (and only then) to meet the business needs. If the true business needs required an application larger than the available processing and storage, you adjusted your business to fit within the equipment constraints.
Minicomputers allowed departments (not corporate headquarters) to acquire computing equipment, but again it was a fixed resource. You knew how much equipment you had. You knew the limits. Applications had to fit within resources.
The PC revolutions (first DOS, and later, Windows) distributed equipment to people, but kept the model of fixed-resources. The programs became more numerous but were still constrained to the "minimum system requirements".
In all of these models, we had to design systems to fit within fixed resources. The practice of allocating scarce resources, and the assumption that resources are scarce, has become ingrained in our thinking. So ingrained that no one questions it. Many companies have project planning methods and practices that assume fixed resources.
Cloud computing changes the assumption of fixed resources.
The big advantage of cloud computing is scalability: processing and storage are available on demand. Yes, they cost, but the incremental costs for processors and gigabytes are low.
Cloud computing's scalability works both ways. You can "scale up" during a "rush" period (say, the end-of-year holidays). Once the rush is over, you can scale down, reducing your use of processing and storage, and reducing your cost of equipment.
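A toy scaling policy illustrates the two-way idea; the thresholds and limits here are made-up numbers, not a recommendation:

```python
# Threshold-based scaling sketch: grow during a rush, shrink when quiet,
# and stay within contractual bounds (min_servers, max_servers).
def desired_servers(current, avg_utilization, lo=0.30, hi=0.75,
                    min_servers=2, max_servers=20):
    if avg_utilization > hi:
        return min(current * 2, max_servers)   # rush period: scale up
    if avg_utilization < lo:
        return max(current // 2, min_servers)  # quiet period: scale down
    return current                             # steady state: no change

print(desired_servers(4, 0.90))  # 8  (holiday rush)
print(desired_servers(8, 0.10))  # 4  (rush is over)
```

Both directions matter: the scale-down branch is what moves the cost from the "fixed" column to the "variable" column.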
Previous hardware configurations did not have the ability to scale up and down. Mainframe computers were pretty much fixed, although upgrades were available. But upgrades were large, complex operations, involving the insertion (or replacement) of circuit boards and cables. Vendors made it (relatively) easy to upgrade and (relatively) hard to downgrade. And upgrades required advance planning -- they were certainly not available on demand.
PCs were better at upgrades than mainframes, in that upgrades were cheaper and could generally be completed within a few days.
In the days before cloud computing (when you purchased equipment or leased it long-term), scaling up meant acquiring hardware. Once acquired, it was hard to scale down. (Possible, but definitely not easy.) You could do less with the equipment you had, but you were still paying for the expanded hardware.
Cloud computing, with its abilities to scale up and scale down, and to do so on demand, changes the game. It's no longer necessary to fit within the currently available hardware configuration. If you have a new application and it requires more CPU, you can accommodate it quickly. You do not need to tear apart your server room, install new servers, and upgrade your power distribution units. And if the new application is a bit of a flop, you can discard it and shrink your hardware down to the original configuration.
We've seen small companies cope with overnight success by using cloud computing. (Prior to cloud computing, companies that became wildly successful and outstripped their ability to expand their hardware quickly became unsuccessful, as performance slowed and customers were unable to complete transactions.)
The mindset of slow, steady, predictable growth can be replaced with one of growth as customers demand. The new mindset can even accept reductions in demand because costs have shifted from the "fixed" column to the "variable" column.
More than that, the mindset of "design to fit within known hardware" can be replaced with "design to meet business needs". This may be a difficult change. The first step is to acknowledge the assumption.
Wednesday, April 4, 2012
The cloud is not invincible
We tend to think of cloud-based systems as more reliable than our current server-based applications. And they can be, if they are designed properly.
Cloud-based systems use designs that distribute work to multiple servers. Instead of a single web server, you have multiple web servers with some form of load balancing. Instead of a single database server, you have multiple database servers with some form of synchronization. At first glance, it may seem quite similar to a sophisticated system hosted on your in-house servers.
But cloud-based systems are different. Cloud systems have several assumptions:
- the system is distributed among multiple servers
- any layer can be served by multiple servers (there are no special, "magical" servers with unique data)
- any one server may go off-line at any moment
- the cloud can "spin up" a new instance of a server quickly
- requests between servers are queued and can be re-routed to other servers
As long as all of these assumptions hold, we have a reliable system.
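The re-routing assumption can be sketched as a minimal failover loop (the "replicas" here are hypothetical stand-ins for real servers):

```python
# Try each replica in turn; a down replica is skipped, not fatal.
def call_with_failover(replicas, request):
    for server in replicas:
        try:
            return server(request)
        except ConnectionError:
            continue  # this replica is off-line; re-route to the next one
    raise RuntimeError("all replicas unavailable")

# Stand-ins for a dead server and a healthy one.
def down(request): raise ConnectionError
def up(request): return f"handled: {request}"

print(call_with_failover([down, up], "ping"))  # handled: ping
```

The system stays reliable exactly as long as the loop has somewhere left to route; when every replica is down, the assumption (and the system) fails.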
Yet the cloud is not invincible. Here are a few ways to build a fragile system:
Require a service all the time (that is, fail if something is not available)
It is easy to fail due to a simple missing service. For example, a web app may have a home page with some information and some widgets. Let's say that one of the widgets is a weather display, showing the current temperature and weather conditions for the user. (We can assume that the widget is informed of the user's location, so that it can request the local weather from the general service.)
If the weather web service is down (that is, not responding, or responding with invalid data), what does your web application do? Does it skip over the weather information (and provide sensible HTML)? Or does it lock in a loop, waiting for a valid response? Or worse, does the page builder throw an exception and terminate?
This problem can occur with internal or external services. Any server can go off-line, any service can become unavailable. How does your system survive the loss of services?
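One defensive sketch, assuming a hypothetical weather endpoint: time-limit the request and degrade to an empty widget when the service misbehaves, rather than blocking or crashing the page build.

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

def weather_widget_html(location, timeout=2.0):
    """Return the weather widget HTML, or empty HTML if the service fails."""
    try:
        # weather.example.com is a hypothetical service, not a real API.
        with urlopen(f"https://weather.example.com/api?loc={location}",
                     timeout=timeout) as resp:
            data = json.load(resp)
        return (f"<div class='weather'>{data['temp_f']}&deg;F, "
                f"{data['conditions']}</div>")
    except (URLError, OSError, ValueError, KeyError):
        # Down, slow, or returning junk: skip the widget, keep the page.
        return ""
```

The page builder simply concatenates whatever the widget returns; a dead weather service costs the user one widget, not the whole page.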
Assume that the cloud will provide servers to meet demand
A big advantage of the cloud is scalability: you get more servers when you need them. And this is true, for the most part. While cloud infrastructure does "scale up" and "scale down" to meet your processing load, the processing power is not guaranteed.
For example, you may have a contract that allows for scaling up to a specified limit. (Such limits are put in place to ensure that the monthly bill will remain within some agreed-upon figure.) If your demand exceeds the contractual limits, your systems will be constrained and your customers may see poor performance.
What warnings will you get about nearing your processing limit? What warnings will your system provide when performance starts to degrade?
Assume that the cloud is the only thing that needs to scale
Even if the cloud infrastructure (servers) scales, does the network capacity? For the big cloud providers, the answer is yes. Does yours?
Designing for the cloud is different from designing for in-house web systems. But not that much different; there is a lot of overlap between them. Use your experience from your existing systems. Think about the problems that cloud computing solves. Learn which assumptions no longer hold. Those assumptions cut both ways: some eliminate old problems, and some introduce new risks.
You can build reliable cloud-based systems. But don't think that it happens "for free".
Tuesday, March 27, 2012
Programming language as amplifier, or not
Studies have shown that different programmers perform at different levels. Not all programmers are alike, not even programmers with the same job title. The difference between a really good programmer and a really poor programmer has been measured to be a factor of twenty-five!
What I have not seen is a study of programming languages and their role in these differences.
I believe some programming languages to be equalizers and other programming languages to be amplifiers. Some programming languages can make programmers better, or at least allow them to be more productive. Other programming languages limit them, bunching programmers together.
I noticed the difference in programming languages when I shifted from C to C++. The C++ language was more than a "better C" or even a "C with classes" arrangement. It allowed one to use more sophisticated constructs, to develop programs that were more complex. As some folks said, "with C you can shoot yourself in the foot, with C++ you now have a machine gun".
C++ is a powerful language, and good programmers can use it to good effect. Poor programmers, on the other hand, frequently end up with messy programs that are difficult to understand and maintain (often with defects, too).
C++ is an amplifying language: good programmers are better, poor programmers are worse.
But that does not hold for all languages.
FORTRAN and COBOL are equalizing languages. (That is, the early versions of these languages were equalizers.) They reduce the difference between good and poor programmers. The structure of the languages constrains both types of programmers and the code is pretty much the same, regardless of the programmer's skill. (Later versions of FORTRAN moved it closer to an amplifying language.)
Some other programming languages:
Assembly language is an amplifier. While the "trick" to good programming in assembly language is understanding the processor and the instruction set, assembly language programming is such that a good programmer is really good and a poor programmer has a very difficult time.
Pascal is an equalizer. It has enough "guardrails" in place to prevent a poor programmer from making a mess. Yet those same guardrails prevent a good programmer from truly excelling.
Perl is an amplifier. Python is an amplifier. Ruby is an amplifier.
Java and C# are equalizers, although they are shifting towards the amplifier end of the spectrum. Oracle changes Java and Microsoft changes C#, and the changes add features (such as lambdas) which become the machine guns for shooting yourself in the foot.
Viewed in the light of "amplifier" or "equalizer", one can assess programming languages for risk. An amplifying language can allow programmers to do wonderful things, but it also allows them to create a horrible mess. When using an amplifying language, you have to take steps to ensure the former and prevent the latter. An equalizing language, on the other hand, limits the possibility of mess (while also limiting the possibility of something wonderful).
But if you don't care about something wonderful, if you want to deliver a known quantity on a known schedule (and the quantity and schedule are reasonable), then an equalizing language is better for you. It allows you to hire not-so-great programmers, knowing that they will deliver something of reasonable quality.
If my reasoning is true, then we can expect small shops (especially start-ups) to use the amplifying languages. They have to, since they need above-average results. In contrast, large conservative shops will use equalizing languages. They will (most likely) be unwilling to hire top talent and will opt for mediocre (but available) personnel. They will also (most likely) be unwilling to educate and develop
Capabilities of the language are one factor among many. The C++ programming language become popular not because it was an equalizer (it's not) nor because it was an amplifier -- it became popular because it was the way to develop applications for Windows and Microsoft supplied good tools for it. That is no longer the case. The primary language for Windows applications is now C#. The primary language for iOS applications is Objective-C. The primary language for Android applications is Java. Yet programs for all of these platforms can be developed in other languages.
With today's multi-language market, expect companies to select the tool that suits their needs. Companies that need top-level performance will pick the amplifying languages. Companies that need certainty and want to avoid risk will pick the equalizing languages.
What I have not seen is a study of programming languages and their role in these differences.
I believe some programming languages to be equalizers and other programming languages to be amplifiers. Some programming languages can make programmers better, or at least allow them to be more productive. Other programming languages limit them, bunching programmers together.
I noticed the difference in programming languages when I shifted from C to C++. The C++ language was more than a "better C" or even a "C with classes" arrangement. It allowed one to use more sophisticated constructs, to develop programs that were more complex. As some folks said, "with C you can shoot yourself in the foot, with C++ you now have a machine gun".
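To make the "more than a better C" point concrete, here is a sketch of the same task written both ways: C-style code where the caller must remember to release the resource on every exit path, and a small C++ class whose destructor does the cleanup automatically (the RAII idiom). The `FileGuard` name is my own illustration, not something from the original post.

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>

// C style: the caller must remember to call fclose on every exit path.
long c_style_size(const char* path) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return -1;
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fclose(f);            // easy to forget on an early return
    return size;
}

// C++ style: a small RAII wrapper closes the file automatically,
// even if an exception is thrown after opening it.
class FileGuard {
public:
    explicit FileGuard(const char* path) : f_(std::fopen(path, "rb")) {
        if (!f_) throw std::runtime_error(std::string("cannot open ") + path);
    }
    ~FileGuard() { std::fclose(f_); }
    FILE* get() const { return f_; }
private:
    FILE* f_;
    FileGuard(const FileGuard&);             // non-copyable (pre-C++11 style)
    FileGuard& operator=(const FileGuard&);
};

long cpp_style_size(const char* path) {
    FileGuard g(path);                       // closed on every exit path
    std::fseek(g.get(), 0, SEEK_END);
    return std::ftell(g.get());
}
```

The same mechanism that makes the C++ version safer (constructors, destructors, exceptions) is also what a careless programmer can tangle into an unmaintainable mess, which is the amplifier effect in miniature.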
C++ is a powerful language, and good programmers can use it to good effect. Poor programmers, on the other hand, frequently end with messy programs that are difficult to understand and maintain (often with defects, too).
C++ is an amplifying language: good programmers are better, poor programmers are worse.
But that does not hold for all languages.
FORTRAN and COBOL are equalizing languages. (That is, the early versions of these languages were equalizers.) They reduce the difference between good and poor programmers. The structure of the languages constrains both types of programmers and the code is pretty much the same, regardless of the programmer's skill. (Later versions of FORTRAN moved it closer to an amplifying language.)
Some other programming languages:
Assembly language is an amplifier. While the "trick" to good programming in assembly language is understanding the processor and the instruction set, assembly language programming is such that a good programmer is really good and a poor programmer has a very difficult time.
Pascal is an equalizer. It has enough "guardrails" in place to prevent a poor programmer from making a mess. Yet those same guardrails prevent a good programmer from truly excelling.
Perl is an amplifier. Python is an amplifier. Ruby is an amplifier.
Java and C# are equalizers, although they are shifting towards the amplifier end of the spectrum. Sun changes Java and Microsoft changes C#, and the changes add features (such as lambdas) which become the machine guns for shooting yourself in the foot.
Viewed in the light of "amplifier" or "equalizer", one can assess programming languages for risk. An amplifying language can allow programmers to do wonderful things, but it also allows them to create a horrible mess. When using an amplifying language, you have to take steps to ensure the former and prevent the latter. An equalizing language, on the other hand, limits the possibility of mess (while also limiting the possibility of something wonderful).
But if you don't care about something wonderful, if you want to deliver a known quantity on a known schedule (and the quantity and schedule are reasonable), then an equalizing language is better for you. It allows you to hire not-so-great programmers, knowing that they will deliver something of reasonable quality.
If my reasoning is correct, then we can expect small shops (especially start-ups) to use the amplifying languages. They have to, since they need above-average results. In contrast, large conservative shops will use equalizing languages. They will (most likely) be unwilling to hire top talent and will opt for mediocre (but available) personnel. They will also (most likely) be unwilling to educate and develop their staff.
Capabilities of the language are one factor among many. The C++ programming language became popular not because it was an equalizer (it's not) nor because it was an amplifier -- it became popular because it was the way to develop applications for Windows and Microsoft supplied good tools for it. That is no longer the case. The primary language for Windows applications is now C#. The primary language for iOS applications is Objective-C. The primary language for Android applications is Java. Yet programs for all of these platforms can be developed in other languages.
With today's multi-language market, expect companies to select the tool that suits their needs. Companies that need top-level performance will pick the amplifying languages. Companies that need certainty and want to avoid risk will pick the equalizing languages.
Friday, March 23, 2012
The default solution
For decades, mainframes were the default solution to computing problems. When you needed something done, you did it on a mainframe, unless you had a compelling reason for a different platform.
For decades, IBM called the shots in the computer industry. The popularity of IBM hardware gave IBM the ability to strongly influence (some might say dictate) hardware and software standards. That power diminished with the rise of personal computers (ironically helped by the IBM PC). IBM ceded the control of software to Microsoft, first with DOS and later with Windows.
For decades, PCs were the default solution to computing problems. When you needed something done, you did it on a PC, unless you had a compelling reason for a different platform.
For decades, Microsoft called the shots. The popularity of Windows and Office gave Microsoft the ability to strongly influence (some might say dictate) hardware and software standards. That power diminished with the rise of hand-held computers (specifically iPods and iPhones). Microsoft ceded the market to Apple, after several failed attempts at moving Windows to hand-sized devices.
Now, smartphones and tablets are the default solution to computing problems. When you need something done, you do it on a smartphone or tablet, unless you have a compelling reason for a different platform.
The popular platforms are the default solutions, and the company with the dominant platform can set the standards and the direction of the technology. Notice that it is the popular platform that defines the default solution, not the most cost-effective or the most reliable. The default solution is defined by the market, specifically what customers are buying. It is not a democracy, but neither is it an inherited rank. A company has a leadership role because the market gives that company the role.
And the market can take away that role.
The change in the market from mainframe to PC was an expansion, not a revolution.
The events that unseated IBM were not market revolutions, in which one competitor replaced another. IBM, the mainframe manufacturer, was not ousted by another mainframe manufacturer. IBM defended itself against competitors, but failed to expand to new markets.
The PC revolution expanded the market. (It may have killed dedicated word processing systems, but overall it expanded the market.) The new market of word processing software, spreadsheets, and even primitive databases was something that IBM did not pursue with mainframes. It is possible that IBM was unable to pursue that market, as the PCs were small, inexpensive, and purchased by people who did not have a squadron of lawyers to review purchase and support contracts.
The market expanded but mainframes stayed constant, and that allowed PCs to become the default solution.
We have a similar situation with PCs and tablets.
The smartphone revolution (along with tablets) is expanding the market. The new market of location-aware apps, easy-to-install apps, and touchscreen interfaces is a market that Microsoft is only now beginning to pursue with Windows 8 and the Metro UI, and this effort is by no means guaranteed to succeed. (Many long-time supporters of Microsoft are grumbling at Windows 8.)
The market is expanding and PCs are mostly staying constant. That allows smartphones to become the default solution.
But PCs are not simply sitting still. PCs, and more specifically, PC operating systems, are adopting the ideas of the smartphone market. Microsoft's Windows 8 is the most prominent example of this effect, with its new GUI and the new Microsoft Windows App Store. Apple's "Lion" release of OS X brings it closer to smartphone operating systems. Some Linux distributions are morphing their user interfaces into something closer to smartphones and are simplifying their package managers.
In the end, I think PCs will have a limited role. Data centers have never been fond of the tower-style units, preferring rack-mounted servers and now preferring virtual PCs running on mainframes, of all things! Home users will find smartphones and tablets less expensive, easier to use, and good enough to get the job done. Corporate users are the last bastion of PCs, and even they are looking at smartphones and tablets in the "Bring Your Own Device" movement.
PCs won't die out. Some tasks are handled better on PCs than on tablets. (Just as some tasks are handled better on mainframes than on PCs, even today.) Some people will keep them because they are "tried and true" solutions; others will be unwilling to move to different platforms. Hobbyists will keep them out of nostalgia.
But they won't be the default solution.