There is an interesting psychological difference between "real" computers (mainframes, servers, and PCs) and the smartphones and tablets we use for mobile computing.
In short, "real" computers are temples of worship, and mobile computers are servants.
Mainframe computers have long attracted the metaphor of religion, with their attendants described as "high priests". Personal computers have not seen such comparisons, but I think the "temple" metaphor holds. (Or perhaps we should say "shrine".)
"Real" computers are non-mobile. They are fixed in place. When we use a computer, we go to the computer. The one exception is laptops, which we can consider to be a portable shrine.
Mobile computers, in contrast, come with us. We do not go to them; they are nearby and ready for our requests.
Tablets and smartphones are intimate. They come with us to the grocery store, the exercise club, and the library. Mainframes of course do not come with us anywhere, and personal computers stay at home. Laptops occasionally come with us, but only with significant effort. (Carry bag, laptop, power adapter and cable, extra VGA cable, VGA-DVI adapter, and goodness-knows-what.)
It's nice to visit a temple, but it's nicer to have a ready and capable servant.
Wednesday, May 8, 2013
Monday, May 6, 2013
A Risk of Big Data: Armchair Statisticians
In the mid-1980s, laser printers became affordable, word processor software became more capable, and many people found that they were able to publish their own documents. They proceeded to do so. Some showed restraint in the use of fonts; others created documents that were garish.
In the mid-1990s, web pages became affordable, web page design software became more capable, and many people found that they were able to create their own web sites. They proceeded to do so. Some showed restraint in the use of fonts, colors, and the blink tag; others created web sites that were hideous.
In the mid-2010s, storage became cheap, data became collectable, analysis tools became capable, and I suspect many people will find that they are able to collect and analyze large quantities of data. I further predict that many will do so. Some will show restraint in their analyses; others will collect some (almost) random data and create results that are less than correct.
The biggest risk of Big Data may be the amateur. Professional statisticians understand the data, understand the methods used to analyze the data, and understand the limits of those analyses. Armchair statisticians know enough to analyze the data but not enough to criticize the analysis. This is a problem, because it is easy to misinterpret the results.
Typical errors are:
- Omitting relevant data (or including irrelevant data) due to incorrect "select" operations.
- Identifying correlation as causation. (In an economic downturn, the unemployment rate increases, as do the payments for unemployment insurance. But the UI payments do not cause the unemployment rate; both are driven by the economy.)
- Identifying the reverse of a causal relationship. (Umbrellas do not cause rain.)
- Improper summary operations. (Such as calculating an average of a quantized value like processor speed. You most likely want either the median or the mode.)
It is easy to make these errors, which is why professionals take such pains to evaluate their work. Note that none of these errors are obvious in the results.
When the cost of performing these analyses was high, only the professionals could play. The cost of such analyses is dropping, which means that amateurs can play. And their results will look (at first glance) just as pretty as the professionals'.
In desktop publishing and web page design, it was easy to separate the professionals from the amateurs. The visual aspects of the finished product were obvious.
With big data, it is hard to separate the two. The visual aspects of the final product do not show the workmanship of the analysis. (They show the workmanship of the presentation tool.)
Be prepared for the coming flood of presentations. And be prepared to ask some hard questions about the data and the analyses. It is the only way you will be able to tell the wheat from the chaff.
Labels:
big data,
data analysis,
desktop publishing,
web page design
Thursday, May 2, 2013
Our fickleness on the important aspects of programs
Over time, we have changed the attributes we desire in programs. If we divide the IT age into four eras -- mainframe, PC, web, and mobile/cloud -- we can see this change. These four eras used different technology and different languages, and praised different accomplishments.
In the mainframe era, we focussed on raw efficiency. We measured CPU usage, memory usage, and disk usage. We strove to have enough CPU, memory, and disk, with some to spare but not too much. Hardware was expensive, and too much spare capacity meant that you were paying for more than you needed.
In the PC era we focussed not on efficiency but on user-friendliness. We built applications with help screens and menus. We didn't care too much about efficiency -- many people left PCs powered on overnight, with no "jobs" running.
With web applications, we focussed on globalization, with efficiency as a sub-goal. The big effort was in the delivery of an application to a large quantity of users. This meant translation into multiple languages, the "internationalization" of an application, support for multiple browsers, and support for multiple time zones. But we didn't want to overload our servers, either, so early Perl CGI applications were quickly converted to C or other languages for performance.
With applications for mobile/cloud, we desire two aspects: For mobile apps (that is, the 'UI' portion), we want something easier than "user-friendly". The operation of an app must not merely be simple, it must be obvious. For cloud apps (that is, the server portion), we want scalability. An app must not be monolithic, but assembled from collaborative components.
The objectives for systems vary from era to era. Performance was a highly measured aspect in the mainframe era, and almost ignored in the PC era.
The shift from one era to another may be difficult for practitioners. Programmers in one era may be trained to "optimize" their code for the dominant aspect. (In the mainframe era, they would optimize for performance.) A succeeding era would demand other aspects in their systems, and programmers may not be aware of the change. Thus, a highly-praised mainframe programmer with excellent skills at algorithm design, when transferred to a PC project, may find that his skills are not desired or recognized. His code may receive a poor review, since the expectation for PC systems is "user friendly" and his skills from mainframe programming do not provide that aspect.
Similarly, a skilled PC programmer may have difficulties when moving to web or mobile/cloud systems. The expectations for user interface, architecture, and efficiency are quite different.
Practitioners who start with a later era (for example, the 'young turks' starting with mobile/cloud) may find it difficult to comprehend the reasoning of programmers from an earlier era. Why do mainframe programmers care about the order of mathematical operations? Why do PC programmers care so much about in-memory data structures, to the point of writing their own?
The answers are that, at the time, these were important aspects of programs. They were pounded into the programmers of earlier eras, to such a degree that those programmers now design their code without consciously thinking about these optimizations.
Experienced programmers must look at the new system designs and the context of those designs. Mobile/cloud needs scalability, and therefore needs collaborative components. The monolithic designs that optimized memory usage are unsuitable to the new environment. Experienced programmers must recognize their learned biases and discard those that are not useful in the new era. (Perhaps we can consider this a problem of cache invalidation.)
Younger programmers would benefit from a deeper understanding of the earlier eras. Art students study the conditions (and politics) of the old masters. Architects study the buildings of the Greeks, Romans, and medieval kingdoms. Programmers familiar with the latest era, and only the latest era, will have a difficult time communicating with programmers of earlier eras.
Each era has objectives and constraints. Learn about those objectives and constraints, and you will find a deeper appreciation of programs and a greater ability to communicate with other programmers.
Sunday, April 28, 2013
C++ without source (cpp) files
A thought experiment: can we have C++ programs without source files (that is, without .cpp files)?
The typical C++ program consists of header files (.h) and source files (.cpp). The header files provide definitions for classes, and the source files provide the implementations.
Yet the C++ language allows one to define function implementations in header files. We typically see this only for short functions. To wit:
random_file.h
class random_class
{
private:
    int foo_;
public:
    random_class( int foo ) : foo_(foo) { }
    int foo( void ) const { return foo_; }
};
This code defines a small class that holds a single value and has no methods other than an accessor. The sole member variable is initialized in the constructor.
Here's my idea: Using the concepts of functional programming (namely immutable variables that are initialized in the constructor), one can define a class as a constructor and a bunch of read-only accessors.
If we keep class size to a minimum, we can define all classes in header files. The constructors are simple, and the accessor functions simply return calculated values. There is no need for long methods.
(Yes, we could define long functions in headers, but that seems to be cheating. We allow short functions in headers and exile long functions into .cpp files.)
Such a design is, I think, possible, although perhaps impractical. It may be similar to the chemists' "perfect gas", an abstraction that is nice to conceive but unseen in the real world.
Yet a "perfect gas" of a class (perhaps a "perfect class") may be possible for some classes in a program. Those perfect classes would be small, with few member variables and only accessor functions. Its values would be immutable. The member variables may be objects of smaller classes (perhaps perfect classes) with immutable values of their own.
This may be a way to improve code quality. My experience shows that immutable objects are much easier to code, to use, and to debug. If we build simple immutable classes, then we can code them in header files and we can discard the source files.
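To make the idea concrete, here is a sketch of two such "perfect classes". The names (point, rectangle) are hypothetical, chosen only for illustration; everything lives in headers, all members are set in the constructor, and every method is a const accessor:

```cpp
// point.h -- a "perfect class": immutable, defined entirely in a header.
class point
{
private:
    const int x_;
    const int y_;
public:
    point( int x, int y ) : x_(x), y_(y) { }
    int x( void ) const { return x_; }
    int y( void ) const { return y_; }
};

// rectangle.h -- composed of smaller immutable objects, itself immutable.
class rectangle
{
private:
    const point corner_;
    const point size_;
public:
    rectangle( point corner, point size ) : corner_(corner), size_(size) { }
    point corner( void ) const { return corner_; }
    int area( void ) const { return size_.x() * size_.y(); }
};

// A rectangle is built once and never changes; accessors return
// calculated values. There is nothing left to put in a .cpp file.
```

Note that area() is computed on demand from immutable members, so even "behavior" stays short enough to live comfortably in the header.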
Coding without source files -- now there is an idea for the future.
Saturday, April 27, 2013
"Not invented here" works poorly with cloud services
This week, colleagues were discussing the "track changes" feature of Microsoft Word. They are building an automated system that uses Word at its core, and they were encountering problems with the "track changes" feature.
This problem led me to think about system design.
Microsoft Word, while it has a COM-based engine and a separate UI layer, is a single, all-in-one solution for word processing. Every function that you need (or that Microsoft thinks you need) is included in the package.
This design has advantages. Once Word is installed, you have access to every feature. All of the features work together. A new version of Word upgrades all of the features -- none are left behind.
Yet this design is a form of "not invented here". Microsoft supplies the user interface, the spell-check engine and dictionary, the "track changes" feature, and everything else. Even when there were other solutions available, Microsoft built their own. (Or bought an existing solution and welded it into Word.)
Word's design is also closed. One cannot, for example, replace Microsoft's spell-checker with another one. Nor can one replace the "track changes" feature with your own version control system. You are stuck with the entire package.
This philosophy worked for desktop PC software. It works poorly with cloud computing.
In cloud computing, every feature in your system is a service. Instead of a monolithic program, a system is a collection of services, each providing some small amount of well-defined processing. Cloud computing needs this design to scale to larger workloads; you can add more servers for the services that see more demand.
With a system built of services, you must decide on the visibility of those services. Are they open to all? Closed to only your processes? Or do you allow a limited set of users (perhaps subscribers) to use them?
Others must make this decision too. The US Postal Service may provide services for address validation, separate and independent from mailing letters. Thus companies like UPS and FedEx may choose to use those services rather than build their own.
Some companies already do this. Twitter provides information via its API. Lots of start-ups provide information and data.
Existing companies and organizations provide data, or will do so in the future. The government agency NOAA may provide weather information. The New York Stock Exchange may provide stock prices and trade information (again, perhaps only to subscribers). Banks may provide loan payment calculations.
You can choose to build a system in the cloud with only your data and services. Or you can choose to use data and services provided by others. Both have advantages (and risks).
But the automatic reflex of "not invented here" has no place in cloud system design. Evaluate your options and weigh the benefits.
Wednesday, April 24, 2013
Perhaps history should be taught backwards
A recent trip to the local computer museum (a decent place with mechanical computation equipment, various microcomputers and PCs, a DEC PDP-8/m and PDP-12, and a Univac 460) gave me time to think about our techniques for teaching history.
The curator gave us a tour, and he started with the oldest computing devices in his collection. Those devices included abaci, Napier's Bones, a slide rule, and electro-mechanical calculators. We then progressed forward in time, looking at the Univac and punch cards, the PDP-8 and PDP-12 and Teletype terminals, then the Apple II and Radio Shack TRS-80 microcomputers, and ended with modern-day PCs and tablets.
The tour was a nice, orderly progression through time.
And perhaps the wrong sequence.
A number of our group were members of the younger set. They had worked with smart phones and iPads and PCs, but nothing earlier than that. I suspect that for them, the early parts of the tour -- the early computation technologies -- were difficult.
Computing technologies have changed over time. Even the concept of computing has changed. Early devices (abaci, slide rules, and even hand-held calculators) were used to perform mathematical operations; the person knew the theory and overall purpose of the computations.
Today, we use computing devices for many purposes, and the underlying computations are distant (and hidden) from the user. Our purposes are not merely word processing and spreadsheets, but web pages, Twitter feeds, and games. (And blog posts.)
The technologies we use to calculate are different: today's integrated circuits do the work of the 1960s' large discrete electronics, which did the work of lots of wheels and cogs in a mechanical calculator. Even storage has changed: today's flash RAM holds data; in the 1970s it was core memory; in the 1950s it was mercury delay lines.
The changes in technology were mostly gradual with a few large jumps. Yet the technologies of today are sufficiently different from the early technologies that one is not recognizable from the other. Moving from today to the beginning requires a big jump in understanding.
Which is why I question the sequence of history. For computing technology, starting at the beginning requires a good understanding of the existing technology and techniques, and even then it is hard to see how an abacus or slide rule relates to today's smart phone.
Perhaps we should move in the reverse direction. Perhaps we should start with today's technology, on the assumption that people know about it, and work backwards. We can move to slightly older systems and compare them to today's technology. Then repeat the process, moving back into the past.
Consider storage. Showing someone a punch card means very little (unless they know the history). But leading that same person backwards through the technology (SD RAM chips, USB memory sticks, CD-ROMs, floppy disks, early floppy disks, magnetic drums, magnetic tape, paper tape, and eventually punch cards) might give that person an easier time. For someone who does not know the tech, studying something close to current technology (CD-ROMs) and then learning about the previous tech (floppy disks) might be better. It avoids the big leap into the past.
Sunday, April 21, 2013
The post-PC era is about coolness or lack thereof
Some have pointed to the popularity of tablets as the indicator for the imminent demise of the PC. I look at the "post PC" era not as the death of the PC, but as something worse: PCs have become boring.
Looking back, we can see that PCs started with lots of excitement and enthusiasm, yet that excitement has diminished over time.
First, consider hardware:
- Microcomputers were cool (even with just a front panel and a tape drive for storage)
- ASCII terminals were clearly better than front panels
- Storing data on floppy disks was clearly better than storing data on tape
- Hard drives were better than floppy disks
- Color monitors were better than monochrome displays
- High resolution color monitors were better than low resolution color monitors
- Flat panel monitors were better than CRT monitors
These were exciting improvements. These changes were *cool*. But the coolness factor has evaporated. Consider these new technologies:
- LED monitors are better than LCD monitors, if you're tracking power consumption
- Solid state drives are better than hard drives, if you look hard enough
- Processors after the original Pentium are nice, but not excitingly nice
Consider operating systems:
- CP/M was exciting (as anyone who ran it could tell you)
- MS-DOS was clearly better than CP/M
- Windows 3.1 on DOS was clearly better than plain MS-DOS
- Windows 95 was clearly better than Windows 3.1
- Windows NT (or 2000) was clearly better than Windows 95 (or 98, or ME)
But the coolness factor declined with Windows XP and its successors:
- Windows XP was *nice* but not *cool*
- Windows Vista was not clearly better than Windows XP -- and many have argued that it was worse
- Windows 7 was better than Windows Vista, in that it fixed problems
- Windows 8 is (for most people) not cool
The loss of coolness is not limited to Microsoft. A similar effect happened with Apple's operating systems:
- DOS (Apple's DOS for Apple ][ computers) was cool
- MacOS was clearly better than DOS
- MacOS 9 was clearly better than MacOS 8
- Mac OSX was clearly better than MacOS 9
But the Mac OSX versions have not been clearly better than their predecessors. They have some nice features, but the improvements are small, and a significant number of people might say that the latest OSX is not better than the prior version.
The problem for PCs (including Apple Macintosh PCs) is the loss of coolness. Tablets are cool; PCs are boring. The "arc of coolness" for PCs saw its greatest rise in the 1980s and 1990s, a moderate rise in the 2000s, and now sees decline.
This is the meaning of the "post PC era". It's not that we give up PCs. It's that PCs become dull and routine. PC applications become dull and routine.
It also means that there will be few new things developed for PCs. In a sense, this happened long ago, with the development of the web. Then, the Cool New Things were developed to run on servers and in browsers. Now, the Cool New Things will be developed for the mobile/cloud platform.
So don't expect PCs and existing PC applications to vanish. They will remain; it is too expensive to re-build them on the mobile/cloud platform.
But don't expect new PC applications.
Welcome to the post-PC era.
Labels:
Apple DOS,
coolness,
CP/M,
Mac OSX,
MacOS,
Microsoft Windows,
mobile/cloud,
MS-DOS,
post-PC era,
web applications