We seem to repeat lessons of technology.
The latest lesson is one from the 1980s: The PC revolution. Personal computers introduced the notion of smaller, numerous computers. Previously, the notion of computers revolved around mainframe computers: large, centralized, and expensive. (I'm ignoring minicomputers, which were smaller, less centralized, and less expensive.)
The PC revolution was less a change from mainframes to PCs and more a change in mindset. The revolution made the notion of small computers a reasonable one. After PCs arrived, the "space" of computing expanded to include mainframes and PCs. Small computers were considered legitimate.
That lesson -- that computing can come in small packages as well as large ones -- can be applied to cloud data centers. The big cloud providers (Amazon.com, Microsoft, IBM, etc.) have built large data centers. And large is an apt description: enormous buildings containing racks and racks of servers, power distribution units, air conditioning... and more. The facilities vary between the players: the hypervisors, operating systems, and administration systems all differ. But the one factor they have in common is that they are all large.
I'm not sure that data centers have to be large. They certainly don't have to be monolithic. Cloud providers maintain multiple centers ("regions", "zones", "service areas") to provide redundancy in the event of physical disasters. But aside from the issue of redundancy, it seems that the big cloud providers are thinking in mainframe terms. They build large, centralized, (and expensive) data centers.
Large, centralized mainframe computers make sense for large, centralized mainframe programs.
Cloud systems are different from mainframe programs. They are not large, centralized programs. A properly designed cloud system consists of small, distinct programs tied together by data stores and message queues. A cloud system becomes big by scaling -- by increasing the number of copies of web servers and applications -- and not by growing a single program or single database.
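The scale-out idea above can be sketched in a few lines. This is a toy, single-machine stand-in (the queue and the doubling "work" are invented for illustration), but it shows the shape: capacity grows by adding more copies of a small worker, not by enlarging one program.

```python
import queue
import threading

# Shared message queue ties the small, distinct workers together.
tasks = queue.Queue()
results = queue.Queue()

def worker():
    """A small, self-contained unit of the system."""
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut this instance down
            break
        results.put(item * 2)     # stand-in for real work
        tasks.task_done()

# "Scaling" the system means starting more instances of the same worker.
workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for n in range(10):
    tasks.put(n)
tasks.join()                      # wait for all work to drain

for _ in workers:                 # one sentinel per instance
    tasks.put(None)
for w in workers:
    w.join()

print(sorted(results.queue))      # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

In a real cloud system the queue would be a managed service and the workers would be separate processes or servers, but the design principle is the same.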
A large cloud system can exist on a cloud platform that lives in one large data center. For critical systems, we want redundancy, so we arrange for multiple data centers. This is easy with cloud systems, as the system can expand by creating new instances of servers, not necessarily in the same data center.
A large cloud system doesn't need a single large data center, though. A large cloud system, with its many instances of small servers, can just as easily live in a set of small data centers (provided that there are enough servers to host the virtual servers).
I think we're in for an expansion of mindset, the same expansion that we saw with personal computers. Cloud providers will supplement their large data centers with small- and medium-sized ones.
I'm ignoring two aspects here. One is communications: network transfers are faster within a single data center than across multiple centers. But how many applications are that sensitive to latency? The other aspect is the efficiency of smaller data centers. It is probably cheaper, on a per-server basis, to build large data centers. Small data centers will have to take advantage of something, like an existing small building that requires no major construction.
Cloud systems, even large cloud systems, don't need large data centers.
Sunday, June 21, 2015
Sunday, June 14, 2015
Data services are more flexible than files
Data services provide data. So do files. But the two are very different.
In the classic PC world ("classic" meaning desktop applications), the primary storage mechanism is the file. A file is, at its core, a bunch of bytes. Not just a random collection of bytes, but a meaningful collection. That collection could be a text file, a document, a spreadsheet, or any one of a number of possibilities.
In the cloud world, the primary storage mechanism is the data service. That could be an SQL database, a NoSQL database, or a web service (a data service). A data service provides a collection of values, not a collection of bytes.
Data services are active things. They can perform operations. A data service is much like a query in an SQL database. (One may think of SQL as a data service, if one likes.) You can specify a subset of the data (either columns or rows, or both), the sequence in which the data appears (again, either columns or rows, or both), and the format of the data. For sophisticated services, you can collect data from multiple sources.
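As a concrete sketch of that idea, here is a data-service-style query using Python's built-in sqlite3 module. The schema and data are invented for illustration; the point is that the consumer names the subset, sequence, and shape of the data, and the service does the work.

```python
import sqlite3

# Build a tiny in-memory "data service" (hypothetical schema for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 250.0), (2, "west", 125.5), (3, "east", 80.0)],
)

# The consumer asks only for what it needs: a subset of columns and rows,
# in a chosen sequence -- the filtering and sorting happen in the service.
rows = conn.execute(
    "SELECT id, amount FROM orders WHERE region = ? ORDER BY amount DESC",
    ("east",),
).fetchall()

print(rows)  # [(1, 250.0), (3, 80.0)]
```

Contrast this with a file: a file would hand back every row and every column, in whatever order it was written, and leave the rest to the caller.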
Data services are much more flexible and powerful than files.
But that's not what is interesting about data services.
What is interesting about data services is the mindset of the programmer.
When a programmer is working with data files, he must think about what he needs, what is in the file, and how to extract what he needs from the file. The file may have extra data (unwanted data rows, or perhaps undesired headings and footings). The file may have extra columns of data. The data may be in a sequence different from the desired sequence. The data may be in a format that is different from what is needed.
The programmer must compensate for all of these things, and write code to handle the unwanted data or the improper formats. Working with files means writing code to match the file.
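The compensating code looks something like this. The file layout here is hypothetical (a title line, a header row, an extra column, a footer), but it shows the kind of skip-and-convert logic that file-based programs accumulate:

```python
# Hypothetical raw file: a title line, a header row, an unwanted column,
# text-formatted numbers, and a footer -- all of which the code must handle.
raw = """REPORT 2015-06-14
id,region,amount,notes
1,east,250.00,rush
2,west,125.50,
TOTAL,,375.50,
"""

records = []
for line in raw.splitlines():
    fields = line.split(",")
    if len(fields) != 4 or not fields[0].isdigit():
        continue                  # skip the title, header, and footer lines
    # Keep only the columns we need, converting text to numbers.
    records.append((int(fields[0]), float(fields[2])))

print(records)  # [(1, 250.0), (2, 125.5)]
```

Every one of those checks and conversions is code written to match the file, not code that advances the actual task.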
In contrast, data services -- well-designed data services -- can format the data, filter the data, and clean the data for the programmer. Data services have capabilities that files do not; they are active and can perform operations.
A programmer using files must think "what does the file provide, and how can I convert it to what I need?"; a programmer using data services thinks "what do I need?".
With data services, the programmer can think less about what is available and think more about what has to be done with the data. If you're a programmer or a manager, you understand how this change makes programmers more efficient.
If you're writing code or managing projects, think about data services. Even outside of the cloud, data services can reduce the programming effort.
Monday, June 8, 2015
OS X is special enough to get names
Apple has several product lines, and two operating systems: Mac OS and iOS. (Or perhaps three, with Apple Watch OS being the third.) The operating systems have different origins, different designs, and, interestingly, different marketing. Releases of iOS and WatchOS are numbered; releases of Mac OS are named.
Why this distinction? Why should releases of Mac OS be graced with names ("Panther", "Tiger", "Mavericks", "El Capitan") and other operating systems limited to plain numbers?
The assignment of names to Mac OS is a marketing decision. Apple clearly believes that the users of Mac OS want names, while the users of iOS and WatchOS do not. They may be right.
The typical user of iOS and WatchOS is an ordinary, non-technical person. Apple iPads and iPhones and Watches are made for "normal", non-technical individuals.
The typical user of Mac OS, on the other hand, is not a normal, non-technical individual. At least, not in Apple's eyes. Apple may be right on this point. I have seen programmers, web designers, and developers carry Apple MacBooks. Software conferences are full of software geeks, and many carry Apple MacBooks. (Only a few carry iPads.)
If this is true, then the audience for Mac OS is different from the audience for iOS. And if that is true, then Apple has an incentive to keep Mac OS separate from iOS. So we may see separate paths (and features) for Mac OS and iOS (and WatchOS).
When Apple releases a version of Mac OS without a cute code name, then we can assume that Apple is getting ready to merge Mac OS into iOS.
Sunday, June 7, 2015
Code quality doesn't matter today
In the 1990s, people cared about code quality. We held code reviews and developed metrics to measure code. We debated the different methods of measuring code: lines of code, cyclomatic complexity, function points, and more. Today, there is little interest in code metrics, or in code quality.
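To make "measuring code" concrete: one of the metrics mentioned above, cyclomatic complexity, is roughly one plus the number of decision points in a function. Here is a crude sketch using Python's ast module -- a toy estimate, not a substitute for a real metrics tool:

```python
import ast

def crude_cyclomatic(source: str) -> int:
    """Rough cyclomatic-complexity estimate: 1 plus the number of
    branching constructs found in the parsed source."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return 1 + sum(isinstance(n, decision_nodes) for n in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""
print(crude_cyclomatic(code))  # 4: one baseline plus two ifs and a for
```

The 1990s debates were about which of these numbers best predicted maintainability; the point here is only that the measurements were routine.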
I have several possible explanations.
Agile methods Specifically, people believe that agile methods provide high-quality code (and therefore there is no need to measure it). This is possible; most advocates of agile tout the reduction in defects, and many people equate the lack of defects with high quality. But while re-factoring occurs (or should occur) in agile methods, it doesn't guarantee high quality. Without measurements, how do we know?
Managers don't care More specifically, managers are focused on other aspects of the development process. They care more about short-term cost, features, or cloud management.
Managers see little value in code It is possible that managers think that code is a temporary thing, something that must be constantly re-written. If it has a short expected life, there is little incentive to build quality code.
I have one more idea:
We don't know what makes good code good In the 1990s and 2000s, we built code in C++, Java, and later, C#. Those languages are designed on object-oriented principles, and we know what makes good code good. We know it so well that we can build tools to measure that code. The concept of "goodness" is well understood.
We've moved to other languages. Today we build systems in Python, Ruby, and JavaScript. These languages are more dynamic than C++, C#, and Java. Goodness in these languages is elusive. What is "good" JavaScript? What designs are good for Ruby? or Python? Many times, programming concepts are good in a specific context and not-so-good in a different context. Evaluating the goodness of a program requires more than just the code, it requires knowledge of the business problem.
So it is possible that we've advanced our programming languages to the point that we cannot evaluate the quality of our programs, at least temporarily. I have no doubt that code metrics and code quality will return.
Monday, June 1, 2015
Waterfall or Agile -- or both
A lot has been written (and argued) about waterfall methods and agile methods. Each has advocates. Each has detractors. I take a different path: they are two techniques for managing projects, and you may want to use both.
Waterfall, the traditional method of analysis, design, coding, testing, and deployment, makes the promise of a specific deliverable at a specific time (and at a specific cost). Agile, the "young upstart," promises frequent deliverables that function -- although only with what has been coded, not necessarily everything you may want.
Waterfall and agile operate in different ways and work in different situations. Waterfall works well when you and your team know the technology, the tools, the existing code, and the business rules. Agile works when you and your team are exploring new areas (technology, code, business rules, or a combination). Agile provides the flexibility to change direction quickly, where waterfall locks you in to a plan.
Waterfall does not work when there are unknowns. A new technology, for example. Or a new team looking at an existing code base. Or perhaps significant changes to business rules (where "significant" may be smaller than you think). Waterfall's approach of defining everything up front cannot handle the uncertainties, and its schedules are likely to fail.
If your shop has been building web applications and you decide to switch to mobile apps, you have a lot of uncertainties. New technologies, new designs for applications, and changes to existing web services are required. You may be unable to list all of the tasks for the project, much less assign reasonable estimates for resources and time. If your inputs are uncertain, how can the resulting plan be anything but uncertain?
In that situation, it is better to use agile methods to learn the new technologies. Complete some small projects, perhaps for internal use, that use the tools for mobile development. Gain experience with them. Learn the hazards and understand the risks.
When you have experience, use waterfall to plan your projects. With experience behind you, your estimates will be better.
You don't have to use waterfall or agile exclusively. Some projects (perhaps many) require some exploration and research. That surveying is best done with agile methods. Once the knowledge is learned, once the team is familiar with the technology and the code, a waterfall project makes good business sense. (You have to deliver on time, don't you?)
As a manager, you have two tools to plan and manage projects. Use them effectively.
Thursday, May 28, 2015
Windows needs easier upgrades
Microsoft, after years of dominance in the market, now faces competition. That competition, in the form of Apple's OS X and in the form of Linux, forces Microsoft to make some changes.
One area for change is the update process for Windows. Microsoft needs to improve their game in this area.
I have several PCs; three of them run Windows. A relatively modern desktop runs Windows 8.1, a slightly older laptop runs Windows 7, and an ancient tower unit runs Windows XP. They all started with those same versions of Windows (except for the modern desktop which started with Windows 8 and was later upgraded to Windows 8.1).
In addition to the PCs running Windows, I have several PCs running Ubuntu Linux: two laptops running the "desktop" version and three tower PCs running the "server" version.
Ubuntu Linux provides new versions every six months. They have gotten quite good at it. Each April and October, new versions are released. Each April and October, my Ubuntu systems display messages indicating that new versions are available. The server versions, which use a command-line interface, display a simple message at sign-on, along with the command to download and install the new version. The desktop versions, which use a graphic interface, display a dialog with a button that says, roughly, "upgrade now".
Ubuntu makes it easy to upgrade. The current system informs me of the upgrade and provides the instructions to install it. The process is simple: download the new package, install it, and re-start the computer. (It is the only time I have to re-start Linux.)
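For the record, the whole flow described above amounts to a couple of commands (a sketch; exact prompts and behavior vary by release):

```shell
# Server edition: the sign-on message points at this command,
# which downloads the new release and walks through the install.
sudo do-release-upgrade

# Desktop edition: the "upgrade now" dialog runs the equivalent of:
update-manager -c    # -c: check whether a new distribution release exists
```

One command, a confirmation or two, and a restart -- that is the bar Microsoft is being measured against.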
Windows, in contrast, offers no such support. While the Windows 8 system did download and install the Windows 8.1 update, the Windows 7 machine has said nothing about an upgrade for Windows 8. And the Windows XP machine hums along quietly, too, mentioning nothing about upgrades. (To be fair, the hardware in that ancient PC is not sufficient for Windows 8, so maybe it knows what it is doing.)
I'm not asking for free updates to Windows 8. I recognize that Canonical and Microsoft have different business models. Canonical does not charge for updates (or even the first install) of Ubuntu Linux; Microsoft charges for a new install and each major upgrade. Paying for an update should be a simple affair: one is really paying for an activation code and the software just happens to come along.
Ubuntu Linux also provides a path for old, out-of-support versions. I installed version 11.10, which ran and promptly told me that it was out of support, and also prompted me to upgrade. Imagine installing Windows XP today: would it prompt you to upgrade to a later version? (Ubuntu upgrades through versions; the Windows equivalent would be to upgrade from Windows XP to Windows Vista and then to Windows 7.)
Canonical has raised the bar for operating system updates. They work, they are simple, and they encourage people to move to supported versions. Microsoft must match this level of support in their products. The benefit for Microsoft is that people move to the latest version of Windows, which improves their uptake rate. The benefit for users is that they ... move to the latest version of Windows, which provides the latest security patches.
Corporations and large shops may choose to wait for upgrades. They may wish to test them and then roll them out to their users. That's possible too, through Windows' group policies. Individual users, though, have little to lose.
Tuesday, May 26, 2015
When technology is not the limit
The early days of computing were all about limits. Regardless of the era you pick (mainframe, minicomputer, PC, client-server, etc.) the systems were constrained and imposed hard limits on computations. CPUs were limited in speed. Memory was limited to small sizes. Disks for storage were expensive, so people used the smallest disk they could and stored as much as possible on cheaper tape.
These limitations showed through to applications.
Text editors could handle a small amount of text at one time. Some were limited to that amount and could handle only files of that size (or smaller). Other editors would "page out" a block of text and "page in" the next block, letting you work on one section of the text at a time, but the page operations worked only in the forward direction -- there was no "going back" to a previous block.
Compilers would allow for programs of only limited sizes (the limits dependent on the memory and storage available). Early FORTRAN compilers used only the first six characters of identifiers (variable names and function names) and ignored the remainder, so the variables DVALUES1 and DVALUES2 were considered to be the same variable.
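The six-character rule is easy to simulate. This little sketch (the helper name is mine, not from any compiler) shows why DVALUES1 and DVALUES2 collided:

```python
# Simulating the old FORTRAN identifier rule: only the first six
# characters are significant; the rest are ignored.
def fortran_name(identifier: str) -> str:
    return identifier[:6].upper()

# Both identifiers reduce to "DVALUE", so the compiler saw one variable.
print(fortran_name("DVALUES1"))  # DVALUE
print(fortran_name("DVALUES2"))  # DVALUE
```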
In those days, programming required knowledge not only of the language but also of the system limitations. The constraints were a constant pressure, a ceiling that could not be exceeded. Such limitations drove much innovation; we were constantly yearning for more powerful instruction sets, larger memories, and more capacious and faster storage. Over time, we achieved those goals.
The history of the PC shows such growth. The original IBM PC was equipped with an 8088 CPU, a puny (by today's standards) processor that could not even handle floating-point numbers. While the processor could address 1 MB of memory, the computer came equipped with only 64 KB of RAM and 64 KB of ROM. The display was a simple arrangement: either high-resolution monochrome text or low-resolution color graphics.
Over the years, PCs acquired more powerful processors, larger address spaces, more memory, larger disk drives (well, larger capacities but smaller physical forms), and better displays.
We are at the point where a number of applications have been "solved", that is, they are not constrained by technology. Text editors can hold the entire document (up to several gigabytes) in memory and allow sophisticated editing commands. The limits on editors have been expanded such that we do not notice them.
Word processing, too, has been solved. Today's word processing systems can handle just about any function: wrapping text to column widths, accounting for typeface variations and kerning, indexing and auto-numbering, ... you name it.
Audio processing, e-mail, web browsing, ... all of these have enough technology to get the job done. We no longer look for a larger processor or more memory to solve our problems.
Which leads to an interesting conclusion: When our technology can handle our needs, an advance in technology will not help us.
A faster processor will not help our word processors. More memory will not help us with e-mail. (When one drives in suburbia on 30 MPH roads, a Honda Civic is sufficient, and a Porsche provides no benefits.)
I recognize that there are some applications that would benefit from faster processors and "more" technology. Big data (possibly, although cloud systems seem to be handling that). Factorization of numbers, for code-breaking. Artificial intelligence (although that may be more a problem of algorithms than raw hardware).
For the average user, today's PCs, Chromebooks, and tablets are good enough. They get the job done.
I think that this explains the longevity of Windows XP. It was a "good enough" operating system running on "good enough" hardware, supporting "good enough" applications.
Looking forward, people will have little incentive to switch from 64-bit processors to larger models (128-bit? super-scaled? variable-bit?) because they will offer little in the way of an improved experience.
The market pressure for larger systems will evaporate. What takes its place? What will drive innovation?
I see two things to spur innovation in the market: cost and security. People will look for systems with lower cost. Businesses especially are price-conscious and look to reduce expenses.
The other area is security. With more "security events" (data exposures, security breaches, and viruses) people are becoming more aware of the need for secure systems. Increased security (if there is a way to measure security) will be a selling point.
So instead of faster processors and more memory, look for cheaper systems and more secure (possibly not cheaper) offerings.
These limitations showed through to applications.
Text editors could hold only a small amount of text at one time. Some could handle only files that fit within that limit. Others would "page out" one block of text and "page in" the next, letting you work on one section at a time -- but the page operations worked only in the forward direction; there was no going back to a previous block.
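As a minimal sketch (not any particular editor), a forward-only paging editor might look like this in Python -- the class name, block size, and methods are all illustrative assumptions:

```python
# Sketch of a forward-only paging editor: it holds one fixed-size
# block of the file in memory at a time and can only advance to
# the next block, never return to a previous one.

BLOCK_SIZE = 4  # tiny block for illustration; real editors used KB-sized blocks

class PagingEditor:
    def __init__(self, text: str):
        # Split the "file" into fixed-size blocks up front.
        self._blocks = [text[i:i + BLOCK_SIZE]
                        for i in range(0, len(text), BLOCK_SIZE)]
        self._index = 0  # the single block currently in memory

    def current_block(self) -> str:
        return self._blocks[self._index]

    def page_forward(self) -> bool:
        """Write out the current block and load the next; forward only."""
        if self._index + 1 < len(self._blocks):
            self._index += 1
            return True
        return False  # end of file reached

editor = PagingEditor("abcdefghij")
print(editor.current_block())  # "abcd"
editor.page_forward()
print(editor.current_block())  # "efgh"
# Note there is deliberately no page_backward(): "going back" meant
# closing the file and starting over from the beginning.
```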
Compilers would allow for programs of only limited sizes, with the limits depending on the available memory and storage. Early FORTRAN compilers used only the first six characters of identifiers (variable names and function names) and ignored the remainder, so the variables DVALUES1 and DVALUES2 were considered to be the same variable.
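The effect of six-character truncation can be shown with a toy symbol table in Python (a sketch of the general behavior, not of any specific compiler):

```python
# Toy symbol table that, like early FORTRAN compilers, keys
# identifiers on only their first six characters.

def truncate(name: str) -> str:
    """Reduce an identifier to the six characters the compiler keeps."""
    return name[:6]

symbols = {}

def declare(name: str, value):
    # Both DVALUES1 and DVALUES2 truncate to "DVALUE", so the
    # second declaration silently overwrites the first.
    symbols[truncate(name)] = value

declare("DVALUES1", 100)
declare("DVALUES2", 200)

# Both names now resolve to the same storage slot:
print(symbols[truncate("DVALUES1")])  # 200
print(symbols[truncate("DVALUES2")])  # 200
print(len(symbols))                   # 1 -- only one variable exists
```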
In those days, programming required knowledge not only of the language but also of the system limitations. The constraints were a constant pressure, a ceiling that could not be exceeded. Such limitations drove much innovation; we were constantly yearning for more powerful instruction sets, larger memories, and more capacious and faster storage. Over time, we achieved those goals.
The history of the PC shows such growth. The original IBM PC was equipped with an 8088 CPU, a puny (by today's standards) processor that could not even handle floating-point numbers. While the processor could address 1 MB of memory, the computer came equipped with only 64 KB of RAM and 64 KB of ROM. The display was a simple arrangement: either high-resolution monochrome text or low-resolution color graphics.
Over the years, PCs acquired more powerful processors, larger address spaces, more memory, larger disk drives (well, larger capacities but smaller physical forms), and better displays.
We are at the point where a number of applications have been "solved", that is, they are not constrained by technology. Text editors can hold the entire document (up to several gigabytes) in memory and allow sophisticated editing commands. The limits on editors have been expanded such that we do not notice them.
Word processing, too, has been solved. Today's word processing systems can handle just about any function: wrapping text to column widths, accounting for typeface variations and kerning, indexing and auto-numbering, ... you name it.
Audio processing, e-mail, web browsing, ... all of these have enough technology to get the job done. We no longer look for a larger processor or more memory to solve our problems.
Which leads to an interesting conclusion: When our technology can handle our needs, an advance in technology will not help us.
A faster processor will not help our word processors. More memory will not help us with e-mail. (When one drives in suburbia on 30 MPH roads, a Honda Civic is sufficient, and a Porsche provides no benefits.)
I recognize that there are some applications that would benefit from faster processors and "more" technology. Big data (possibly, although cloud systems seem to be handling that). Factorization of numbers, for code-breaking. Artificial intelligence (although that may be more a problem of algorithms than of raw hardware).
For the average user, today's PCs, Chromebooks, and tablets are good enough. They get the job done.
I think that this explains the longevity of Windows XP. It was a "good enough" operating system running on "good enough" hardware, supporting "good enough" applications.
Looking forward, people will have little incentive to switch from 64-bit processors to larger models (128-bit? super-scaled? variable-bit?) because they will offer little in the way of an improved experience.
The market pressure for larger systems will evaporate. What takes its place? What will drive innovation?
I see two things that will spur innovation in the market: cost and security. People will look for systems with lower cost. Businesses especially are price-conscious and look to reduce expenses.
The other area is security. With more "security events" (data exposures, breaches, and viruses), people are becoming more aware of the need for secure systems. Increased security (if there is a way to measure security) will be a selling point.
So instead of faster processors and more memory, look for cheaper systems and more secure (possibly not cheaper) offerings.