Monday, October 31, 2011
The typical policy for corporate networks is simple: corporation-supplied equipment is allowed, and everything else is forbidden. Do not attach your own computers or cell phones, do not connect your own tablet computers, do not plug in your own thumb drives. Only corporate-approved (and corporate-supplied) equipment is allowed, because only then can the IT group enforce security.
The typical policy for corporate networks is changing.
This change has been brought about by reality. Corporations cannot keep up with the plethora of devices available (iPods, iPads, Android phones, tablets, and whatever comes next), yet they must improve the efficiency of their employees. New devices improve that efficiency.
In the struggle between security and efficiency... the winner is efficiency.
IBM is allowing employees to attach their own equipment to the corporate network. This makes sense for IBM, since they advise other companies on the effective use of resources. IBM *has* to make this work in order to retain credibility. After all, if IBM cannot make this work, they cannot advise other companies to open their networks to employee-owned equipment.
Non-consulting corporations (that is, most corporations) don't have the pressure to make this change. They can choose to keep their networks "pure" and free from non-approved equipment.
For a while.
Instead of market pressure, these companies will face pressure from within. It will come from new hires, who expect to use their smartphones and tablets. It will come from "average" employees, who want to use readily available equipment to get the job done.
More and more, people within the company will question the rules put in place by the IT group, rules that limit their choices of hardware.
And once "alien" hardware is approved, software will follow. At first, the software will be the operating systems and closely-bound utilities (Mac OSX and iTunes, for example). Eventually, the demand for other utilities (Google Docs, Google App Engine, Python) will overwhelm the IT forces holding back the tide.
IT can approach this change with grace, or with resistance. But face it they will, and adjust to it they must.
Wednesday, October 26, 2011
Small is the new big thing
Applications are big, out of necessity. Apps are small, and should be.
Applications are programs that do everything you need. Microsoft Word and Microsoft Excel are applications: They let you compose documents (or spreadsheets), manipulate them, and store them. Visual Studio is an application: It lets you compose programs, compile them, and test them. Everything you need is baked into the application, except for the low-level functionality provided by the operating system.
Apps, in contrast, contain just enough logic to get the desired data and present it to the user.
A smartphone app is not a complete application; except for the most trivial of programs, it is the user interface to an application.
The Facebook app is a small program that talks to Facebook's servers and presents data. Twitter apps talk to the Twitter servers. The New York Times app talks to their servers. Simple apps such as a calculator or a rudimentary game can run without a back end, but I suspect that popular games like "Angry Birds" store data on servers.
Applications contained everything: core logic, user interface, and data storage. Apps are components in a larger system.
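To make the contrast concrete, here is a minimal sketch of an "app"-style front end, written in Python for illustration. The endpoint URL and the JSON field names are invented; the point is only the shape of the thing: fetch data from a server, present it, and keep no core logic or storage of its own.
    # A minimal "app": fetch data from a back-end service and present it.
    # The URL and the JSON fields ("headline", "summary") are hypothetical.
    import json
    import urllib.request

    SERVICE_URL = "https://api.example.com/stories"  # hypothetical endpoint

    def fetch_stories(url=SERVICE_URL):
        # The server owns the core logic and the data; we only ask for results.
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read().decode("utf-8"))

    def present(stories):
        # The "user interface": display whatever the server returned.
        for story in stories:
            print(story["headline"])
            print("   ", story["summary"])

    if __name__ == "__main__":
        present(fetch_stories())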
We've seen distributed systems before: client-server systems and web applications divide data storage and core logic from user interface and validation logic. These application designs allowed for a single front-end; current system design allows for multiple user interfaces: iPhone, iPad, Android, and web. Multiple front ends are necessary; there is no clear leader, no "IBM PC" standard.
To omit a popular platform is to walk away from business.
Small front ends are better than large front ends. A small, simple front end can be ported quickly to new platforms. It can be updated more rapidly, to stay competitive. Large, complex apps can be ported to new platforms, but as with everything else, a large program requires more effort to port.
Small apps allow a company to move quickly to new platforms.
With a dynamic market of user interface devices, an effective company must adopt new platforms or face reduced revenue. Small user interfaces (apps) allow a company to quickly adopt new platforms.
If you want to succeed, think small.
Monday, October 24, 2011
Steve Jobs, Dennis Ritchie, John McCarthy, and Daniel McCracken
We lost four significant people from the computing world this year.
Steve Jobs needed no introduction. Everyone knew him as that slightly crazy guy from Apple, the one who would show off new products while always wearing a black mock-turtleneck shirt.
Dennis Ritchie was well-known by the geeks. Articles comparing him to Steve Jobs were wrong: Ritchie co-created Unix and C somewhat before Steve Jobs founded Apple. Many languages (C++, Java, C#) are descendants of C. Linux, Android, Apple iOS, and Apple OSX are descendants of Unix.
John McCarthy was known by the true geeks. He did pioneering work in artificial intelligence and created a language called LISP. Modern languages (Python, Ruby, Scala, and even C# and C++) are beginning to incorporate ideas from LISP.
Daniel McCracken is the unsung hero of the group, unknown even among true geeks. His work predates that of the others (except McCarthy) and arguably had a greater influence on the industry than all of theirs. McCracken wrote books on FORTRAN and COBOL that were understandable and comprehensive. He made it possible for the very early programmers to learn their craft -- not just the syntax but the craft of programming.
The next time you write a "for" loop with the control variable named "i", or see a "for" loop with the control variable named "i", you can thank Daniel McCracken. It was his work that set that convention and taught the first set of programmers.
Labels: Apple, books, C, COBOL, Daniel McCracken, Dennis Ritchie, Fortran, John McCarthy, LISP, steve jobs, Unix, unsung hero
Sunday, October 23, 2011
Functional programming pays off (part 2)
We continue to gain from our use of functional programming techniques.
Using just the "immutable object" technique, we've improved our code and made our programming lives easier. Immutable objects have given us two benefits this week.
The first benefit: less code. We revised our test framework to use immutable objects. Rather than instantiating a test object (which exercises the true object under test) and asking it to run the tests, we now instantiate the test object and it runs the tests immediately. We then simply ask it for the results. Our new code is simpler than before, and contains fewer lines of code.
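Here is a toy sketch of the idea in Python (the names and the checks are invented, not taken from our framework): the test object does all of its work in the constructor and afterwards only exposes read-only results, so there is no separate "run" step and nothing to mutate later.
    # Sketch of an immutable test object: it exercises the object under test
    # at construction time and then only reports results. Names are hypothetical.
    def square(x):
        return x * x  # the "true object under test" in this toy example

    class SquareTests:
        def __init__(self):
            failures = []
            for value, expected in [(2, 4), (3, 9), (-4, 16)]:
                actual = square(value)
                if actual != expected:
                    failures.append("square(%d): expected %d, got %d" % (value, expected, actual))
            self._failures = tuple(failures)  # set once, never changed

        @property
        def passed(self):
            return not self._failures

        @property
        def failures(self):
            return self._failures

    # Usage: instantiate, then simply ask for the results.
    results = SquareTests()
    print("all tests passed" if results.passed else "\n".join(results.failures))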
The second benefit: we can extract classes from one program and add them to another -- and do it easily. This is a big win. Often (too often), extracting a class from one program is difficult, because of dependencies and side effects. The one class requires other classes, not just direct dependencies but classes "to the side" and "above" in order to function. In the end, one must import most of the original system!
With immutable objects, we have eliminated side effects. Our code has no "side" or "above" dependencies, and has fewer direct dependencies. Thus, it is much easier for us to move a class from one program into another.
We took advantage of both of these effects this week, re-organizing our code. We were productive because our code used immutable objects.
Wednesday, October 19, 2011
Engineering vs. craft
Some folks consider the development of software to be a craft; others claim that it is engineering.
As much as I would like for software development to be engineering, I consider it a craft.
Engineering is a craft that must work within measurable constraints, and must optimize some measurable attributes. For example, bridges must support a specific, measurable load, and minimize the materials used in construction (again, measurable quantities).
We do not do this for software.
We manage not software but software development. That is, we measure the cost and time of the development effort, but we do not measure the software itself. (The one exception is measuring the quality of the software, but that is a difficult measurement and we usually measure the number and severity of defects, which is a negative measure.)
If we are to engineer software, then we must measure the software. (We can -- and should -- measure the development effort. Those are necessary measurements. But they are not, by themselves, sufficient for engineering.)
What can we measure in software? Here are some suggestions (a small measurement sketch follows the list):
- Lines of code
- Number of classes
- Number of methods
- Average size of classes
- Complexity (cyclomatic, McCabe, or whatever metric you like)
- "Boolean complexity" (the number of boolean constants used within code that are not part of initialization)
- The fraction of classes that are immutable
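As a rough sketch of how a few of these numbers could be gathered, here is a Python example using the standard ast module; the branch count is a crude stand-in for cyclomatic complexity, a simplification rather than a full McCabe implementation.
    # Rough code-measurement sketch: lines, classes, functions, boolean
    # constants, and branch points (a crude proxy for cyclomatic complexity).
    import ast
    import sys

    def measure(source):
        tree = ast.parse(source)
        counts = {
            "lines": len(source.splitlines()),
            "classes": 0,
            "functions": 0,
            "boolean_constants": 0,
            "branch_points": 0,
        }
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                counts["classes"] += 1
            elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                counts["functions"] += 1
            elif isinstance(node, ast.Constant) and isinstance(node.value, bool):
                counts["boolean_constants"] += 1
            elif isinstance(node, (ast.If, ast.While, ast.For, ast.BoolOp)):
                counts["branch_points"] += 1
        return counts

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            print(measure(f.read()))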
Some might find the notion of measuring lines of code abhorrent. I will argue that it is not the metric that is evil, it is the use of it to rank and rate programmers. The misuse of metrics is all too easy and can lead to poor code. (You get what you measure and reward.)
Why do we not measure these things? (Or any other aspect of code?)
Probably because there is no way to connect these metrics to project cost. In the end, project cost is what matters. Without a translation from lines of code (or any other metric) to cost, the metrics are meaningless. The code may be one class of ten thousand lines, or one hundred classes of one hundred lines each; without a conversion factor, the cost of each design is the same. (And the cost of each design is effectively zero, since we cannot convert design decisions into costs.)
Our current capabilities do not allow us to assign cost to design, or code size, or code complexity. The only costs we can measure are the development costs: number of programmers, time for testing, and number of defects.
One day in the future we will be able to convert complexity to cost. When we do, we will move from craft to engineering.
Labels: code complexity, craft, engineering, project management
Tuesday, October 11, 2011
SOA is not DOA
SOA (service oriented architecture) is not dead. It is alive and well.
Mobile apps use it. iPhone apps that get data from a server (e-mail or Twitter, for example) use web services -- a service oriented architecture.
SOA was the big thing back in 2006. So why do we not hear about it today?
I suspect the silence has nothing to do with SOA's technical merits.
I suspect that no one talks about SOA because no one makes money from it.
Object oriented programming was an opportunity to make money. Programmers had to learn new techniques and new languages; tool vendors had to provide new compilers, debuggers, and IDEs.
Java was a new programming language. Programmers had to learn it. Vendors provided new compilers and IDEs.
UML was big, for a while. Vendors provided tools; architects, programmers, and analysts learned it.
The "retraining load" for SOA is smaller, limited mostly to the architects of systems. (And there are far fewer architects than programmers or analysts.) SOA has no direct affect on programmers.
With no large-scale training programs for SOA (and no large-scale training budgets for SOA), vendors had no incentive to advertise it. They were better off hawking new versions of compilers.
Thus, SOA quietly faded into the background.
But it's not dead.
Mobile apps use SOA to get work done. iPhones and Android phones talk to servers, using web services. This design is SOA. We may not call it that, but that's what it is.
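As a sketch of the server half of that design, here is a minimal web service using only Python's standard library; the route and the payload are invented for illustration. The service is just an endpoint that answers requests with data, and any front end -- an iPhone app, an Android app, or a web page -- can consume it.
    # A minimal web service: one endpoint that returns JSON to any client.
    # The route ("/status") and the payload are hypothetical.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ServiceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/status":
                body = json.dumps({"service": "demo", "status": "ok"}).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # Any client -- phone app, web page, or curl -- can call this endpoint.
        HTTPServer(("localhost", 8080), ServiceHandler).serve_forever()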
When the hype of SOA vanished, lots of companies dropped interest in SOA. Now, to move their applications to the mobile world, they will have to learn SOA.
So don't count SOA among the dead.
On the other hand, don't count on it for your profits. You need it, but it is infrastructure, like electricity and running water. I know of few companies that count on those utilities as a competitive advantage.
Labels: architecture, service oriented architecture, SOA, web services
Monday, October 10, 2011
Talent is not a commodity
Some companies treat their staff as a commodity. You know the symptoms: rigid job titles, detailed (although often inaccurate) job descriptions, and a bureaucracy for hiring people. The underlying idea is that people, like any other commodity, can be hired for specific tasks at specific times. The management-speak for this idea is "just-in-time provisioning of staff".
Unfortunately for the managers, talented individuals are not stocked on shelves. They must be found and recruited. While companies (hiring companies and staffing companies) have built an infrastructure of resumes and keyword searches, the selection of candidates is lengthy and unpredictable. Hiring a good programmer is different from ordering a box of paper.
The "talent is a commodity" mindset leads to the "exact match" mindset. The "exact match" mindset leads hiring managers (and Human Resource managers) to the conclusion that the only person for the job is the "right fit" with the "complete set of skills". It is an approach that avoids mistakes, turning away candidates for the smallest of reasons. ("We listed eight skills for this position, and you have only seven. Sorry, you're not the person for us!")
Biasing your hiring decisions against mistakes means that you lose out on opportunities. It also means that you delay bringing a person on board. You can wait until you find a person with the exact right skills. Depending on the package (and its popularity), it may take some time before you find the person.
I once had a recruiter from half-way across the country call me because my resume listed the package GraphViz. GraphViz generates and manipulates network graphs, and while it is used by lots of people, it is rarely listed on resumes. For a skill like that, a keyword search for an exact match almost always comes up empty.
It might take you six months -- or longer -- to find an exact match. And you may never find an exact match. Instead, with a deadline looming, you compromise on a candidate that has skills that are "close enough".
Of course, when you bring this person on board, you are under a tight schedule. You need the person to perform immediately. They do their best, but even that may be insufficient to learn the technologies and your current system. (Not to mention the corporate culture.) The approach has a high risk of mistakes (low quality deliverable), slow performance (again, a low quality deliverable), cost overruns from overtime (high expenses), and possibly a late delivery.
Let's consider an alternative sequence.
Instead of looking for an exact match, you find a bright programmer who has the ability to learn the specialized skill. Pay that person for a week (or three) to learn the package. Then have them start on integrating the package into your system.
You should be able to find someone in a few weeks, much less than the six months or more for the exact match. (If you cannot find a bright programmer in a week or two, you have other problems.)
Compromising on specific skills (while keeping excellence in general skills) provides some advantages.
You start earlier, which means you can identify problems earlier.
Your costs may be slightly higher, since you're paying for more time. On the other hand, you may be able to find a person at a lower rate. And even at the higher rate, a few months over a long term of employment is not that significant.
You invest in the person (by paying him to learn something new), and the person will recognize that. (You're hiring a clever person, remember?)
You can consider talent as an "off-the-shelf" commodity, something that can be hired on demand. For commonly used skills, this is a workable model. But for obscure skills, or a long list of skills, the model works poorly. Good managers know how and when to compromise on small objectives to meet the larger goals.
Saturday, October 8, 2011
What Microsoft's past can tell us about Windows 8
Microsoft Windows 8 changes a lot of assumptions about Windows. It especially affects developers. The familiar Windows API has been deprecated, and Microsoft now offers WinRT (the "Windows Runtime").
What will it be like? What will it offer?
I have a guess.
This is a guess. As such, I could be right or wrong. I have seen none of Microsoft's announcements or documentation for Windows 8, so I might be wrong at this very moment.
Microsoft is good at building better versions of competitors' products.
Let's look at Microsoft products and see how they compare to the competition.
MS-DOS was a bigger, better CP/M.
Windows was a better (although perhaps not bigger) version of IBM's OS/2 Presentation Manager.
Windows 3.1 included a better version of Novell's Netware.
Word was a bigger version of Wordstar and Wordperfect.
Excel was a bigger, better version of Lotus 1-2-3.
Visual Studio was a bigger, better version of Borland's TurboPascal IDE.
C# was a better version of Java.
Microsoft is not so much an innovator as it is an "improver", one who refines an idea.
It might just be that Windows 8 will be not an Innovative New Thing, but instead a Bigger Better Version of Some Existing Thing -- and not a bigger, better version of Windows 7, but a bigger, better version of someone else's operating system.
That operating system may just be Unix, or Linux, or NetBSD.
Microsoft can't simply take the code to Linux and "improve" it into WinRT; doing so would violate the Linux license.
But Microsoft has an agreement with Novell (yes, the same Novell that saw its Netware product killed by Windows 3.1), and Novell holds the copyright to Unix. That may give Microsoft a way to use Unix code.
It just may be that Microsoft's WinRT will be very Unix-like, with a kernel and a separate graphics layer, modules and drivers, and an efficient set of system calls. WinRT may be nothing more than a bigger, better version of Unix.
And that may be a good thing.
Tuesday, October 4, 2011
What have you done for you lately?
The cynical question that one asks of another is "What have you done for me lately?".
A better question to ask of oneself is: "What have I done for me lately?".
We should each be learning new things: new technologies, new languages, new business skills... something.
Companies provide employees with performance reviews (or assessments, or evaluations, or some such thing). One item (often given a low weighting factor) is "training". (Personally, I think it should be considered "education"... but that is another issue.)
I like to give myself an assessment each year, and look at education. I expect to learn something new each year.
I start each year with a list of cool stuff that sounds interesting. The items could be new programming languages, different technologies, or interpersonal skills. I refer to that list during the year; sometimes I add or change things. (I don't hold myself to the original list -- technology changes too quickly.)
Employers and companies all too often take little action to help their employees improve. That doesn't mean that you get a free pass -- it means that you must be proactive. Don't wait for someone to tell you to learn a new skill; by then it will be too late. Look around, pick some skills, and start learning.
What are you doing for you?
Sunday, October 2, 2011
The end of the PC age?
Are we approaching the end of the PC age? It seems odd to see the end of the age, as I was there at the beginning. The idea that a technology should have a shorter lifespan than a human leads one to various contemplations.
But perhaps the idea is not so strange. Other technologies have come and gone: videotape recorders, hand-held calculators, Firewire, and the space shuttle come to mind. (And by "gone", I mean "used in limited quantities, if at all". The space shuttles are gone; VCRs and calculators are still in use but considered curiosities.)
Personal computers are still around, of course. People use them in the office and at home. They are entrenched in the office, and I think that they will remain present for at least a decade. Home use, in contrast, will decline quickly, with personal computers replaced by game consoles, cell phones, and tablets. Computing will remain in the office and in the home.
But here's the thing: People do not think of cell phones and tablets as personal computers.
Cell phones and tablets are cool computing devices, but they are not "personal computers". Even Macbooks and iMac computers are not "personal computers". The term "PC" was strongly associated with IBM (with "clone" for other brands) and Microsoft DOS (and later, Windows).
People have come to associate the term "personal computer" with a desktop or laptop computer of a certain size and weight, of any brand, running Microsoft Windows. Computing devices in other forms, or running other operating systems, are not "personal computers": they are something else: a Macbook, a cell phone, an iPad... something. But not a PC.
Microsoft's Windows 8 offers a very different experience from the "classic Windows". I believe that this difference is enough to break the idea of a "personal computer". That is, a tablet running Windows 8 will be considered a "tablet" and not a "PC". New desktop computers with touchscreens will be considered computers, but probably not "PCs". Only the older computers with keyboards and mice (and no touchscreen) will be considered "personal computers".
Microsoft has the opportunity to brand these new touchscreen computers. I suggest that they take advantage of this opportunity. I recognize that their track record with product names has been poor ("Zune", "Kin", and the ever-awful "Bob") but they must do something.
The term "personal computer" is becoming a reference to a legacy device, to our father's computing equipment. Personal computers were once the Cool New Thing, but no more.