When programming, it is best to think like a computer. It is tempting to think like a human. But humans think very differently than computers (if we allow that computers think), and thinking like a human leads to complex programs.
This was brought home to me while reading William Conley's "Computer Optimization Techniques" which discusses the solutions to Integer Programming problems and related problems. Many of these problems can be solved with brute-force calculations, evaluating every possible solution and identifying the most profitable (or least expensive).
The programs for these brute-force methods are short and simple. Even in FORTRAN, they run less than fifty lines. Their brevity is due to their simplicity. There is no clever coding, no attempt to optimize the algorithm. The programs take advantage of the computer's strength of fast computation.
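Conley's examples are in FORTRAN, but the shape of such a program is easy to show. Here is a sketch in Python, using the 0/1 knapsack problem as a stand-in for the book's integer-programming examples (the problem choice is mine, not Conley's): a brute-force solver that simply evaluates every possible solution and keeps the best.

```python
from itertools import product

def brute_force_knapsack(values, weights, capacity):
    """Evaluate every possible selection of items and keep the most
    profitable one that fits. No clever coding, no algorithmic insight --
    just the computer's strength of fast computation."""
    best_value, best_choice = 0, ()
    # Enumerate all 2^n subsets of the n items.
    for choice in product((0, 1), repeat=len(values)):
        weight = sum(w for w, c in zip(weights, choice) if c)
        if weight <= capacity:
            value = sum(v for v, c in zip(values, choice) if c)
            if value > best_value:
                best_value, best_choice = value, choice
    return best_value, best_choice
```

For n items this evaluates 2^n candidates, which is exactly the point: the program is short and obviously correct, and the machine does the drudgery.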
Humans think very differently. They tire quickly of routine calculations. They can identify patterns and have insights into shortcuts for algorithms. They can take creative leaps to solutions. These are all survival skills, useful for dealing with an uncertain environment and capable predators. But they are quite difficult to encode into a computer program. So hard that it is often more efficient to use brute-force calculations without insights and creative leaps. The time spent making the program "smart" is larger than the time saved by the improved program.
Brute-force is not always the best method for calculations. Sometimes you need a smart program, because the number of computations is staggering. In those cases, it is better to invest the time in improvements. (To his credit, Conley shows techniques to reduce the computations, sometimes by increasing the complexity of the code.)
Computing efficiency (that is, "smart" programs) has been a concern since the first computing machines were made. Efficiency was a necessity at first, but the need for it has dropped over time. Mainframe computers became faster, which allowed for "sloppy" programs ("sloppy" meaning "anything less than maximum efficiency").
Minicomputers were slower than mainframes, significantly less expensive, and another step away from the need for optimized, "smart" programs. PCs were another step. Today, smart phones have more computing power than PCs of a few years ago, at a fraction of the price. Cloud computing, a separate branch in the evolution of computing, offers cheap, readily-available computing power.
I won't claim that computing power is (or will ever be) "too cheap to meter". But it is cheap, and it is plentiful. And with cheap and plentiful computing power, we can build programs that use simple methods.
When writing a computer program, think like a computer. Start with a simple algorithm, one that is not clever. Chances are, it will be good enough.
Tuesday, September 17, 2013
Thursday, August 1, 2013
A review of PCs from tablet perspective
I had the opportunity to try several of the PC-type tablets that are now on the market. Wow, they are very different from the standard tablet. Here is my review.
I tried a Lenovo ThinkPad Edge "laptop", a Dell Inspiron "desktop", and an Apple MacBook "laptop". For the ThinkPad and Inspiron, I used Microsoft Windows 7 and Ubuntu Linux. The MacBook ran Apple's MacOS.
I'm not sure where to begin with my review. The "PC experience" is quite different from the normal experience of a tablet.
The first difference is the size. PCs come in two basic styles: desktop and laptop. The laptop is similar to a tablet with a Bluetooth keyboard, except somewhat heavier. The keyboard is physically attached, which I found odd. I initially thought this was for protection (the keyboard is hinged and folds over the screen) and convenience in travelling (you won't lose the keyboard) but later I found that the real reason was quite different - various hardware is built under the keyboard, and wires connect the central circuitry to the screen.
The laptop flavor of a PC should really be called "desktop", since the only practical way to use it is on a desktop. One cannot separate the keyboard, and balancing the keyboard and screen on your lap is awkward at best. Both the Lenovo and the Apple PCs used this design.
The desktop version (the Dell Inspiron, but a quick survey shows that all brands use this design) is also incorrectly named. The screen is large -- too large to be portable. It requires its own stand, which places the screen at a comfortable viewing angle. (Most PC screens allow the user to adjust the height and viewing angle.) The desktop PC also includes a keyboard, one that is connected to the unit with a cable.
The name "desktop" is wrong because in addition to the screen and keyboard there is also a separate, large box that must be attached to the screen. (The keyboard connects to this box, not the screen.) This large box belongs not on a desktop but on the floor, and the users I talked with indicated that they all stored this box on the floor.
The desktop version of the PC is not portable. The combination of the large screen, separate keyboard, and large "processor" box is too cumbersome to carry. In addition, the screen and processor box both require 120VAC power, and neither has any capability for battery operation.
The laptop versions of the PC are somewhat portable. They can fold for carrying, and they have batteries for some untethered use. (The manufacturers claim six to eight hours; users I spoke with indicated three to five hours. My tests fell in line with the users.)
The next big differences one notices are the screen, keyboard, and touch interface. The screen is large, with ample real estate for displaying apps. The keyboards are physical keyboards, not on-screen keyboards (in fact there is no support for on-screen keyboards). Physical keyboards took some getting used to, since the keys do travel and provide excellent tactile feedback. But being physical, they cannot change to reflect different modes or languages, with the result being more keys to handle special symbols and indicators to show "caps" mode. (There were some keys with unusual names such as "Print Screen", "Scroll Lock", and "Pause", but I found no use for them. Perhaps they are for future expansions?)
Another noticeable difference is that the screen does not support touch. This was frustrating, as I kept touching the screen and waiting for something to happen. After a few seconds, I realized that I had to use the keyboard or a touchpad (or mouse -- more on that later).
The Lenovo and Apple laptops came with built-in touchpads. These are small (3" by 4") pads below the keyboard that let you control a small "cursor" on the screen. The cursor is normally shaped as an arrow pointing in the north-by-northwest direction (some modes change this shape) and you can move the cursor by touching and swiping on the touchpad. Since the touchpad is relatively far from the screen, this design requires the ability to touch the pad while you look at the screen -- something that I suspect few people will want to learn.
The desktops did not use a touchpad, but instead had an extra device called a "mouse". (Where did they get that name?) It is a small, roughly half-spherical object that one drags on a flat surface. It, too, controls a "cursor" on the screen, and it was harder to use than the touchpad! Proper use requires looking at the screen while holding the mouse off to the side, again using coordinated actions without looking at one of your hands. I found that my desk at home was a bit small for such a computer; I kept dragging the mouse off the edge of the desk.
The PC is not a complete disaster. All units I evaluated had a cable for internet access. I had to physically connect the units to my home router (finally understanding why it had those "extra" ports) and network access was fast and consistent. The Apple and Lenovo PCs (the laptops) also supported the standard wi-fi connections.
PCs have enormous memory, and apps can take advantage of it. (Here "memory" means RAM, which is temporary storage -- not the usual memory we think of in tablets.) My evaluation units all came with 4GB of RAM, which is small for PCs. This leads to apps that are much larger and more complex. More on apps later.
PCs also have enormous storage, which is the equivalent of a tablet's normal memory. My evaluation units came with 300GB to 500GB! The sheer amount boggles me. (Although to be honest, I'm not sure why one needs so much storage. With the fast and reliable network connection, one could easily push data to servers, without using local storage.)
A few more things about hardware, before I move on to operating systems and apps: PCs have lots of ports for accessory devices. Perhaps this is a result of their size; they can afford the space for circuitry and jacks. The PC seems designed for external hardware; the keyboard and "mouse" must be connected through these ports.
The laptop units had built-in forward-facing cameras; the desktop PCs had none. Desktop PCs can have cameras as an extra device (using one of the ports).
None of the units had accelerometers, compasses, or GPS antennae. For the "desktop" units, that makes sense as they are made to be stationary. I'm not sure why they were omitted from the "laptop" PCs which theoretically could move, and certainly have the space for them.
I tried three operating systems: Microsoft Windows, Apple MacOS, and Ubuntu Linux. All are quite similar, and all are significantly different from the typical tablet operating system.
The Lenovo and Dell computers came with Windows 7 pre-installed. The Apple came with Apple MacOS pre-installed. I installed Linux on the Lenovo and Dell, using a technique called "partitioning". This technique lets you allocate the PC's storage between the two operating systems. (With 300GB of storage, there is a lot to go around.)
A "partitioned" system presents a menu when started, letting you select which operating system you want. The menu has a twenty-second timeout, starting Ubuntu if you take no action. (I think that this is configurable.)
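The timeout is indeed configurable. On Ubuntu the startup menu comes from the GRUB boot loader, and a sketch of its configuration looks like this (values are typical defaults, not taken from my evaluation units):

```shell
# /etc/default/grub (Ubuntu) -- a sketch; exact entries vary by install
GRUB_DEFAULT=0        # which menu entry boots if you take no action
GRUB_TIMEOUT=20       # seconds to wait before booting the default entry

# After editing, regenerate the boot menu:
#   sudo update-grub
```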
All three PC operating systems use a desktop metaphor. The main screen contains icons for apps, and you start an app not by touching the icon (remember, the screen doesn't support touch!) but by dragging the mouse cursor to an icon and double-clicking on it.
With the large screen, apps don't fill the entire screen but take only a portion of it. The app displays a "window" (a term used by all three operating systems, not just Microsoft Windows) and you can run several apps at the same time. This is a nice feature of PCs, as you can see the status of multiple apps at the same time. (Although too many apps at once can be overwhelming.)
The smaller-than-screen size of apps also lets you move app windows on your "desktop". A complicated sequence of moving the mouse, pressing and long-holding a button, moving the mouse while long-holding, and then releasing the button lets you move windows on the screen. This lets you arrange apps to your liking and move important apps to prominent locations.
The different operating systems had different ideas about app purchases. Linux has a store for selecting and purchasing apps, much like a typical tablet. Apple MacOS has an "App Store" but many apps are not available through it and must be purchased separately. For Microsoft's Windows 7, all apps must be purchased separately. I found the Linux arrangement the most friendly, since there is one place to go for apps. [Edit: I later learned that in Linux you can also download apps from other sources.]
The lack of a central store for apps leads to another difference: updates. Without the central store to coordinate versions of apps, each app must check for its own updates. I can't imagine why anyone would want to distribute software without the infrastructure of an app store; doing so requires duplicating code to check versions, download updates, and apply updates in every app! It seems to put a large burden on the app development team (and the testing team).
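To make the duplication concrete, here is a sketch (in Python; the version format is an assumption of mine) of just the version-comparison step that every self-updating app must carry -- before it even gets to fetching, downloading, applying, or rolling back an update:

```python
def update_available(current: str, latest: str) -> bool:
    """Compare dotted version strings, e.g. "1.2.0" vs "1.3.0".
    Every app distributed without a store must ship some variant of
    this, plus code to fetch 'latest' from its own server, download
    the new build, apply it, and recover if the update fails."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(latest) > parse(current)
```

Even this small step has traps: a naive string comparison would decide that "1.10.0" is older than "1.2.0". Multiply the whole update pipeline by every app on the machine and the burden is clear.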
All three operating systems handled updates for themselves. Windows, MacOS, and Linux all automatically found, downloaded, and applied updates. Ubuntu Linux, with its store, considered the OS update to be "just another update" and bundled it into a list with app updates. Windows and MacOS handled OS updates and did nothing for apps. (I suspect the MacOS app store would handle updates for apps, but I had none during my evaluation.)
PC apps tend to focus on office work, and given the hardware, this is no surprise. The physical keyboard excels at text entry, and the lack of geolocation services removes a number of apps from the PC's repertoire. An app such as FourSquare is not possible without location services, and Facebook is limited without a camera.
In conclusion, I find the idea of the PC misguided: its powerful hardware is torn between local applications (processor and storage) and normal service-based apps (reliable and fast network). The absence of touch support for the screen and the physical keyboard pushes one to text-oriented data, and the clumsy touchpad (or even worse, mouse) pushes one away from UI operations. Forcing users to hunt down apps without a central store places a burden on the users. Forcing apps to update themselves places a burden on developers.
Tuesday, July 23, 2013
The killer app for Microsoft Surface is collaboration
People brought PCs into the office because PCs let people become more effective. The early days were difficult, as we struggled with them. We didn't know how to use PCs well, and software was difficult to use.
Eventually, we found the right mix of hardware and software. Windows XP was powerful enough to be useful for corporations and individuals, and it was successful. (And still is.)
Now, people are struggling with tablets. We don't know how to use them well -- especially in business. But our transition from PC to tablet will be more difficult than the transition from typewriter to PC.
Apple and Google built a new experience, one oriented for consumers, into the iPad and Android tablet. They left the desktop experience behind and started fresh.
Microsoft, in targeting the commercial market, delivered word processing and spreadsheets. But the tablet versions of Word and Excel are poor cousins to their desktop versions. Microsoft has an uphill battle to convince people to switch -- even for short periods -- from the desktop to the tablet for word processing and spreadsheets.
In short, Apple and Google have green fields, and Microsoft is competing with its own applications. For the tablet, Microsoft has to go beyond the desktop experience. Word processing and spreadsheets are not enough; it has to deliver something more. It needs a "killer app", a compelling use for tablets.
I have a few ideas for compelling office applications:
- calendars and scheduling
- conference calls and video calls
- presentations not just on projectors but device-to-device
- multi-author documents and spreadsheets
The shift is one from individual work to collaborative work. Develop apps to help not individuals but teams become more effective.
If Microsoft can let people use tablets to work with other people, they will have something.
Thursday, May 2, 2013
Our fickleness on the important aspects of programs
Over time, we have changed which attributes we desire in our programs. If we divide the IT age into four eras, we can see this change. Let's consider the four eras to be mainframe, PC, web, and mobile/cloud. These four eras used different technology and different languages, and praised different accomplishments.
In the mainframe era, we focussed on raw efficiency. We measured CPU usage, memory usage, and disk usage. We strove to have enough CPU, memory, and disk, with some to spare but not too much. Hardware was expensive, and too much spare capacity meant that you were paying for more than you needed.
In the PC era we focussed not on efficiency but on user-friendliness. We built applications with help screens and menus. We didn't care too much about efficiency -- many people left PCs powered on overnight, with no "jobs" running.
With web applications, we focussed on globalization, with efficiency as a sub-goal. The big effort was in the delivery of an application to a large quantity of users. This meant translation into multiple languages, the "internationalization" of an application, support for multiple browsers, and support for multiple time zones. But we didn't want to overload our servers, either, so early Perl CGI applications were quickly converted to C or other languages for performance.
With applications for mobile/cloud, we desire two aspects: For mobile apps (that is, the 'UI' portion), we want something easier than "user-friendly". The operation of an app must not merely be simple, it must be obvious. For cloud apps (that is, the server portion), we want scalability. An app must not be monolithic, but assembled from collaborative components.
The objectives for systems vary from era to era. Performance was a highly measured aspect in the mainframe era, and almost ignored in the PC era.
The shift from one era to another may be difficult for practitioners. Programmers in one era may be trained to "optimize" their code for the dominant aspect. (In the mainframe era, they would optimize for performance.) A succeeding era would demand other aspects in their systems, and programmers may not be aware of the change. Thus, a highly-praised mainframe programmer with excellent skills at algorithm design, when transferred to a PC project, may find that his skills are not desired or recognized. His code may receive a poor review, since the expectation for PC systems is "user friendly" and his skills from mainframe programming do not provide that aspect.
Similarly, a skilled PC programmer may have difficulties when moving to web or mobile/cloud systems. The expectations for user interface, architecture, and efficiency are quite different.
Practitioners who start with a later era (for example, the 'young turks' starting with mobile/cloud) may find it difficult to comprehend the reasoning of programmers from an earlier era. Why do mainframe programmers care about the order of mathematical operations? Why do PC programmers care so much about in-memory data structures, to the point of writing their own?
The answers are that, at the time, these were important aspects of programs. They were pounded into the programmers of earlier eras, to a degree that those programmers design their code without thinking about these optimizations.
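The concern about the order of mathematical operations is easy to demonstrate. Here is a sketch in Python (the behavior belongs to floating-point arithmetic itself, not to any one language or era):

```python
# Adding many small numbers to one large number loses precision
# if the large number is accumulated first.
big = 1e16            # large enough that adding 1.0 falls below its precision
smalls = [1.0] * 100

# Large value first: each 1.0 is rounded away as it is added.
result_a = sum([big] + smalls)

# Small values first: they accumulate to 100.0 before the large value arrives.
result_b = sum(smalls + [big])

assert result_a == 1e16          # the hundred 1.0s were lost
assert result_b == 1e16 + 100    # the hundred 1.0s survived
```

A programmer who learned to sum the small terms first was optimizing for exactly this, and the habit persists even when no one is checking.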
Experienced programmers must look at the new system designs and the context of those designs. Mobile/cloud needs scalability, and therefore needs collaborative components. The monolithic designs that optimized memory usage are unsuitable to the new environment. Experienced programmers must recognize their learned biases and discard those that are not useful in the new era. (Perhaps we can consider this a problem of cache invalidation.)
Younger programmers would benefit from a deeper understanding of the earlier eras. Art students study the conditions (and politics) of the old masters. Architects study the buildings of the Greeks, Romans, and medieval kingdoms. Programmers familiar with the latest era, and only the latest era, will have a difficult time communicating with programmers of earlier eras.
Each era has objectives and constraints. Learn about those objectives and constraints, and you will find a deeper appreciation of programs and a greater ability to communicate with other programmers.
In the mainframe era, we focussed on raw efficiency. We measured CPU usage, memory usage, and disk usage. We strove to have enough CPU, memory, and disk, with some to spare but not too much. Hardware was expensive, and too much spare capacity meant that you were paying for more than you needed.
With web applications, we focussed on globalization, with efficiency as a sub-goal. The big effort was in the delivery of an application to a large quantity of users. This meant translation into multiple languages, the "internationalization" of an application, support for multiple browsers, and support for multiple time zones. But we didn't want to overload our servers, either, so early Perl CGI applications were quickly converted to C or other languages for performance.
With applications for mobile/cloud, we desire two aspects: For mobile apps (that is, the 'UI' portion), we want something easier than "user-friendly". The operation of an app must not merely be simple, it must be obvious. For cloud apps (that is, the server portion), we want scalability. An app must not be monolithic, but assembled from collaborative components.
The objectives for systems vary from era to era. Performance was closely measured in the mainframe era, and almost ignored in the PC era.
The shift from one era to another may be difficult for practitioners. Programmers in one era may be trained to "optimize" their code for the dominant aspect. (In the mainframe era, they would optimize for performance.) A succeeding era demands other aspects in its systems, and programmers may not be aware of the change. Thus a highly praised mainframe programmer with excellent skills in algorithm design, when transferred to a PC project, may find that his skills are not desired or recognized. His code may receive a poor review, since the expectation for PC systems is "user friendly" and his mainframe skills do not provide that aspect.
Similarly, a skilled PC programmer may have difficulties when moving to web or mobile/cloud systems. The expectations for user interface, architecture, and efficiency are quite different.
Practitioners who start with a later era (for example, the 'young turks' starting with mobile/cloud) may find it difficult to comprehend the reasoning of programmers from an earlier era. Why do mainframe programmers care about the order of mathematical operations? Why do PC programmers care so much about in-memory data structures, to the point of writing their own?
The answers are that, at the time, these were important aspects of programs. They were pounded into the programmers of earlier eras, to a degree that those programmers design their code without thinking about these optimizations.
Experienced programmers must look at the new system designs and the context of those designs. Mobile/cloud needs scalability, and therefore needs collaborative components. The monolithic designs that optimized memory usage are unsuitable to the new environment. Experienced programmers must recognize their learned biases and discard those that are not useful in the new era. (Perhaps we can consider this a problem of cache invalidation.)
Younger programmers would benefit from a deeper understanding of the earlier eras. Art students study the conditions (and politics) of the old masters. Architects study the buildings of the Greeks, Romans, and medieval kingdoms. Programmers familiar with the latest era, and only the latest era, will have a difficult time communicating with programmers of earlier eras.
Each era has objectives and constraints. Learn about those objectives and constraints, and you will find a deeper appreciation of programs and a greater ability to communicate with other programmers.
Monday, January 21, 2013
What is a PC?
It's a simple question -- "what is a PC?" -- yet the answer is complicated.
If we use Mr. Peabody's Wayback machine to travel to September 1981, the answer is simple. A "PC" (that is, a personal computer) is an IBM model 5150 with its gray cover, detached keyboard (with 83 keys), and either an IBM Color Display (5153) or an IBM Monochrome Display (5151). It has an Intel 8088 processor, probably one or two floppy disk units, and a video adapter card.
At that time, that was a PC. Any other equipment was not. The PC name was strongly associated with IBM.
Over time, the concept of "PC" expanded. IBM introduced the IBM PC XT (model 5160), which meant that there were *two* models of IBM PC.
IBM introduced adapters for memory and ports. Other vendors did also. Compaq introduced their portable PC, fighting (and eventually winning) the battle for a compatible BIOS. Hercules made a video adapter that displayed graphics on monochrome displays (the IBM monochrome display adapter displayed only text).
In 1984 IBM introduced the IBM PC AT which used the Intel 80286 processor. Now there were three types of PCs from IBM, some with different processors, and bunches from other vendors. Some had more memory, some had different adapters. IBM introduced the Enhanced Graphics Adapter (EGA) with the IBM PC AT.
Through all of these changes, the two constants for PCs were these: they ran PC-DOS (or MS-DOS), and they ran Lotus 1-2-3. The operating system and that one application defined "PC". If the device ran PC-DOS and Lotus 1-2-3, it was a PC. If it did not, it was not. (And even this definition was not quite true, since several computers ran MS-DOS and special versions of Lotus 1-2-3, but were never considered to be "PC"s. The Zenith Z-100, for example.)
Moving forward to the early 1990s, our definition of PCs changed. It was no longer sufficient to run PC-DOS and Lotus 1-2-3. Instead, the criteria changed to Windows and Microsoft Office. Those were the defining characteristics of a PC. (Even in the late 1990s, when Compaq and Microsoft built the "Pocket PC", the device was considered a PC.)
Today, when we use the term "PC", we think of a set of devices. These include desktop computers, laptop computers, virtual computers running on servers, and now, with the Microsoft Surface, tablets. The operating system has expanded to include Linux (but not Mac OSX), and there is no definitive application. We use the phrases "Windows PC" and "Linux PC". Windows PCs must run Microsoft Windows and Microsoft Office, but a Linux PC needs only a version of Linux.
We have the puzzle of an Apple MacBook running Linux -- do we call this a PC? I tend to think not. Apple's advertising and branding have been strong.
The one characteristic shared by all of these devices is that they require the user to be an administrator. The user must install new software, ensure updates are installed, and diagnose problems. This requirement separates a PC from a tablet. Tablets do not require the user to "install" software -- beyond selecting the software from a menu. Tablets do not require the user to be an administrator. Updates are applied automatically, or perhaps after a prompt. Network adapters do not need to be configured.
Let's take the dividing line between PCs and tablets as administration. Some might call it "ease of use".
Yet even this definition is less than clear. Apple's OSX is better at installing applications: just drag the install package to the "Applications" folder. Linux has made improvements too, with Ubuntu's "Software Center" that lets one pick an application and install it. Microsoft's Windows RT is quite close to Apple's iOS for iPhones and iPads, which are clearly not PCs.
Despite the lack of a bright line in devices and implementations, I believe that we will look back and consider PCs to require administration, and non-PCs (tablets, smartphones, etc.) to allow use without the administrator role.
So that's my answer: If you need an administrator, it's a PC. If you don't, then it isn't.
Maybe the answer isn't so complicated.
Friday, March 23, 2012
The default solution
For decades, mainframes were the default solution to computing problems. When you needed something done, you did it on a mainframe, unless you had a compelling reason for a different platform.
For decades, IBM called the shots in the computer industry. The popularity of IBM hardware gave IBM the ability to strongly influence (some might say dictate) hardware and software standards. That power diminished with the rise of personal computers (ironically helped by the IBM PC). IBM ceded control of software to Microsoft, first with DOS and later with Windows.
For decades, PCs were the default solution to computing problems. When you needed something done, you did it on a PC, unless you had a compelling reason for a different platform.
For decades, Microsoft called the shots. The popularity of Windows and Office gave Microsoft the ability to strongly influence (some might say dictate) hardware and software standards. That power diminished with the rise of hand-held computers (specifically iPods and iPhones). Microsoft ceded the market to Apple, after several failed attempts at moving Windows to hand-sized devices.
Now, smartphones and tablets are the default solution to computing problems. When you need something done, you do it on a smartphone or tablet, unless you have a compelling reason for a different platform.
The popular platforms are the default solutions, and the company with the dominant platform can set the standards and the direction of the technology. Notice that it is the popular platform that defines the default solution, not the most cost-effective or the most reliable. The default solution is defined by the market, specifically what customers are buying. It is not a democracy, but neither is it an inherited rank. A company has a leadership role because the market gives that company the role.
And the market can take away that role.
The change in the market from mainframe to PC was an expansion, not a revolution.
The events that unseated IBM were not market revolutions, in which one competitor replaced another. IBM, the mainframe manufacturer, was not ousted by another mainframe manufacturer. They defended themselves against competitors, but failed to expand to new markets.
The PC revolution expanded the market. (It may have killed dedicated word processing systems, but overall it expanded the market.) The new market of word processing software, spreadsheets, and even primitive databases was something that IBM did not pursue with mainframes. It is possible that IBM was unable to pursue that market, as the PCs were small, inexpensive, and purchased by people who did not have a squadron of lawyers to review purchase and support contracts.
The market expanded but mainframes stayed constant, and that allowed PCs to become the default solution.
We have a similar situation with PCs and tablets.
The smartphone revolution (along with tablets) is expanding the market. The new market of location-aware apps, easy-to-install apps, and touchscreen interfaces is a market that Microsoft is only now beginning to pursue with Windows 8 and the Metro UI, and this effort is by no means guaranteed. (Many long-time supporters of Microsoft are grumbling at Windows 8.)
The market is expanding and PCs are mostly staying constant. That allows smartphones to become the default solution.
But PCs are not simply sitting still. PCs, and more specifically PC operating systems, are adopting the ideas of the smartphone market. Microsoft's Windows 8 is the most prominent example of this effect, with its new GUI and the new Microsoft Windows App Store. Apple's "Lion" release of OSX brings it closer to smartphone operating systems. Some Linux distributions are morphing their user interfaces into something closer to smartphones and are simplifying their package managers.
In the end, I think PCs will have a limited role. Data centers have never been fond of tower-style units, preferring rack-mounted servers and now virtual PCs running on mainframes, of all things! Home users will find smartphones and tablets less expensive, easier to use, and good enough to get the job done. Corporate users are the last bastion of PCs, and even they are looking at smartphones and tablets in the "Bring Your Own Device" movement.
PCs won't die out. Some tasks are handled by PCs better than on tablets. (Just as some tasks are handled by mainframes better than PCs, even today.) Some people will keep them because they are "tried and true" solutions, others will be unwilling to move to different platforms. Hobbyists will keep them out of nostalgia.
But they won't be the default solution.