BASIC is a language that, to quote Rodney Dangerfield, gets no respect.
Some have quipped that "those whom the gods would destroy... they first teach BASIC".
COBOL may be disparaged, but only to a limited extent. People, deep down, know that COBOL is running many useful systems. (Things like banking, airline reservations, and perhaps most importantly payroll.) COBOL does work, and we respect it.
BASIC, on the other hand, tried to be useful but never really made it. Despite Microsoft's attempt with its MBASIC product, Digital Research with its CBASIC compiler, Digital Equipment Corporation with its various implementations of BASIC, and others, BASIC was always second place to other programming languages. For microcomputers, those languages were assembly language, Pascal, and C.
(I'm limiting this to the interpreter BASIC, the precursor to Visual Basic. Microsoft's Visual Basic was capable and popular. It was used for many serious applications, some of which are probably still running today.)
BASIC's challenge was its design. It was a language for learning the concepts of programming, not building large, serious programs. The name itself confirms this: Beginner's All-purpose Symbolic Instruction Code.
More than the name, the constructs of the programming language are geared for small programs. This is due to the purpose of BASIC (a better FORTRAN for casual users) and the timing of BASIC (the nascent "structured programming" movement had yet to prove itself).
Without the constructs of structured programming ("while" loops and "if/then/else" statements), programmers must either build their programs with structured concepts made of smaller elements, or build unstructured programs. BASIC allows you to build structured programs, but provides no assistance. Worse, BASIC relies on GOTO to build most control flows.
In contrast, modern programming languages such as Java, C#, Python, and Ruby provide the constructs for structured programming and don't offer the GOTO statement.
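The contrast can be sketched briefly. The BASIC shown in the comments below is the classic line-numbered dialect, where the loop exists only as jumps between line numbers; the Python function (the name `sum_until_zero` is mine, for illustration) expresses the same control flow with structured constructs, so the loop's shape is visible in the code itself.

```python
# Summing numbers until a sentinel value of zero.
# Classic line-numbered BASIC built the loop from IF and GOTO:
#   10 LET T = 0
#   20 INPUT N
#   30 IF N = 0 THEN 60
#   40 LET T = T + N
#   50 GOTO 20
#   60 PRINT T
# The reader must trace line numbers to discover the loop.

def sum_until_zero(values):
    """Structured equivalent: the loop and its exit are explicit."""
    total = 0
    for n in values:
        if n == 0:      # the sentinel ends the loop...
            break       # ...via a local, visible exit
        total += n
    return total

print(sum_until_zero([3, 4, 5, 0, 99]))  # prints 12
```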
The people who learned to program in BASIC (and I am one of them) learned to program poorly, and we have paid a heavy price for it.
But what does this have to do with Microsoft Excel?
Excel is the application taught to people for managing data. (Microsoft Word is suitable for documents, and PowerPoint is suitable for presentations, but Excel is *the* application for data. I suspect more people know and use Excel than know Word, PowerPoint, and Access combined.)
Excel offers the same undisciplined approach to applications. Spreadsheets contain data and formulas (and VBA macros, but I will ignore those for now).
One might argue that Excel is a spreadsheet, different from a programming language such as BASIC. Yet the differences are small. Excel, with its formulas alone, is a programming system if not a language.
The design of Excel (and other spreadsheets, going back to Visicalc) provides no support for structure or discipline. Formulas can collect data from anywhere in the spreadsheet. There is no GOTO keyword, but one can easily build a tangled mess.
Microsoft Excel is the new BASIC: useful, popular, and undisciplined. Worse than BASIC, since Excel is the premier tool for manipulating data. BASIC, for all of its flaws, was always second to some other language.
In one way, Excel is not as bad as BASIC. Formulas may collect data from any location in the spreadsheet, but they (for the most part) modify only their own contents. This provides a small amount of order to spreadsheet-programs.
We need a new paradigm for data management. Just as programming had its "structured programming" movement, which led to constructs that improved the reliability and readability of programs, spreadsheets need a new approach to the organization of data and the types of formulas that can be used on that data.
Thursday, April 18, 2013
Tuesday, April 16, 2013
File Save No More
The new world of mobile/cloud is breaking many conventions of computer applications.
Take, for example, the long-established command to save a file. In Windows, this has been the menu option File / Save, or the keyboard shortcut CTRL-S.
Android apps do not have this sequence. In fact, they have no sequence to save data. Instead, they save your data as you enter it, or when you dismiss a dialog.
Not only Android apps (and I suspect iOS apps), but Google web apps exhibit this behavior too. Use Google Drive to create a document or a spreadsheet: there is no "save" command; your changes are stored as you make them.
Breaking the "save file" concept allows for big changes. It lets us get rid of an operation. It lets us get rid of menus.
It also lets us get rid of the concept of a file. We don't need files in the cloud; we need data. This data can be stored in files (transparently to us), or in a database (also transparently to us), or in a NoSQL database (also transparently to us).
We don't care where the data is stored, or which container (filesystem or database) is used.
We do care about getting the data back.
I suspect that we will soon care about previous versions of our data.
Windows has add-ins for retrieving older versions of data. I have used a few, and they tend to be "hacks": things bolted on to Windows and clumsy to use. They don't save every version; instead, they keep snapshots at scheduled times.
Look for real version management in the cloud. Google, with its gigabytes of storage for each e-mail user, will be able to keep older versions of files. (Perhaps they are already doing it.)
The "File / Save" command will be replaced with the "File Versions" list, letting us retrieve an old version of the file. The list will show each and every revision of the file, not just the versions captured at scheduled times.
Once a major player offers this feature, other players will have to follow.
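The "every revision, not scheduled snapshots" idea can be sketched in a few lines. This is a toy, in-memory model under my own naming (`VersionedStore`, `save`, `load`, `history` are all hypothetical), not any real cloud provider's API:

```python
import time

class VersionedStore:
    """Toy model of cloud-style storage: every save keeps a new
    revision, so 'File / Save' disappears and a 'File Versions'
    list takes its place."""

    def __init__(self):
        self._versions = {}   # name -> list of (timestamp, content)

    def save(self, name, content):
        """Called on every edit; the user never asks to save."""
        self._versions.setdefault(name, []).append((time.time(), content))

    def load(self, name, version=-1):
        """Latest content by default, or any earlier revision by index."""
        return self._versions[name][version][1]

    def history(self, name):
        """The 'File Versions' list: every revision is here."""
        return [ts for ts, _ in self._versions.get(name, [])]

store = VersionedStore()
store.save("notes.txt", "draft one")
store.save("notes.txt", "draft two")
print(store.load("notes.txt"))     # prints "draft two"
print(store.load("notes.txt", 0))  # prints "draft one"
```

Whether the revisions land in a filesystem, a database, or a NoSQL store is an implementation detail behind the interface, which is exactly the point.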
Monday, April 15, 2013
The (possible) return of chargebacks
Many moons ago, computers were large expensive beasts that required much care and attention. Since they were expensive, only large organizations could purchase (or lease) them, and those large organizations monitored their use. The companies and government agencies wanted to ensure that they were spending the right amount of money. A computer system had to provide the right amount of computations and storage; if it had excess capacity, you were spending too much.
Some time later (moons ago, but not so many moons) computers became relatively cheap. Personal computers were smaller, easier to use, and much less expensive. Most significantly, they had no way to monitor utilization. While some PCs were more powerful than others, there were no measurements of their actual use. It was common to use personal computers for only eight hours a day. (More horrifying to the mainframe efficiency monitors, some people left their PCs powered on overnight, when they were performing no work.)
Cloud technologies allow us to worry about utilization factors again. And we monitor them. This is a big change from the PC era. With PCs, we cared little for utilization rates.
Perhaps we monitor cloud technologies (virtual servers and such) because they are metered; we pay for every hour that they are active.
If we start worrying about utilization rates for cloud resources, I suspect that we will soon bring back another habit from the mainframe era: chargebacks. For those who do not remember them, chargebacks are mechanisms to charge end-user departments for computing resources. Banks, for example, would have a single mainframe used by different departments. With chargebacks, the bank's IT group allocates the expenses of the mainframe, its software, storage, and network usage to the user departments.
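A minimal chargeback calculation might look like the sketch below: metered hours are billed directly, and shared costs (storage, network) are split in proportion to usage. The function name, rates, and departments are all invented for illustration:

```python
def chargeback(metered_hours, hourly_rate, shared_costs):
    """Charge each department for its metered usage, and allocate
    shared costs in proportion to that usage."""
    total_hours = sum(metered_hours.values())
    charges = {}
    for dept, hours in metered_hours.items():
        usage_charge = hours * hourly_rate
        shared_share = shared_costs * (hours / total_hours)
        charges[dept] = round(usage_charge + shared_share, 2)
    return charges

# Hypothetical bank departments sharing one metered system:
usage = {"retail": 600, "lending": 300, "operations": 100}
print(chargeback(usage, hourly_rate=0.50, shared_costs=2000.00))
```

Even this tiny example shows the political pressure point: the allocation of `shared_costs` is a policy choice, and every department manager has an opinion about it.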
We did not have chargebacks with PCs, or with servers. It was a blissful era in which computing power was "too cheap to meter". (Or perhaps too difficult to meter.)
With cloud technologies, we may just see the return of chargebacks. We have the ability, and we probably will do it. Too many organizations will see it as a way of allocating costs to the true users.
I'm not sure that this is a good thing. Organizational "clients" of the IT group should worry about expenses, but chargebacks provide an incomplete picture of expenses. They are good at reporting the expenses incurred, but they deter cooperation across departments. (This builds silos within the organization, since I as a department manager do not want people from other departments using resources "on my dime".)
Chargebacks also force changes to project governance. New projects are viewed not only in the light of development costs, but also in the chargeback costs (which are typically the monthly operations costs). If these monthly costs are known, then this analysis is helpful. But if these costs are not known but merely estimated, political games can erupt between the department managers and the cost estimators.
I don't claim that chargebacks are totally evil. But chargebacks are not totally good, either. Like any tool, they can help or harm, depending on their use.
Saturday, April 13, 2013
Higher-level constructs can be your friends
Leo Brodie, author of the Forth books "Starting Forth" and "Thinking Forth", once said: "I wouldn't write programs in Forth. I would build a new language in Forth, one suitable for the problem at hand. Then I would write the program in that language."
Or something like that.
The idea is a good one. Programming languages are fairly low-level, dealing with fine-grained concepts like 'int' and 'char'. Building a higher level of abstraction helps you focus on the task at hand and worry less about details.
We have implemented this tactically with several programming constructs.
First were libraries: blocks of commonly used functions. All modern languages have "the standard libraries", from C to C++ to Java to C# to Python.
Object-oriented programming languages were another step, tactically. They promised the ability to "represent real-world concepts" in programs.
Today we use "domain specific languages". The idea is the same -- tactically.
I keep qualifying my statements with "tactically" because, for all of our efforts, we (the programming industry as a whole) tend not to use these techniques at a strategic level.
We use the common libraries. Any C programmer has used the strxxx() functions, along with the printf() and scanf() family of functions. C++, Java, and C# programmers use the standard-issue libraries and APIs for those languages. We're good at using what is put in front of us.
But very few projects use any of these techniques (libraries, classes, and DSLs) to create higher levels of objects. Most projects use the basic language and the constructs from the supplied framework (MFC, WPF, Struts, etc.) but build very little above these levels.
Instead of following Leo Brodie's advice, projects take the language, libraries, and frameworks and build with those constructs -- and only those constructs. The code of the application -- specifically the business logic -- is built in language-level constructs. There is (often) no effort in building libraries or classes that raise the code to a level closer to business logic.
This low-level design leads to long-term problems. The biggest problem is complexity, in the form of code that manipulates ideas at multiple levels. Code that calculates mortgage interest, for example, includes logic for manipulating arrays of numbers for payment amounts and payment dates. The result is that code is hard to understand: When reading the code, one must mentally jump up and down from one level of abstraction to another.
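The mortgage example can be made concrete. Below is a sketch of the domain-level alternative (the `Mortgage` class and its method names are my own invention): the standard amortization formula lives in one named place, and business-level questions get business-level answers, instead of array manipulation scattered through the calling code.

```python
class Mortgage:
    """Domain-level object: the business concept, not arrays of numbers."""

    def __init__(self, principal, annual_rate, years):
        self.principal = principal
        self.monthly_rate = annual_rate / 12
        self.n_payments = years * 12

    def monthly_payment(self):
        # Standard amortization formula, named and kept in one place
        r, n = self.monthly_rate, self.n_payments
        return self.principal * r / (1 - (1 + r) ** -n)

    def total_interest(self):
        # A business-level question, answered at the business level
        return self.monthly_payment() * self.n_payments - self.principal

loan = Mortgage(principal=100_000, annual_rate=0.06, years=30)
print(round(loan.monthly_payment(), 2))  # prints 599.55
```

Code that uses `loan` reads at one level of abstraction; the reader no longer jumps between payment schedules and index arithmetic.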
A second problem is the use of supplied objects for purposes they only loosely fit. In Windows MFC programs, many systems used CString objects to hold directory and file names. This is convenient for the initial programmer, but painful for the programmers who follow. A CString object was not a file name, and had operations that made no sense for file names. (The later .NET framework provided much better support for file and path names.) Here, when reading the code, one must constantly remind oneself that the object in view is not used as a normal object of the given type, but as a special case with only certain operations allowed. This imposes work on the reader.
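The CString-for-file-names complaint translates directly into Python's terms, where the standard library's pathlib plays the role of the higher-level construct. A raw string permits operations that are meaningless for a path; a dedicated type exposes only operations that fit the concept:

```python
from pathlib import Path

# A raw string (CString's analogue) allows nonsense operations:
name = "reports/2013/summary.txt"
nonsense = name.upper() * 2        # legal for a string, meaningless for a path

# A domain type offers only path-shaped operations:
path = Path("reports") / "2013" / "summary.txt"
print(path.name)      # prints "summary.txt"
print(path.suffix)    # prints ".txt"
print(path.parent)    # the containing directory, reports/2013
```

MFC-era programmers had to hand-roll such a type (or live without it); the cost of not doing so was borne by every later reader of the code.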
Given these costs of using low-level constructs, why do we avoid higher-level constructs?
I have a few ideas:
Building higher-level constructs is hard: It takes time and effort to build good (that is, useful and enduring) constructs. It is much easier (and faster) to build a program with the supplied objects.
Constructs require maintenance: Once "completed", constructs must be modified as the business changes. (Code built from low-level objects needs to be modified too, but managers and programmers seem more accepting of these changes.)
Obvious ramp-up time: Business-specific high-level constructs are specific to that business, and new hires must learn them. But with programs built with low-level constructs, new hires can be productive immediately, since the system is all "common, standard" code. (Systems have non-obvious ramp-up time, as new hires "learn the business", but managers seem to accept -- or ignore -- that cost.)
Can create politics: Home-grown libraries and frameworks can create political conflicts, driven by ego or overly-aggressive schedules. This is especially possible when the library becomes a separate project, used by other (client) projects. The manager of the library project must work well with the managers of the client projects.
These are not technical problems. These challenges are to the management of the projects. (To some extent, there are technical challenges in the design of the library/class/framework, but these are small compared to the managerial issues.)
I still like Leo Brodie's idea. We can do better than we currently do. We can build systems with better levels of abstraction.
Wednesday, April 10, 2013
The Future of your Windows XP PC
Suppose you have one (or several) PCs running Windows XP. Microsoft has announced the end-of-life date for Windows XP (about a year from now). What to do?
You have several options:
Upgrade to Windows 8: This probably requires new hardware, since Windows 8 demands a bit more than Windows XP. If you want a touchscreen, you are looking at replacing the hardware outright, not upgrading it.
Windows 8 uses the new "Modern/Metro" UI which is a significant change from Windows XP. Your users may find the new interface unfamiliar.
Upgrade to Windows 7: Like Windows 8, Windows 7 probably requires new hardware. You are replacing your PC, not upgrading it. (Perhaps you keep the monitor, mouse, and keyboard.)
The Windows 7 UI is closer to Windows XP, though there are still changes. End users will find the experience quite close to Windows XP; system administrators will see most of the differences.
Switch to Mac: Instead of upgrading to Windows 8 or Windows 7, you can switch to an Apple Macintosh running OS X. This requires new versions of your software. Now you are replacing hardware and software, hardly a simple upgrade.
The user interface and administration of OS X are different from Windows, another cost of conversion.
Switch to Linux: Instead of upgrading to a version of Windows, you can switch to Linux. This is one option that lets you keep your current hardware. There are several Linux distros that are designed to run on limited hardware.
The Linux UI is different, but closer to Windows than Mac OS X, and can be tuned to look like Windows. Software may or may not be a challenge. The major browsers (except Internet Explorer) run on Linux. LibreOffice can replace Microsoft Office for most tasks. Commodity software can be replaced with open source packages (GIMP for Photoshop, for example). The WINE package can run some Windows applications, so you may be able to keep your custom (that is, non-commodity) software. (Or perhaps not; some software will run only on Windows.)
Keep Windows XP: This option may be missing from some consultant recommendations, but it is a possible path. There is nothing that will prevent you from running your existing hardware with your existing software. Windows XP has no self-destruct timer, and will continue to run after the "end of life" date.
But staying with Windows XP has costs. They are deferred costs, not immediate costs. They are gradual, not sharply defined. It is the "death by a thousand cuts" approach. You can keep running Windows XP, but small things will break, and then larger things.
Here's what will probably happen:
You get no updates from Microsoft, and you don't have to apply them and reboot Windows. You may think that this is an improvement. It is, in that you don't lose time applying updates. The downside is that your system's vulnerabilities remain unfixed.
As other things in your environment change, you will find that the Windows XP system does not work with the new items. When you add a printer, the Windows XP system will not have a driver for it. When your software update arrives (perhaps for Adobe Acrobat), the update will politely tell you that the new version is not supported under Windows XP. (If you are lucky, the update will tell you this *before* it modifies your system. Less fortunate folks will learn this only after the new software has been installed and refuses to run.)
New versions of browsers will fail to install. Stuck with old browsers, you will get warnings and complaints from some web sites. Some sites will fail in obvious ways. Others will fail in mysterious and frustrating ways -- perhaps not letting you log in, or not letting you complete a transaction.
Problems are not limited to hardware and software -- they can affect people, too.
Job candidates, upon learning that you use Windows XP, may decline to work with you. Some candidates may decline the job immediately. Others may hire on and then complain when you direct them to work with a Windows XP system.
Windows XP may be a problem when you look for system admins. Some may choose to work elsewhere; others may accept the job but demand higher rates. (And some seasoned sysadmins may be happy to work on an old friend.)
It may be that Windows XP (and corresponding applications) will act as a filter for your employees. Folks who want newer technologies will leave (or decline employment), folks who are comfortable with the older tech will stay (or hire on). Eventually many (if not all) of your staff will be familiar with older technologies and unfamiliar with new ones.
At some point, you will want to re-install Windows XP. Here you will encounter difficulties. Microsoft may (or may not) continue to support the activation servers for Windows XP. Without an activation code, Windows XP will not run. Even with the activation servers and codes, if you install on a new PC, Microsoft may reject the activation (thinking that you are attempting to exceed your license count). New hardware presents other problems: If the PC uses UEFI, it may fail to boot the Windows XP installer, which is not signed. If the PC has no CD drive, the Windows XP CD is useless.
You can stay with Windows XP, but the path is limited. Your system becomes fragile, dependent on a limited and shrinking set of technology. At some point, you will be forced to move to something else.
My advice: Move before you are forced to move. Move to a new operating system (and possibly new hardware) on your schedule, not on a schedule set by failing equipment. Migrations take time and require tests to ensure that the new equipment is working. You want to convert from Windows XP to your new environment with minimal risks and minimal disruptions.
Labels:
hardware management,
linux,
Mac OSX,
system upgrades,
upgrades,
Windows 7,
Windows 8,
Windows XP
Sunday, April 7, 2013
Mobile/cloud apps will be different than PC apps
As a participant in the PC revolution, I was comfortable with the bright future of personal computers. I *knew* -- that is, I strongly believed -- that PCs were superior to mainframes.
It turned out that PCs were *different* from mainframes, but not necessarily superior.
Mainframe programs were, primarily, accounting systems. Oh, there were programs to compute ballistics tables, and programs for engineering and astronomy, and system utilities, but the big use of mainframe computers was accounting (general ledger, inventory, billing, payment processing, payables, receivables, and market forecasts). These uses were shaped by the entities that could afford mainframe computers (large corporations and governments) and the data that was most important to those organizations.
But the data was also shaped by technology. Computers read input on punch cards and stored data on magnetic tape. The batch processing systems were useful for certain types of processing and made efficient use of transactions and master files. Even when terminals were invented, the processing remained in batch mode.
Personal computers were more interactive than mainframes. They started with terminals and interactive applications. From the beginning, personal computers were used for tasks very different from the tasks of mainframe computers. The biggest applications for PCs were word processors and spreadsheets. (They still are today.)
Some "traditional" computer applications were ported to personal computers. There were (and still are) systems for accounting and database management. There were utility programs and programming languages: BASIC, FORTRAN, COBOL, and later C and Pascal. But the biggest applications were the interactive ones, the ones that broke from the batch processing mold of mainframe computing.
(I am simplifying greatly here. There were interactive programs for mainframes. The BASIC language was designed as an interactive environment for programming, on mainframe computers.)
I cannot help but think that the typical mainframe programmer, looking at the new personal computers that appeared in the late 1970s, could only puzzle at what possible advantage they could offer. Personal computers were smaller, slower, and less capable than mainframes in every degree. Processors were slower and less capable. Memory was smaller. Storage was laughably primitive. PC software was also primitive, with nothing approaching the sophistication of mainframe operating systems, database management systems, or utilities.
The only ways in which personal computers were superior to mainframes were the BASIC language (Microsoft BASIC was more powerful than mainframe BASIC), word processors, and spreadsheets. Notice that these are all interactive programs. The cost and size of a personal computer made it possible for a person to own one, but the interactive nature of applications made it sensible for a person to own one.
That single attribute of interactive applications made the PC revolution possible. The success of modern-day PCs and the Microsoft empire was built on interactive applications.
I suspect that the success of cell phones and tablets will be built on a single attribute. But what that attribute is, I do not know. It may be portability. It may be location-aware capabilities. It may be a different level of interactivity.
I *know* -- that is, I feel very strongly -- that mobile/cloud is going to have a brilliant future.
I also feel that the key applications for mobile/cloud will be different from traditional PC applications, just as PC applications are different from mainframe applications. Any attempt to port PC applications to mobile/cloud will be doomed to failure, just as mainframe applications failed to port to PCs.
Mainframe applications live on, in their batch mode glory, to this day. Large companies and governments need accounting systems, and will continue to need them. PC applications will live through the mobile/cloud revolution, although some may fade; PowerPoint-style presentations may be better served on synchronized mobile devices than with a single PC and a projector.
Expect mobile/cloud apps to surprise us. They will not be word processors and spreadsheets. (Nor will they be accounting systems.) They will be more like Twitter and Facebook, with status updates and connections to our network of people.
Labels:
cloud,
mobile,
mobile/cloud,
PC revolution,
post-PC era
Thursday, April 4, 2013
Means of production and BYOD
In agrarian societies, the means of production is the farm: land of some size, crops, livestock, tools, seeds, workers, and capital.
In industrial societies, the means of production is the factory: land of some size, a building, tools, raw materials, power sources, access to transportation, and capital.
In the industrial age, capitalists owned the means of production. These things, the means of production, cost money. To be successful, capitalists had to be wealthy.
But the capitalists of yore missed something: You don't need to own all of the means of production. You need only a few key parts.
This should be obvious. No company is completely stand-alone. Companies outsource many activities, from payroll to the generation of electricity.
Apple has learned this lesson. Apple designs the products and hires other companies to build them.
The "Bring Your Own Device" idea (BYOD) is an extension of outsourcing. It pushes the ownership of some equipment onto workers. Instead of a company purchasing the PC, operating system, word processor, and spreadsheet for an employee, the employee acquires their own.
Shifting to BYOD means giving some measure of control (and responsibility) to employees. Workers can select their own device, their own operating system, and their own applications. Some businesses want to maintain control over these choices, and they miss the point of BYOD. They want to dictate the specific software that employees must use.
But, as a business owner who is outsourcing tasks to employees, do you care about the operating system they use? Do you care about the specific type of PC? Or the application (as long as you can read the file)?
BYOD is possible because the composition of documents and spreadsheets (and e-mails, and calendars) is not a key aspect of business. It's the ideas within those documents and spreadsheets that make the business run. It's the information that is the vital means of production.
For decades we have focused on the hardware of computing: processor speed, memory capacity, storage size. I suppose that it started with tabulating machines, and the ability of one machine to process cards faster than another vendor's machine. It continued with mainframes, with minicomputers, and with PCs.
BYOD shows us that hardware is not a strategic advantage. Nor is commodity software -- anyone can have word processors and spreadsheets. Any company can have a web site.
The advantage is in data, in algorithms, and in ideas.