User interfaces are about to become simpler.
This change is driven by the rise of mobile devices. The UI for mobile apps must be simpler. A cell phone has a small screen and (when needed) a virtual keyboard. The user interacts through the touchscreen, not a keyboard and mouse. Tablets, while larger and often accompanied by a real (small-form) keyboard, also interact through the touchscreen.
For years, PC applications have accumulated features and complexity. Consider Microsoft Word and Microsoft Excel: each version has introduced new features. The 2007 versions introduced the "ribbon menu", an adjustment to the UI to accommodate the growing number of features.
Mobile devices force us to simplify the user interface. Indirectly, they force us to simplify applications. In the desktop world, the application with the most features was (generally) considered the best. In the mobile world, that calculation changes. Instead of selecting applications based on the raw number of features, we select them based on simplicity and ease of use.
The trend is ironic, as the early versions of Microsoft Windows were advertised as easy to use (a common adjective was "intuitive"). Yet while "intuitive" and "easy", Windows was never designed to be simple; configuration and administration were always complex. That complexity remained even with networks and Active Directory -- the complexity was centralized but not eliminated.
Mobile apps don't have to be simple, but simple apps are the better sellers. Simple apps fit better on small screens, and they fit better into the mobile/cloud processing model. Even games demonstrate this trend (compare "Angry Birds" with PC games like "Doom" or even "Minesweeper").
The move to simple apps on mobile devices will flow back to web applications and PC applications. The trend of adding features will reverse. This will affect the development of applications and the use of technology in offices. Job requisitions will list user interface (UI) and user experience (UX) skills. Office workflows will become more granular. Large enterprise systems (like ERP) will mutate into collections of apps and collections of services, allowing mobile apps, web apps, and PC apps to access corporate data and perform work.
Sellers of PC applications will have to simplify their current offerings. The change will affect both the user interface and the internal organization of the application. Such a change is non-trivial and requires some hard decisions. Some features may be dropped; others may be deferred to a future version. Every feature must be considered and placed in either the mobile client or the cloud back-end, and such decisions must account for many aspects of mobile/cloud design (network accessibility, storage, and availability of data on multiple devices, among others).
Friday, May 31, 2013
Monday, May 27, 2013
Airships and software
Airships (that is, dirigibles, blimps, and balloons) and software have more in common than one might think. Yet we treat them as two very different things, and we even think about them in very different ways.
Both airships and software must be engineered, and the designs must account for various trade-offs. For airships, one must consider the weight of the materials, the shape, and the size. A larger airship weighs more, yet has more buoyancy and can carry more cargo. Yet a larger airship is affected more by wind and is less maneuverable. Lighter materials tend to be less durable than heavy ones; the trade-off is long-term cost against short-term performance.
The design of software has trade-offs: some designs are cheaper to construct in the short term yet more expensive to maintain in the long term. Some programming languages allow for the better construction of a system -- and others even require it. Comparing C++ to Java, one can see that Java encourages better designs up front, while C++ merely allows for them.
I have observed a number of shops and a number of projects. Most (if not all) have given little thought to the programming language. The typical project picks a language based on the team's current knowledge -- or possibly on the latest "fad" language.
Selecting the programming language is important -- more important than current knowledge or the current fad. I admit that learning a new language has a cost. Yet picking a language based on nothing more than the team's current knowledge seems a poor way to run a project.
The effect does not stop at languages. We (as an industry) tend to use what we know for many aspects of projects: programming languages, databases, front-end design, and even project management. If we (as an organization) have been using the waterfall process, we tend to keep using it. If we (as an organization) have been using an SQL database, we tend to keep using it.
Using "what we know" makes some sense. It is a reasonable course of action -- in some situations. But there comes a time when "the way we have always done it" does not work. There comes a time when new technologies are more cost-effective. Yet sticking with "what we know" means we have no experience with "the new stuff".
If we have no experience with the new technologies, how do we know when they are cost-effective?
We have (as an industry) been using relative measures of technologies. We don't know the absolute cost of the technologies for software. (We do know the absolute cost of technologies for airships -- an airship needs so many yards of material and weighs so much. It carries so much. It has so much resistance and so much wind shear.)
Actually, the problem is worse than relative measures. We have no measures at all (for many projects) and we rely on our gut feelings. A project manager picks the language and database and project process based on his feelings. There are no metrics!
I'm not sure why we treat software projects so differently from other engineering projects. I strongly believe that it cannot continue. The cost of picking the wrong language, the wrong database, or the wrong management process will increase over time.
We had better start measuring things. The sooner we do, the sooner we can learn the right things to measure.
Saturday, May 25, 2013
Best practices are not best forever
Technology changes quickly. And with changes in technology, our views of technology change, and these views affect our decisions on system design. Best practices in one decade may be inefficient in another.
A recent trip to the local car dealer made this apparent. I had brought my car in for routine service, and the mechanic and I reviewed the car's maintenance history. The dealer has a nice, automated system to record all maintenance on vehicles. It has an on-line display and prints nicely-formatted maintenance summaries. A "modern" computer system, probably designed in the 1980s and updated over the years. (I put the word "modern" in quotes because it clearly runs on a networked PC with a back-end database, but it does not have tablet or phone apps.)
One aspect of this system is the management of data. After some amount of time (it looks like a few years), maintenance records are removed from the system.
Proper system design once included the task of storage management. A "properly" designed system (one that followed "best practices") would manage data for the users. Data would be retained for a period of time but not forever. One had to erase information, because the total available space was fixed (or additional space was prohibitively expensive), and programming the system to manage space was more effective than asking people to erase the right data at the right time. (People tend to wait until all free storage is used and then binge-erase more data than necessary.)
That was the best practice -- at the time.
Over time, the cost of storage dropped. And over time, our perception of the cost of storage dropped.
Google has a big role in our new perception. With the introduction of GMail, Google gave each account holder a full gigabyte of storage. A full gigabyte! The announcement shocked the industry. Today, it is a poor e-mail service that cannot promise a gigabyte of storage.
Now, Flickr is giving each account holder a full terabyte of storage. A full terabyte! Even I am surprised at the decision. (I also think that it is a good marketing move.)
Let's return to the maintenance tracking system used by the car dealer.
Such quantities of storage vastly surpass the meager storage used by a few maintenance records. Maintenance records each take a few kilobytes of data (it's all text, and only a few pages). A full megabyte of data would hold all maintenance records for several hundred repairs and check-ups. If the auto dealer assigned a full gigabyte to each customer, they could easily hold all maintenance records for the customer, even if the customer brought the car for repairs every month for an extended car-life of twenty years!
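A quick back-of-the-envelope calculation bears this out. In the sketch below, the per-record size and visit frequency are assumptions for illustration, not measurements from any dealer's system:

    # Rough estimate of storage for one customer's maintenance history.
    # The sizes and visit counts below are assumed for illustration only.

    KB = 1024
    record_size = 4 * KB        # assume a few kilobytes of text per service record
    visits_per_year = 12        # a repair or check-up every month
    car_life_years = 20         # an extended car life

    records = visits_per_year * car_life_years   # 240 records
    total_bytes = records * record_size          # about 960 KB -- under one megabyte

    print(f"{records} records, roughly {total_bytes / KB:.0f} KB in total")
    print(f"Share of a 1 GB allotment: {total_bytes / (1024 ** 3):.4%}")

Even with generous assumptions, a lifetime of maintenance records fits in a tiny fraction of one percent of a single gigabyte.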
Technology has changed. Storage has become inexpensive. Today, it would be a poor practice to design a system that auto-purges records. You spend more on the code and the tests than you save on the reduction in storage costs. You lose older customer data, preventing you from analyzing trends over time.
The new best practices of big data, data science, and analytics require data. Old data has value, and the value is more than the cost of storage.
Best practices change over time. Be prepared for changes.
Labels: best practices, cost/benefit, new technology, storage, system design
Monday, May 20, 2013
Where do COBOL programmers come from?
In the late Twentieth Century, COBOL was the standard language for business applications. There were a few other contenders (IBM's RPG, assembly language, and DEC's DIBOL) but COBOL was the undisputed king of the business world. If you were running a business, you used COBOL.
If you worked in the data processing shop of a business, you knew COBOL and programmed with it.
If you were in school, you had a pretty good chance of being taught COBOL. Not everywhere, and not during the entire second half of the century. I attended an engineering school; we learned FORTRAN, Pascal, and assembly language. (We also used the packages SPSS and CSMP.)
Schools have, for the most part, stopped teaching COBOL. A few do, but most moved on to C++, or Java, or C#. A number are now teaching Python.
Businesses have lots of COBOL code. Lots and lots of it. And they have no reason to convert that code to C++, or Java, or C#, or the "flavor of the month" in programming languages. Business code is often complex, and working business code is precious. One modifies the code only when necessary, and one converts a system to a new language only at the utmost need.
But that code, while precious, does have to be maintained. Businesses change and those changes require fixes and enhancements to the code.
Those changes and enhancements are made by COBOL programmers.
Of which very few are being minted these days -- or have been for the past two decades.
Which means that COBOL programmers are, as a resource, dwindling.
Now, I recognize that the production of COBOL programmers has not ceased. There are three sources that I can name with little thought.
First are the schools (real-life and on-line) that offer courses in COBOL. Several colleges still teach it, and several on-line colleges offer it.
Second are offshore programming companies. Talent is available through outsourcing.
Third is existing programmers who learn COBOL. A programmer who knows Visual Basic and C++, for example, may choose to learn COBOL (perhaps through an on-line college).
Yet I believe that, in any given year, the number of new COBOL programmers is less than the number of retiring COBOL programmers. Which means that the talent pool is now at risk, and therefore business applications may be at risk.
For many years businesses relied on the ubiquitous nature of COBOL to build their systems. I'm sure that the managers considered COBOL to be a "safe" language: stable and reliable for many years. And to be fair, it was. COBOL has been a useful language for almost half a century, a record that only FORTRAN can challenge.
The dominance of COBOL drove a demand for COBOL programmers, which in turn drove a demand for COBOL training. Now, competing languages are pulling talent out of the "COBOL pool", starving the training. Can businesses be far behind?
If you are running a business, and you rely on COBOL, you may want to think about the future of your programming talent.
* * * * *
Such an effect is not limited to COBOL. It can happen to any popular language. Consider Visual Basic, a dominant language in Windows shops in the 1990s. It has fallen out of favor, replaced by C#. Or consider C++, which like COBOL has a large base of installed (and working) code. It, too, is falling out of favor, albeit much more slowly than Visual Basic or COBOL.
Sunday, May 19, 2013
The real reason we are angry with Windows 8
Windows 8 has made a splash, and different people have different reactions. Some are happy, some are confused, and some are angry.
The PC revolution was about control, and about independence. PCs, in the early days, were about "sticking it to the man" -- being independent from the big guys. Owning a PC (or a microcomputer) meant that we were the masters of our fate. We controlled the machine. We decided what to do with it. We decided when to use it.
But that absolute control has been eroded over time.
- With CP/M (and later with MS-DOS and even later with Windows), we agreed to use a common operating system in exchange for powerful applications.
- With Wordstar (and later with Lotus 1-2-3 and even later with Word and Excel) we agreed to use common applications in exchange for the ability to share documents and spreadsheets.
- With Windows 3.1, we agreed to use the Microsoft stack in exchange for network drivers and access to servers.
- With Windows 2000 SP3, we had to accept updates from Microsoft. (The license specified that.)
We have gradually, slowly, given up our control in exchange for conveniences.
Now, we have come to the realization that we are not in control of our computers. Our iPads and Android tablets update themselves, and we lack total control (unless we jail-break them).
I think what really makes people mad is the realization that they are not in control. We thought that we were in control, but we're not. The vendor calls the shots.
We thought that we were in control. We thought that we called the shots.
Windows 8, with its new user interface and its new approach to apps, makes it clear that we are not.
And we're angry when we realize it.
Thursday, May 16, 2013
Life without files
Mobile devices (tablets and phones) are different from PCs in that they do not use files. Yes, they have operating systems that use files, and Android has file managers which let you access some or all files on the device, but the general experience is one without files.
The PC experience was centered around files. Files were the essential element of a PC, from the boot files to word processor documents to temporary data. Everything was stored in a file. The experience of using a PC was running an application, loading data from a file, and manipulating that data. When finished, one saved the information to a file.
Unix (and Linux) takes the same approach. Microsoft Windows presented data in a series of graphical windows, but it stored data in files. The "File" menu, with its options of "New", "Open", "Save", "Save As", and (oddly) "Exit", was standard in the UI.
The mobile experience relies on apps, not files. The user runs an app, and the app somehow knows how to get its data. Much of the time, that data is stored in the cloud, on servers maintained by the app provider. There is no "File" menu -- or any menu.
Shifting the experience from files to apps is a subtle but significant change. One tends not to notice it, since keeping track of files was tedious and required discipline in structuring directories. When apps load their data automatically, one doesn't really mind being freed from worrying about file names and locations.
With the "loss" of files -- or, more specifically, the loss of access to app data outside of the app -- one becomes more conscious of sharing information between apps. In the PC age, one didn't explicitly share information between applications; one stored data in a public place and let other applications have their way with it.
This "data available to all" makes some sense, especially for developers. Developers use multiple tools on their data -- source code -- and need it shared liberally. Editors, compilers, debuggers, lint utilities, and metrics tools all need unfettered access to the source code.
But the "free for all" model does not work so well for other data. In most shops, data is used in a single application (most often Microsoft Word or Microsoft Excel) and occasionally sent via e-mail to others. But e-mailing a document is merely a crude way of sharing the data; a better sharing mechanism could eliminate a lot of e-mails.
Sharing data and allowing others to update it (or not) requires an infrastructure, a mechanism to make data available and control access. It requires data to be structured in a way that it can be shared -- which may be different from the internal format used by the app. Yet defining an external format may be good for us in general: we can eliminate the task of reverse-engineering proprietary file formats.
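As a small, hypothetical sketch of what such an external format might look like -- the field names and structure here are invented for illustration, not taken from any particular app:

    import json

    # Hypothetical example: an app translates its internal record into a
    # documented, app-neutral format so other apps (and other people) can
    # read it without reverse-engineering a proprietary file.
    internal_record = {"cust_id": 1017, "svc_dt": "2013-05-25", "notes": "oil change"}

    external_record = {
        "customer_id": internal_record["cust_id"],
        "service_date": internal_record["svc_dt"],   # ISO 8601 date
        "description": internal_record["notes"],
    }

    print(json.dumps(external_record, indent=2))

The point is not the particular fields but the separation: the internal representation can change freely, while the shared, documented format stays stable for other apps to consume.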
Life without files. Because we want to get work done, not read and save files.
Tuesday, May 14, 2013
IT must be strategic as well as tactical
Anyone running an IT shop (or running a company) must divide their efforts for IT into two categories: strategic and tactical.
Strategic efforts help the company in, well, strategic ways. They introduce new products to counter (or surpass) the competition. They create new markets. They do big, bold things -- sometimes risky -- for big returns.
Tactical efforts also help the company, but in smaller and less flashy ways. They improve internal processes. They reduce waste. They increase operating efficiencies. They do not do big, bold things, but instead do small, meek things for small returns.
But to do big, bold things you need a team that has its act together. You need the infrastructure in place, and you need a team that can build, operate, and maintain that infrastructure -- while implementing the big strategic stuff.
If your networks are reliable, if your storage is planned, allocated, monitored, and available, if your people have access to the information they need (and only that information), if your systems are updated on time, if your sysadmins have the right tools and your developers have the right tools and your analysts have the right tools and your sales people have the right tools (and they all know how to use them), then you probably have the tactical side working. (I say 'probably' because there are other things that can cause problems. A complete list would be larger than you would want to read in a blog post.)
If you're having problems with the strategic items, look to see how well the tacticals are doing. If *they* are having problems, start fixing those. And by fixing, I mean get the right people on the team, give them the authority to do their jobs, and fund them to get tools and technologies in place. Make sure that your people can get their "real work" done before you start doing grand things.
If the tacticals are working and the strategic items are not, then you don't have a technology problem. (Well, you might, if your strategy is to build something that is not feasible. But even then you would know. When your tacticals are working you have competent people who will tell you your strategy is not possible.)
Bottom line: get the little things working and you will have a reliable platform for larger efforts.
Labels: project governance, project management, strategy, tactics
Wednesday, May 8, 2013
Computers are temples, but tablets are servants
There is an interesting psychological difference between "real" computers (mainframes, servers, and PCs) and the smartphones and tablets we use for mobile computing.
In short, "real" computers are temples of worship, and mobile computers are servants.
Mainframe computers have long attracted the metaphor of religion, with their attendants referred to as "high priests". Personal computers have not seen such comparisons, but I think the "temple" metaphor holds. (Or perhaps we should say "shrine".)
"Real" computers are non-mobile. They are fixed in place. When we use a computer, we go to the computer. The one exception is laptops, which we can consider to be portable shrines.
Mobile computers, in contrast, come with us. We do not go to them; they are nearby and ready for our requests.
Tablets and smartphones are intimate. They come with us to the grocery store, the exercise club, and the library. Mainframes of course do not come with us anywhere, and personal computers stay at home. Laptops occasionally come with us, but only with significant effort. (Carry bag, laptop, power adapter and cable, extra VGA cable, VGA-DVI adapter, and goodness-knows-what.)
It's nice to visit a temple, but it's nicer to have a ready and capable servant.
Monday, May 6, 2013
A Risk of Big Data: Armchair Statisticians
In the mid-1980s, laser printers became affordable, word processor software became more capable, and many people found that they were able to publish their own documents. They proceeded to do so. Some showed restraint in the use of fonts; others created documents that were garish.
In the mid-1990s, web pages became affordable, web page design software became more capable, and many people found that they were able to create their own web sites. They proceeded to do so. Some showed restraint in the use of fonts, colors, and the blink tag; others created web sites that were hideous.
In the mid-2010s, storage became cheap, data became collectable, analysis tools became capable, and I suspect many people will find that they are able to collect and analyze large quantities of data. I further predict that many will do so. Some will show restraint in their analyses; others will collect some (almost) random data and create results that are less than correct.
The biggest risk of Big Data may be the amateur. Professional statisticians understand the data, understand the methods used to analyze the data, and understand the limits of those analyses. Armchair statisticians know enough to analyze the data but not enough to criticize the analysis. This is a problem, because it is easy to misinterpret the results.
Typical errors are:
- Omitting relevant data (or including irrelevant data) due to incorrect "select" operations.
- Identifying correlation as causation. (In an economic downturn, the unemployment rate increases, as do payments for unemployment insurance. But the insurance payments do not cause the unemployment rate; both are driven by the economy.)
- Identifying the reverse of a causal relationship (Umbrellas do not cause rain.)
- Improper summary operations (Such as calculating an average of a quantized value like processor speed. You most likely want either the median or the mode.)
It is easy to make these errors, which is why the professionals take such pains to evaluate their work. Note that none of these are obvious in the results.
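The last of these pitfalls, at least, is easy to demonstrate with a small sketch. The processor speeds below are made up for illustration:

    import statistics

    # Made-up sample: processor speeds (GHz) come in a few quantized steps.
    speeds = [1.6, 1.6, 2.0, 2.0, 2.4, 2.4, 2.4, 2.4, 3.1]

    print("mean:  ", round(statistics.mean(speeds), 2))   # 2.21 -- a speed no machine actually has
    print("median:", statistics.median(speeds))           # 2.4  -- an actual speed in the sample
    print("mode:  ", statistics.mode(speeds))             # 2.4  -- the most common actual speed

The mean looks plausible on a slide, but it describes a machine that does not exist; the median and mode describe machines that do.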
When the cost of performing these analyses was high, only the professionals could play. The cost of such analyses is dropping, which means that amateurs can play. And their results will look (at first glance) just as pretty as the professionals'.
In desktop publishing and web page design, it was easy to separate the professionals and the amateurs. The visual aspects of the finished product were obvious.
With big data, it is hard to separate the two. The visual aspects of the final product do not show the workmanship of the analysis. (They show the workmanship of the presentation tool.)
Be prepared for the coming flood of presentations. And be prepared to ask some hard questions about the data and the analyses. It is the only way you will be able to tell the wheat from the chaff.
Labels: big data, data analysis, desktop publishing, web page design
Thursday, May 2, 2013
Our fickleness on the important aspects of programs
Over time, we have changed the attributes we desire in programs. If we divide the IT age into four eras, we can see this change. Let's consider the four eras to be mainframe, PC, web, and mobile/cloud. These four eras used different technology and different languages, and praised different accomplishments.
In the mainframe era, we focussed on raw efficiency. We measured CPU usage, memory usage, and disk usage. We strove to have enough CPU, memory, and disk, with some to spare but not too much. Hardware was expensive, and too much spare capacity meant that you were paying for more than you needed.
In the PC era we focussed not on efficiency but on user-friendliness. We built applications with help screens and menus. We didn't care too much about efficiency -- many people left PCs powered on overnight, with no "jobs" running.
With web applications, we focussed on globalization, with efficiency as a sub-goal. The big effort was in the delivery of an application to a large number of users. This meant translation into multiple languages, the "internationalization" of an application, support for multiple browsers, and support for multiple time zones. But we didn't want to overload our servers, either, so early Perl CGI applications were quickly converted to C or other languages for performance.
With applications for mobile/cloud, we desire two aspects: For mobile apps (that is, the 'UI' portion), we want something easier than "user-friendly". The operation of an app must not merely be simple, it must be obvious. For cloud apps (that is, the server portion), we want scalability. An app must not be monolithic, but assembled from collaborative components.
The objectives for systems vary from era to era. Performance was a highly measured aspect in the mainframe era, and almost ignored in the PC era.
The shift from one era to another may be difficult for practitioners. Programmers in one era may be trained to "optimize" their code for the dominant aspect. (In the mainframe era, they would optimize for performance.) A succeeding era would demand other aspects in their systems, and programmers may not be aware of the change. Thus, a highly-praised mainframe programmer with excellent skills at algorithm design, when transferred to a PC project, may find that his skills are not desired or recognized. His code may receive a poor review, since the expectation for PC systems is "user friendly" and his skills from mainframe programming do not provide that aspect.
Similarly, a skilled PC programmer may have difficulties when moving to web or mobile/cloud systems. The expectations for user interface, architecture, and efficiency are quite different.
Practitioners who start with a later era (for example, the 'young turks' starting with mobile/cloud) may find it difficult to comprehend the reasoning of programmers from an earlier era. Why do mainframe programmers care about the order of mathematical operations? Why do PC programmers care so much about in-memory data structures, to the point of writing their own?
The answers are that, at the time, these were important aspects of programs. They were pounded into the programmers of earlier eras, to such a degree that those programmers design their code without thinking about these optimizations.
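To make the operation-ordering habit concrete, here is a small, hypothetical sketch of the kind of factoring an earlier-era programmer might do by reflex -- rewriting a polynomial in Horner form so each evaluation uses one less multiplication:

    # Hypothetical illustration of an earlier-era habit: reorder (factor) the
    # arithmetic so each evaluation performs fewer multiplications.

    a, b, c = 2.0, -3.0, 5.0
    xs = [0.5, 1.0, 2.5, 4.0]

    naive  = [a * x * x + b * x + c for x in xs]    # three multiplications per point
    horner = [(a * x + b) * x + c for x in xs]      # two multiplications per point

    assert all(abs(n - h) < 1e-9 for n, h in zip(naive, horner))

On modern hardware the savings is rarely worth thinking about; in the mainframe era, with CPU time metered and charged back, it often was.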
Experienced programmers must look at the new system designs and the context of those designs. Mobile/cloud needs scalability, and therefore needs collaborative components. The monolithic designs that optimized memory usage are unsuitable to the new environment. Experienced programmers must recognize their learned biases and discard those that are not useful in the new era. (Perhaps we can consider this a problem of cache invalidation.)
Younger programmers would benefit from a deeper understanding of the earlier eras. Art students study the conditions (and politics) of the old masters. Architects study the buildings of the Greeks, Romans, and medieval kingdoms. Programmers familiar with the latest era, and only the latest era, will have a difficult time communicating with programmers of earlier eras.
Each era has objectives and constraints. Learn about those objectives and constraints, and you will find a deeper appreciation of programs and a greater ability to communicate with other programmers.