As a new technology, tablets have a much harder job than PCs had.
When individuals and companies started using personal computers, there was either no established IT infrastructure, or the established infrastructure was separate from the realm in which PCs operated. For an individual, that realm was the home, and the PC was the first computing device. (It may have been something other than an IBM PC; perhaps a Commodore C-64 or a Radio Shack TRS-80. But it was the only computer in the house.)
Companies may have had mainframe computers, or timesharing services, or even minicomputers. Some may have had no computers. For companies with no computers, the PC was the first computer. For companies with larger, "real" computers, the PC occupied a different computing area.
Mainframes and minicomputers were used for financial applications. PCs were used, initially, as replacements for typewriters and word processing systems. Over time we expanded the role of the PC into that of a computing workstation, but PCs were still isolated from each other, processing data that was not on the mainframe.
PCs and their applications could grow without interference from the high priests of the mainframe. The mainframe programmers and system analysts were busy with "real" business applications and not concerned with fancy electric typewriters. (And as long as PCs were fancy electric typewriters, the mainframe programmers and analysts were right to ignore them.)
PC applications grew, in size and number. Eventually they started doing "real" work. And shortly after we used PCs to do real business work, we wanted to share data with other PCs and other systems -- such as those that ran on mainframes.
That desire led to a large change in technology. We moved away from the mainframe model of central processing with simple terminals. We looked for ways to bridge the "islands of automation" that PCs created. We built networking for PCs, scavenging existing technologies and creating a few new ones. We connected PCs to other PCs. We connected PCs to minicomputers. We connected PCs to mainframes.
Our connections were not limited to hardware and low-level file transfers. We wanted to connect a PC application to a mainframe application. We wanted to exchange information, despite PCs using ASCII and mainframes using EBCDIC.
After decades of research, experimentation, and work, we arrived at our current model of connected computing. Today we use mostly PCs and servers, with mainframes performing key functions. We have the hardware and the networks. We have character sets (usually Unicode) and protocols to exchange data reliably.
It is at this point that tablets arrive on the scene.
Where PCs had a wide open field with little oversight, tablets come to the table with a well-defined infrastructure, both technical and bureaucratic. The tablet does not replace a stand-alone device like a typewriter; it replaces (or complements) a connected PC. The tablet does not have new applications of its own; it performs the same functions as PCs.
In some ways, the existing infrastructure makes it easy for tablets to fit in. Our networking is reliable, flexible, and fast. Tablets can "plug in" to the network quickly and easily.
But tablets have a harder job than PCs did. The "bureaucracy" ignored PCs when they arrived; it is not ignoring tablets. The established IT support groups define rules for tablets to follow and standards for them to meet. Even the purchasing groups are aware of tablets; one cannot sneak a tablet into an organization under the radar.
Another challenge is the connectedness of applications. Our systems talk to each other, sending and receiving data as they need it. Sometimes this is through plain files, sometimes through e-mail, and sometimes directly. To be useful, tablets must exchange data with those systems. They cannot be stand-alone devices. (To be fair, a stand-alone PC with no network connection would be a poor fit in today's organizations too.)
But the biggest challenge is probably our mindset. We think of tablets as small, thin, mouseless PCs, and that is a mistake. Tablets are small, they are thin, and they are mouseless. But they are not PCs.
PCs are much better for the composition of data, especially text. Tablets are better for the collection of certain types of data (photographs, location) and the presentation of data. These are two different spheres of automation.
We need new ideas for tablets, new approaches to computation and new expectations of systems. We need to experiment with tablets, to let these new ideas emerge and prove themselves. I fully expect that most new ideas will fail. A few will succeed.
Forcing tablets into the system designed for PCs will slow the experiments. Tablets must "be themselves". The challenge is to change our bureaucracy and let that happen.
Wednesday, May 21, 2014
Mobile apps don't need tech support
I recently "fired" the New York Times. By "fired", I mean that I cancelled my subscription. The reason? Their app did not work on my Android tablet.
The app should have worked. I have a modern Android tablet running Android 4.1 and Google Play services. (I can understand the app not working on a low-end Android tablet that is not certified for Google Play services. But this was not my situation. The app should have worked.)
The app failed -- consistently, and over a period of time. It wasn't a simple, one-time failure. (The app worked on my Android smart phone, so I know my account was in good standing.)
So I uninstalled the app and cancelled my subscription.
A few musings on software and tech support.
Cost: A software package for Windows is an expensive proposition. Most software -- software that is sold -- costs upwards of hundreds of dollars. Most enterprise software has not only a purchase cost but also a support cost -- an annual fee. With such an investment in the software, I will discard it only when I must. I have a strong incentive to make it work.
The tech support experience: Software companies have, over the years, trained us lowly users to expect an unpleasant experience with technical support. They hide the phone number, pushing you to their web-based documentation and support forums. If you do manage to call, you are greeted with a telephone "torture menu" and long wait times. (But with a significant investment in the software, we are willing to put up with it.)
The equation changes for mobile apps: Mobile apps are much less expensive than PC applications. Many are free; some cost a few dollars.
Customer expectations may be shaped by many things. Software is not always an expensive proposition. Consider:
- The (relatively) low cost of Mac OS
- The appearance of Linux (a free operating system)
- Open source applications (also free)
- Free apps for your smartphone
- Free e-mail from Google and Yahoo
- Unlimited voice and text with your cell phone plan
While I may be willing to wait on the phone for a (probably polite but ineffective) support technician when the software cost is high, I am not willing to invest that time for free or cheap software.
I suspect that there are several doctoral theses lurking in this phenomenon, for students of either psychology or economics.
I decided rather quickly to drop the app for the New York Times. The failing app was a frustrating experience, and I wanted to move on to other things. I may have posted about it on Twitter; I dimly remember a response from the Times folks claiming that their support team could help me. But by the time I got that response, I had already fired them.
Regardless of the economics and the psychology, the message for app makers is clear: Your app must work, with a quality higher than most PC applications. It must work and be simple enough that a person can use it without a manual, without a help system, and without a technical support team.
If it doesn't, your customers will delete the app and walk away. Your revenue from app upgrades or connected services (in my case, the subscription to the New York Times) will drop. You do not get a chance to "save" a customer when they call for support -- they're not going to call.
Labels:
app quality,
economics,
mobile apps,
psychology,
tech support,
user experience
Monday, May 19, 2014
The shift to cloud is bigger than we think
We've been using operating systems for decades. While they have changed over the years, they have offered a consistent set of features: time-slicing of the processor, memory allocation and management, device control, file systems, and interrupt handling.
Our programs ran "under" (or "on top of") an operating system. Our programs were also fussy -- they would run on one operating system and only that operating system. (I'm ignoring the various emulators that have come and gone over time.)
The operating system was the "target", it was the "core", it was the sun around which our programs orbited.
So it is rather interesting that the shift to cloud computing is also a shift away from operating systems.
Not that cloud computing is doing away with operating systems. Cloud computing coordinates the activities of multiple, usually virtualized, systems, and those systems run operating systems. What changes in cloud computing is the programming target.
Instead of a single computer, a cloud system is composed of multiple systems: web servers, database servers, and message queues, typically. While those servers and queues must run on computers (with operating systems), we don't care about them. We don't insist that they run any specific operating system (or even use a specific processor). We care only that they provide the necessary services.
In cloud computing, the notion of "operating system" fades into the infrastructure.
As cloud programmers, we don't care if our web server is running Windows. Nor do we care if it is running Linux. (The system administrators do care, but I am taking a programmer-centric view.) We don't care which operating system manages our message queues.
The level of abstraction for programmers has moved from operating system to web services.
That is a significant change. It means that programmers can focus on a higher level of work.
Hardware-tuned programming languages like C and C++ will become less important. Not completely forgotten, but used only by the specialists. Languages such as Python, Ruby, and Java will be popular.
Operating systems will be less important. They will be ignored by the application level programmers. The system architects and sysadmins, who design and maintain the cloud infrastructure, will care a lot about operating systems. But they will be a minority.
The change to services is perhaps not surprising. We long ago shifted away from processor-specific code, burying that work in our compilers. COBOL and FORTRAN, among the earliest high-level languages, were designed to run on different processors. Microsoft insulated us from the Windows API with MFC and later the .NET framework. Java separated us from the processor with its virtual machine. Now we take the next step and bury the operating system inside of web services.
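As a rough illustration of what programming against a service (rather than against an operating system) looks like, here is a minimal sketch in C++ using libcurl; the endpoint URL and the JSON payload are invented for illustration, and error handling is reduced to a single check. Nothing in the code knows or cares what operating system runs on the other end of the connection.
#include <cstdio>
#include <curl/curl.h>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *request = curl_easy_init();
    if (request)
    {
        // A hypothetical order service; the client neither knows nor cares
        // what operating system (or processor) sits behind the URL.
        curl_easy_setopt(request, CURLOPT_URL, "https://example.com/api/orders");
        curl_easy_setopt(request, CURLOPT_POSTFIELDS, "{\"item\": 42, \"quantity\": 1}");

        struct curl_slist *headers = NULL;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        curl_easy_setopt(request, CURLOPT_HTTPHEADER, headers);

        CURLcode result = curl_easy_perform(request);
        if (result != CURLE_OK)
            std::fprintf(stderr, "request failed: %s\n", curl_easy_strerror(result));

        curl_slist_free_all(headers);
        curl_easy_cleanup(request);
    }
    curl_global_cleanup();
    return 0;
}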
Operating systems won't go away. But they will become less visible, less important in conversations and strategic plans. They will be more of a commodity and less of a strategic advantage.
Our programs ran "under" (or "on top of") an operating system. Our programs were also fussy -- they would run on one operating system and only that operating system. (I'm ignoring the various emulators that have come and gone over time.)
The operating system was the "target", it was the "core", it was the sun around which our programs orbited.
So it is rather interesting that the shift to cloud computing is also a shift away from operating systems.
Not that cloud computing is doing away with operating systems. Cloud computing coordinates the activities of multiple, usually virtualized, systems, and those systems run operating systems. What changes in cloud computing is the programming target.
Instead of a single computer, a cloud system is composed of multiple systems: web servers, database servers, and message queues, typically. While those servers and queues must run on computers (with operating systems), we don't care about them. We don't insist that they run any specific operating system (or even use a specific processor). We care only that they provide the necessary services.
In cloud computing, the notion of "operating system" fades into the infrastructure.
As cloud programmers, we don't care if our web server is running Windows. Nor do we care if it is running Linux. (The system administrators do care, but I am taking a programmer-centric view.) We don't care which operating system manages our message queues.
The level of abstraction for programmers has moved from operating system to web services.
That is a significant change. It means that programmers can focus on a higher level of work.
Hardware-tuned programming languages like C and C++ will become less important. Not completely forgotten, but used only by the specialists. Languages such as Python, Ruby, and Java will be popular.
Operating systems will be less important. They will be ignored by the application level programmers. The system architects and sysadmins, who design and maintain the cloud infrastructure, will care a lot about operating systems. But they will be a minority.
The change to services is perhaps not surprising. We long ago shifted away from processor-specific code, burying they work in our compilers. COBOL and FORTRAN, the earliest languages, were designed to run on different processors. Microsoft insulated us from the Windows API with MFC and later the .NET framework. Java separated us from the processor with its virtual machine. Now we take the next step and bury the operating system inside of web services.
Operating systems won't go away. But they will become less visible, less important in conversations and strategic plans. They will be more of a commodity and less of a strategic advantage.
Labels:
C,
C++,
cloud computing,
operating systems,
programming,
python,
ruby
Sunday, May 18, 2014
How to untangle code: Use variables for only one purpose
Early languages (COBOL, FORTRAN, BASIC, Pascal, C, and others) forced variable declarations into a single section of code. COBOL was the strictest taskmaster, with data declarations in sections of code entirely separate from the procedural section.
With limited memory, it was often necessary to re-use variables. FORTRAN assisted in the efficient use of memory with the 'EQUIVALENCE' directive, which let one declare variables that shared the same memory locations.
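For readers who never wrote FORTRAN, the closest analogue in C and C++ is a union, in which two names deliberately share the same storage. A minimal sketch of the idea (the variable names are invented for illustration):
#include <cstdio>

// Roughly what EQUIVALENCE provided: two names for one memory location.
// Writing through one name overwrites the value seen through the other.
union shared_storage
{
    int record_count;
    float discount_rate;
};

int main()
{
    shared_storage s;            // in C, write: union shared_storage s;
    s.record_count = 42;
    std::printf("record_count: %d\n", s.record_count);
    s.discount_rate = 0.5f;      // clobbers the bytes that held record_count
    std::printf("discount_rate: %f\n", s.discount_rate);
    return 0;
}
The memory saving came with the same hazard: assigning to one name silently destroys the value held by the other.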
Today, the situation has changed. Memory is cheap and plentiful. It is no longer necessary to use variables for more than one purpose. Our languages no longer have EQUIVALENCE statements -- something for which I am very grateful. Modern languages (including C++, C#, Java, Perl, Python, Ruby, and even the later versions of C) allow us to declare variables when we need them; we are not limited to declaring them in a specific location.
Using variables for more than one purpose is still tempting, but not necessary. Modern languages allow us to declare variables as we need them, and use different variables for different purposes.
Suppose we have code that calculates the total expenses and total revenue in a system.
Instead of this code:
void calc_total_expense_and_revenue()
{
    int i;
    double amount;

    amount = 0;
    for (i = 0; i < 10; i++)
    {
        amount += calc_expense(i);
    }
    store_expense(amount);

    amount = 0;
    for (i = 0; i < 10; i++)
    {
        amount += calc_revenue(i);
    }
    store_revenue(amount);
}
we can use this code:
void calc_total_expense_and_revenue()
{
    double expense_amount = 0;
    for (unsigned int i = 0; i < 10; i++)
    {
        expense_amount += calc_expense(i);
    }
    store_expense(expense_amount);

    double revenue_amount = 0;
    for (unsigned int i = 0; i < 10; i++)
    {
        revenue_amount += calc_revenue(i);
    }
    store_revenue(revenue_amount);
}
I much prefer the second version. Why? Because the second version cleanly separates the calculation of expenses from the calculation of revenue. In fact, the separation is so clean that we can break the function into two smaller functions:
void calc_total_expense()
{
    double expense_amount = 0;
    for (unsigned int i = 0; i < 10; i++)
    {
        expense_amount += calc_expense(i);
    }
    store_expense(expense_amount);
}

void calc_total_revenue()
{
    double revenue_amount = 0;
    for (unsigned int i = 0; i < 10; i++)
    {
        revenue_amount += calc_revenue(i);
    }
    store_revenue(revenue_amount);
}
Two small functions are better than one large function. Small functions are easier to read and easier to maintain. Re-using a variable for more than one purpose ties those calculations together; using a separate variable for each purpose lets us separate them into their own functions.
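For completeness, here is a self-contained version of the refactored code. The bodies of calc_expense, calc_revenue, store_expense, and store_revenue are hypothetical stubs, invented only so that the sketch compiles and runs; in a real system they would come from the rest of the application.
#include <cstdio>

// Hypothetical stubs standing in for the real system's functions.
double calc_expense(unsigned int i) { return 100.0 + i; }
double calc_revenue(unsigned int i) { return 150.0 + i; }
void store_expense(double amount) { std::printf("total expense: %.2f\n", amount); }
void store_revenue(double amount) { std::printf("total revenue: %.2f\n", amount); }

void calc_total_expense()
{
    double expense_amount = 0;
    for (unsigned int i = 0; i < 10; i++)
    {
        expense_amount += calc_expense(i);
    }
    store_expense(expense_amount);
}

void calc_total_revenue()
{
    double revenue_amount = 0;
    for (unsigned int i = 0; i < 10; i++)
    {
        revenue_amount += calc_revenue(i);
    }
    store_revenue(revenue_amount);
}

int main()
{
    // Each total can now be computed (and tested) independently.
    calc_total_expense();
    calc_total_revenue();
    return 0;
}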
Thursday, May 15, 2014
The cloud requires change, and risk
If you want to use the cloud, your systems must change. Cloud-based systems will either be new (designed for the cloud) or converted (designed for another platform and then re-designed for the cloud).
If you expect all of your cloud systems to work as designed, without problems, from their first day of operation, you are in for a great disappointment. Cloud-based systems -- real cloud-based systems, not just web applications running on virtualized servers -- are built with distributed services and message queues, not just the traditional application and database.
Cloud computing is a new technology. We're still learning how to use it. The technology "stack" is evolving. Even the folks with experience are still learning about the changes. (In contrast, the technology stacks for Microsoft Windows desktop applications and Java web applications are stable and well-known.)
Since we're learning, we're going to make mistakes. Our designs will be inefficient -- or maybe fail completely. We may encounter problems early on, or late in testing, or even in production. We may find problems with our design when the application runs under heavy load.
Eventually, after enough designs are tried, we will find those that work. (Just as we found the proper designs for Windows programs and client-server systems and web applications.) Once we have the "best practices" for cloud systems, it will be easy (well, relatively easy) to build cloud-based systems.
If you are comfortable with some risk, with some degree of failure, then go ahead and build systems with cloud computing. Try various designs. Learn how the technology works. Revise your design as the technology changes. Be prepared for problems.
If you want to avoid problems, if you want to avoid risk, then wait for the best practices. That means waiting for others to experiment, pioneer, make mistakes, and show us the way.
The second strategy means waiting. It means not being a technology leader. And it means possibly letting a competitor gain the lead.
That's the price for avoiding risk.
Wednesday, May 14, 2014
Cloud computing is good for some apps, but not all
The rush to cloud-based systems has clouded (if you forgive the pun) the judgement of some. Cloud computing is the current shiny new technology, and there is a temptation to move everything to it. But should we?
We can get a better idea of migration strategies by looking at previous generations of new technology. Cloud computing is the latest of a series of technology advances. In each case, the major applications stayed on their technology platform, and the new technology offered new applications.
When PCs arrived, the big mainframe applications (financial applications like general ledger, payroll, inventory, and billing) stayed on mainframes. The applications on PCs were word processing and spreadsheets. Games, too. Later, desktop publishing, e-mail, and presentations emerged. All of these applications were specific to PCs. Versions of the traditional mainframe applications were written for PCs, but they saw little popularity.
When the web arrived, the popular PC applications (word processing, etc.) stayed on PCs. The applications on the web were static web pages and e-commerce. Later, blogging and music sharing apps (Napster) joined the scene.
When smartphones and tablets arrived, Facebook and Twitter jumped from the web to them (sometimes apps do move from one platform to another) but most applications stayed on the web. The popular apps for phones (after Facebook and Twitter) include photography, maps and location (GPS), texting, music, and games.
The pattern is clear: a new technology allows for new types of applications and old applications tend to stay on their original platforms. Sometimes an application will move to a new platform; I suspect that most applications, developed on a platform, are particularly well-suited to that platform. (A form of evolution and specialization, perhaps.)
The analogy to evolution is perhaps not all that inappropriate. New technologies do kill off older technologies. PCs and word processing software killed off typewriters and dedicated word processing systems. PCs and networks killed off minicomputers. Cell phone networks are slowly killing wired telephony.
What does all of this tell us about cloud computing?
There is a lot of interest in cloud computing, and there should be. It is a new model of computing, one that offers reliability, modular system design, and the ability to scale. Forward-looking individuals and organizations are experimenting with it and learning about it. They are running pilot projects, some of which succeed and some of which fail.
Some among us will try to move everything to the cloud. Others will resist cloud computing. Both extremes are futile. Some applications should remain on the web (or even on PCs). Some applications will do well in the cloud.
Let's move forward and learn!