People have noticed that apps for the iPhone (and Android phones) are easier to use than the standard PC applications. They are easier to install, easier to run, and they tend to run without crashing.
Why are the two so different? I think that the answer started in marketing.
PCs and cell phones are advertised differently. Personal computers are sold on power; cell phones are sold on convenience.
In 1981, the IBM PC entered a market that already had other computers, and the vast majority of users were men. The IBM PC competed on power: a faster and more capable CPU, more memory, more pixels on the screen, more sales offices, ... you get the idea.
Software for the IBM PC competed on the same basis. Word processors advertised the number of fonts, spreadsheets advertised the number of cells, compilers advertised the number of features. Advertising in the PC age (for hardware and software) was a game of "mine is bigger than yours".
The Apple iPhone, on the other hand, entered a market devoid of direct competition. Yes, there were other cell phones, but the iPhone set a new standard for cell phones. Whereas the IBM PC was perhaps thirty percent better than the Apple II and TRS-80 computers, the iPhone was more than one hundred percent better. It was a new creature.
I'm sure that Apple knew that the market for iPhones would include women as well as men. That may have pushed them to market convenience. Or not; I don't know their motivation. It is a difference that we could analyze for a long time.
The bottom line is that the iPhone was sold on the basis of "easy to use" and "coolness". It was not sold on "bigger and therefore better".
It's relatively easy to deliver "bigger and better". (Not completely easy, as IBM and later Microsoft learned from the need for backwards-compatibility, but possible.) Delivering "coolness" is hard. "Coolness" has a psychological component that is absent from the pure "big hardware" solution.
The coolness aspect is what Microsoft products need. Microsoft has improved Windows, designed game systems, and built cell phones and music players, but all within the mindset of "bigger than the other guy". Even their Visual Studio IDE is bigger and more capable than Eclipse and the other IDEs, but not cooler. It's just bigger.
If Microsoft wants to compete in the new age, they must learn coolness. They must forget "mine is bigger than yours" and make products truly desirable.
Come to think of it, if *you* want to compete, you must learn coolness.
Saturday, February 26, 2011
Wednesday, February 23, 2011
CPU time rides again!
A long time ago, when computers were large, hulking beasts (and I mean truly large, hulking beasts, the types that filled rooms), there was the notion of "CPU time". Not only was there "CPU time", but there was a cost associated with CPU usage. In dollars.
CPU time was expensive and computations were precious. So expensive and so precious, in fact, that early IBM programmers were taught that when performing a "multiply" operation, one should load the larger number into one particular register and the smaller number into another. While the operations "3 times 5" and "5 times 3" yield the same results, the early processors did not treat them identically. The multiplication operation was a series of add operations: "3 times 5" was performed as five "add" operations, while "5 times 3" was performed as three "add" operations. The difference was only two "add" operations, but the gap grew with larger numbers, and repeated throughout a program the total difference was significant. (That is, measurable in dollars.)
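To make the arithmetic concrete, here is a minimal sketch of multiplication by repeated addition. It is in Python and purely illustrative -- early IBM hardware ran nothing of the sort -- but the cost model is the same: one addition per unit of the multiplier.

```python
def multiply(multiplicand, multiplier):
    """Multiply by repeated addition; the cost is `multiplier` additions."""
    total = 0
    for _ in range(multiplier):
        total += multiplicand
    return total

# multiply(3, 5) performs five additions; multiply(5, 3) performs three.
# Same answer, different price -- hence the advice to put the smaller
# number in the register that drives the loop.
print(multiply(3, 5), multiply(5, 3))   # 15 15
```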
Advances in technology and the PC changed that mindset. Personal computers didn't have the notion of "CPU time", in part because the hardware didn't support capturing it, but also because the user didn't care. People cared about getting the job done, not about minimizing CPU time and maximizing the number of jobs run. There was only one job the user (who was also the system administrator) cared about -- the program that they were running.
For the past thirty years, people have not known or cared about CPU usage and program efficiency. I should rephrase that to "people in the PC/DOS/Windows world". Folks in the web world have cared about performance and still care about performance. But let's focus on the PC folks.
The PC folks have had a free ride for the past three decades, not worrying about performance. Oh, a few folks have worried: developers from the "old world" who learned frugality and programmers with really large data processing needs. But the vast majority of PC users have gotten by with the attitude of "if the program is slow, buy a faster PC".
This attitude is in for a change. The cause of the change? Virtualization.
With virtualization, PCs cease to be stand-alone machines. They become an "image" running under a virtualization engine. (That engine could be Virtual PC, VMware, VirtualBox, Xen, or a few others. The engine doesn't matter; this issue applies to all of them.)
By shifting from a stand-alone machine to a job in a virtualization host, the PC becomes a job in a datacenter. It also becomes someone else's headache. The PC user is no longer the administrator. (Actually, the role of administrator in corporations shifted long ago, with Windows NT, domain controllers, centralized authentication, and group policies. Virtualization shifts the burden of CPU management to the central support team.)
The system administrators for virtualized PCs are true administrators, not PC owners who have the role thrust upon them. Real sysadmins pay attention to lots of performance indicators, including CPU usage, disk activity, and network activity. They pay attention because the operations cost money.
With virtual PCs, the processing occurs in the datacenter, and sysadmins will quickly spot the inefficient applications. The programs that consume lots of CPU and I/O will make themselves known, by standing out from the others.
Here's what I see happening:
- The shift to virtual PCs will continue, with today's PC users migrating to low-cost PCs and using Remote Desktop Connection (for Windows) and Virtual Network Computing (for Linux) to connect to virtualized hosts. Users will keep their current applications.
- Some applications will exhibit poor response through RDP and VNC. These will be the applications with poorly written GUI routines -- programs that force the virtualization software to do extra work to display them.
- Users will complain to the system administrators, who will tweak settings but in general be unable to fix the problem.
- Some applications will consume lots of CPU or I/O operations. System administrators will identify them and ask users to fix their applications. Users (for the most part) will have no clue about the performance of their applications, either because the applications were written by someone else or because the users have no experience with performance programming.
- At this point, most folks (users and sysadmins) will be frustrated with the changes imposed by management and the lack of fixes for performance issues. But folks will carry on.
- System administrators will provide reports on resource usage. Reports will be broken down by subunits within the organization, and show the cost of resources consumed by each subgroup.
- Some shops will introduce charge-back systems, to allocate usage charges to organization groups. The charged groups may ignore the charges at first, or consider them an uncontrollable cost of business. I expect pressure to reduce expenses will get managers looking at costs.
- Eventually, someone will observe that application Y performs well under virtualization (that is, more cheaply) while application X does not. Applications X and Y provide the same functions (say, word processing) and are mostly equivalent.
- Once the system administrators learn about the performance difference, they will push for the more efficient application. Armed with statistics and cost figures, they will be in a good position to advocate the adoption of application Y as an organization standard.
- User teams and managers will be willing to adopt the proposed application, to reduce their monthly charges.
And over time, the market will reward those applications that perform well under virtualization. Notice that this change occurs without marketing. It also forces the trade-off of features against performance, something that has been absent from the PC world.
Your job, if you are building applications, is to build the 'Y' version. You want an application that wins on performance. You do not want the 'X' version.
You have to measure your application and learn how to write programs that are efficient. You need the tools to measure your application's performance, environments in which to test, and the desire to run these tests and improve your application. You will have a new set of requirements for your application: performance requirements. All while meeting the same (unreduced) set of functional requirements.
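As a starting point, here is a hedged sketch of the kind of measurement I mean -- nothing vendor-specific, just wall-clock time versus CPU time for a piece of work, using only the Python standard library:

```python
import time

def measure(work):
    """Run a callable and report wall-clock and CPU time."""
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    result = work()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s")
    return result

# Example: a deliberately wasteful computation stands in for "your application".
measure(lambda: sum(i * i for i in range(10_000_000)))
```

A large gap between wall-clock time and CPU time usually points at I/O or waiting; a large CPU number is exactly the sort of thing a virtualization sysadmin will notice on a chargeback report.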
Remember, "3 times 5" is not the same as "5 times 3".
Sunday, February 20, 2011
Whining for SQL
Several folks have been whining (and I do mean whining) about the lack of SQL in cloud computing. The common arguments are that SQL is the standard for database access, that it is familiar (meaning that a lot of people have learned it), and that it is needed to build applications effectively.
To these arguments, I say "bunk".
SQL has a limited history in the computing age. It became popular in the 1980s; prior to then, we got along quite well without it. SQL is not needed to access data or to build applications effectively.
I will pause here to disclose my bias: I dislike SQL. I think that the language is ugly, and I don't like ugly languages.
As I see it, SQL was the result of two forces. One was the popularity of relational databases, which in turn were driven by a desire to reduce redundant data. The second force was the desire to divide the work of application development cleanly between database design and application design. I'm not sure that either of these forces applies in today's world of technology.
Cloud applications may not need SQL at all. We may be able to create new methods of accessing data for cloud applications. (And we seem to be well on our way to doing so.) Insisting that cloud apps use SQL is a misguided attempt at keeping an old (and ugly ... did I mention ugly?) data access mechanism. Similar thinking was common in the early days of microcomputers (the pre-IBM PC days), when people strove to implement FORTRAN and COBOL compilers on microcomputers and build systems for general ledger and inventory.
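To illustrate the contrast, here is a hypothetical sketch -- not any particular vendor's API -- of the key-value style of access that many cloud datastores favor over SQL:

```python
# The SQL habit:
#   SELECT name, balance FROM accounts WHERE id = 42;
#
# A key-value alternative: store whole records under a key, with no query
# language, no schema, and no joins. The dict is a stand-in for a cloud store.
accounts = {}

def put_account(account_id, record):
    accounts[account_id] = record

def get_account(account_id):
    return accounts.get(account_id)

put_account(42, {"name": "Alice", "balance": 100.0})
print(get_account(42))
```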
Google has no incentive to bring SQL to the cloud. Nor do Amazon.com and Salesforce.com. The players who do have incentive for SQL in the cloud are the vendors selling SQL databases: Microsoft and Oracle. I expect that they will find a way -- some way, any way -- to use SQL in cloud apps. And I expect those ways to be expensive.
But despite the efforts of Microsoft and Oracle, I expect cloud apps to thrive without SQL.
Wednesday, February 16, 2011
Should managers read code?
A goal of the COBOL programming language was reducing the effort of coding to the point that managers could write their own programs. Perhaps this was a disguised form of a different goal -- the elimination of programmers entirely. Whatever the motivation, COBOL failed to deliver the result of "managers write code". All subsequent efforts (report writers, fourth-generation languages, natural language query systems, etc.) have also failed to make programming easy enough for managers.
After failing to get managers to write code (or to make code writable by managers), the industry moved to a new model, one that separated managers and programmers into distinct manager and worker roles. With this separation, managers no longer wrote code or even looked at code, but instead performed project management tasks. This arrangement puts managers in the position of hiring, managing, and assessing programmers without direct contact with their work.
Perhaps the complete aversion to code is inappropriate. Perhaps managers needn't write code but can still review code, if the code is readable.
I will admit that most code is unreadable -- by programmers as well as managers. And I will admit that object-oriented code, if well written, is easier to understand than procedural code. (For any but the most trivial of tasks.) What would it take for managers to read and understand code?
I believe that the code must be in a language with minimal administrative code and readable syntax. The code must be tied to business concepts, not programming syntax.
In other words, something other than C++.
The code must be well-written object-oriented code, in either Java or C# (or possibly Python or Ruby). The code must be written to be read, and thus must have meaningful names for variables, classes, and methods. The code must be organized into comprehensible chunks -- no thousand-line methods or hundred-method classes.
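Here is a hypothetical sketch of what I mean, with invented names and no real system behind it (in Python, since it keeps the administrative code to a minimum):

```python
class Invoice:
    """An invoice for a customer, made of (description, amount) line items."""

    def __init__(self, customer_name, line_items):
        self.customer_name = customer_name
        self.line_items = line_items

    def total_amount(self):
        return sum(amount for _, amount in self.line_items)

    def apply_volume_discount(self, threshold, discount_rate):
        """Reduce the total by the discount rate when it exceeds the threshold."""
        total = self.total_amount()
        if total > threshold:
            return total * (1 - discount_rate)
        return total

invoice = Invoice("Acme Corp", [("consulting", 800.0), ("travel", 200.0)])
print(invoice.apply_volume_discount(threshold=500.0, discount_rate=0.10))  # 900.0
```

A manager who knows the business rule ("orders over the threshold get a discount") can follow that last method without knowing the language.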
I see benefits to manager-readable code. First, managers will be able to comprehend the code and verify that the code is performing the work as expected. Second, programmers will have an easier time of fixing defects, since the code will match the real world data entities. (This was a highly touted attribute of object-oriented code.) A possible third benefit is for managers to understand the difficulties of fixing some defects, as they can see how the code does not "bend" in the desired direction. But this perhaps depends more on a manager willing to accept that the effort is hard, and not so much the ability to understand the problem.
Beyond those advantages, an organization can benefit in other ways. The organization can open the code to other teams -- other development teams, support teams, and testing teams. These teams can learn from the code and better understand failures. They can also comment on the code and provide additional insights and improvements. In a limited way, the code becomes "open source" -- but only within the organization.
If those advantages aren't enough, the other advantage I see is for newcomers to the team. They can learn the business by reading the code. A new member of the programming team has to learn many things beyond the location of the rest rooms and the cafeteria, and time spent learning business rules can be reduced by making the business rules visible in the code.
So I think that managers shouldn't be forced to write code, but we should strive to write code that can be read by managers. Readable code is understandable code.
Tuesday, February 8, 2011
Back around again
A long time ago, when laptop computers first came onto the scene, the technology for LCD screens was different from today's. The early screens worked (at a lower resolution than today's screens, of course) but could be viewed only from straight-on. Side viewers would not see the display.
With the (much ballyhooed) introduction of thin-film-transistors (TFTs), LCD displays were viewable at a wider angle. The development of TFTs was not easy, and it took a lot of work to get us there.
Now all LCD screens are viewable from a wide angle.
And we apparently don't want that.
I just saw an advertisement for an LCD screen cover that blocks the view from the side. It prevents another person from viewing the contents of your screen. The market is for executives travelling on airplanes, where a random person may sit next to them.
If we had just done nothing, we wouldn't have this problem and need this solution. Sometimes doing nothing is the right thing to do.
(Of course this ignores all of the people who *do* want to share their screen and let others view their information. But I think the point is worth considering. Sometimes, standing in one place is the best course of action.)
Wednesday, February 2, 2011
Excuse me, but your batch slip is showing
The larger, older companies have a problem. They computerized their operations a while ago, and now they have slow, customer-unfriendly systems. The cause is technology: the early automated systems were (and are) batch systems, and batch systems exhibit certain patterns. Delayed output is one such pattern.
Here are some examples.
I submitted a stock trade order through the Chase web site on Sunday evening. I understood that the trade would occur during business hours in New York, so the earliest possible time of the trade would be 9:30 on Monday morning.
The confirmation e-mail of my trade arrived at 2:25 on Tuesday morning. I'm pretty sure that the trade was not completed in the wee hours of the morning. I'm guessing that the trade occurred shortly after the market opened on Monday, and probably no later than 10:00. Yet Chase needed more than 16 hours to send the confirmation e-mail.
I'm guessing that the person executing the trade had confirmation quite a bit sooner than 16 hours. Chase apparently thinks that customers can wait.
In another case, Chase sent me a "year end summary available" notice... on February 2. (At 2:28 in the morning. Apparently Chase sends all of its e-mails starting at 2:00 in the morning.) I can understand that Chase would want to wait a few days, to let merchants send transactions and get a complete picture of the year. But waiting a whole month seems a bit extreme. (I'm guessing that Chase waits until the end of the billing cycle in January, then waits a bit more, and then sends the notification.)
The culprit here is the batch processing model. With batch processing, the workload is divided. Front-end systems collect data and back-end systems run jobs at specific intervals. These back-end systems are responsible for updating amounts and sending e-mails. Certain operations occur "on-line", that is, immediately. These operations are considered expensive and are therefore limited to a selected set of data and performed for a selected set of users -- generally not customers. The result is that Chase defines customers as second-class citizens.
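Here is a toy sketch of the pattern -- invented, and nothing to do with Chase's actual systems -- showing why the batch model confirms late while an event-driven model confirms immediately:

```python
import datetime

def send_email(trade):
    print(f"{datetime.datetime.now():%a %H:%M} confirmation for {trade}")

pending_confirmations = []               # trades completed during the day

def record_trade(trade):
    pending_confirmations.append(trade)  # front-end: collect, don't notify

def nightly_batch_run():
    """Back-end job, scheduled for the small hours."""
    for trade in pending_confirmations:
        send_email(trade)                # the customer hears about it hours later
    pending_confirmations.clear()

def confirm_immediately(trade):
    send_email(trade)                    # the "Facebook" model: notify on the event

record_trade("100 shares of XYZ")        # batch path: silence until 2:00 AM
confirm_immediately("100 shares of XYZ") # event path: instant confirmation
```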
I am picking on Chase here, but other large companies that use batch processing have the same "second-class citizen" attitude toward their customers.
Large companies have relied on batch processing for decades. And for decades, batch processing has been good enough. But no more. A new kid in town has raised the bar, and customers are not going to be satisfied with being treated as second class.
That new kid is Facebook.
Facebook (and other social networking sites) have built systems that provide immediate feedback. When I post my status on Facebook, I receive responses from friends within minutes. (Not days, and not hours. Certainly not months!)
Banks and other companies cannot use size or transaction volume as an excuse. Facebook has over 500 million users; I know of no banks with similar customer bases. Facebook processes more messages than all of the transactions processed by banks -- combined!
The new generation of customers has been raised on Facebook and its immediate response model. They will be unhappy with "business the way we've always done it" and "you're a customer and you will wait". The bank that provides immediate feedback will win lots of customers.
This is not a technology problem. This is a management problem. The management at Chase is satisfied with the performance of their systems. (That is, they are content to let their systems offend customers, since few enough of them complain.)
At some point the customers will revolt, but it will happen unevenly: mostly the younger customers will take their business elsewhere. Since younger customers tend to have smaller account balances, the averages will show an increase in assets per customer. This will be perceived by Chase management as a good thing. ("Average assets per customer is up, boss!")
But customer demographics is a funny thing. Companies with few young customers tend to have short lives. Their older customers remain (until they pass into the next dimension), and without a new crop of young customers to replace them, the customer base eventually shrinks. In the end, the company collapses into a fit of bankruptcy and acquisition.
The strategy for survival is to improve customer communication to a level close to Facebook's. When the stock trade is complete, let me know, by e-mail, text message, Twitter, Facebook, or another method of my choice. Send me a year-end statement on January 4. Let me configure my account and notifications the way I want them, not the way your archaic batch system allows.
Or not.