Microsoft recently announced a new version of its Office suite (Office 2013), and included support for the ODF format. This is big news.
The decision to support ODF does not mean that the open source fanboys have "won".
As I see it, the decision to support ODF means that Microsoft has changed its strategy.
Microsoft became dominant in Windows applications, in part due to the proprietary formats of Microsoft Office and the network effect: everyone wanted Microsoft Office (and nothing else) because everyone that they knew (and with whom they exchanged documents) used Microsoft Office. The proprietary format ensured that one used the true Microsoft Office and not a clone or compatible suite.
Microsoft used that network effect to drive people to Windows (providing a Mac version of Office that was close but not quite the same as the Windows version). Their strategy was to sell licenses for Microsoft Windows, Microsoft Office, Microsoft Active Directory, Microsoft Exchange, Microsoft SQL Server, and other Microsoft products, all interlocking and using proprietary formats for storage.
And that strategy worked for two decades, from 1990 to 2010.
Several lawsuits and injunctions forced Microsoft to open their formats to external players. Once they did, other office suites gained the ability to read and write files for Office.
With Microsoft including the ODF formats in Office, they are no longer relying on proprietary file formats. Which means that they have some other strategy in mind.
That new strategy remains to be seen. I suspect that it will include their Surface tablets and Windows smartphones. I also expect cloud computing (in the form of Windows Azure) to be part of the strategy too.
The model of selling software on shiny plastic discs has come to an end. With that change comes the end of the desktop model of computing, and the dawn of the tablet model of computing.
Wednesday, August 22, 2012
Sunday, August 19, 2012
Windows 8 is like Y2K, sort of
When an author compares an event to Y2K, the reader is prudent to attend with some degree of skepticism. The Y2K problem was large and affected multiple platforms across all industries. The threat of mobile/cloud computing (if it can even be considered a threat) must be large and widespread to stand against Y2K.
I will say up front that the mobile/cloud platform is not a threat. If anything, it is an expansion of technical options for systems, a liberalization of solution sets.
Nor does the mobile/cloud platform have a specific implementation date. With Y2K, we had a very hard deadline for changes. (Deadlines varied across systems, with some earlier than others. For example, bank systems that calculated thirty-year mortgages were corrected in 1970.)
But the change from traditional web architectures to mobile/cloud is significant, and the transition from desktop applications to mobile/cloud is greater. The change from desktop to mobile/cloud requires nothing less than a complete re-build of the application: new UI, new data storage, new system architecture.
And it is these desktop applications (which invariably run under Microsoft Windows) that have an impending crisis. These desktop applications run on "classic" Windows, the Windows of Win32 and MFC and even .NET. These desktop applications have user interfaces that require keyboards and mice. These desktop applications assume constant and fast access to network resources.
One may wonder how these desktop applications, while they may be considered "old-fashioned" and "not of the current tech", can be a problem. After all, as long as we have Windows, we can run them, right?
Well, not quite. As long as we have Windows with Win32 and MFC and .NET (and ODBC and COM and ADO) then we can run them. But there is nothing that says Microsoft will continue to include these packages in Windows. In fact, the new WinRT offering does not include them.
Windows 8, on a desktop PC, runs in two modes: Windows 8 mode and "classic" mode. The former runs apps built for the mobile/cloud platform. The latter is much like the old DOS compatibility box, included in Windows to allow us to run old, command-line programs. The "classic" Windows mode is present in Windows 8 as a measure to allow us (the customers and users of Windows) to transition our applications to the new UI.
Microsoft will continue to release new versions of Windows. I am reasonably sure that Microsoft is working on "Windows 9" even with the roll-out of Windows 8 under way. New versions of Windows will come out with new features.
At some point, the "classic Windows compatibility box" will go away. Microsoft may remove it in stages, perhaps making it a plug-in that can be added to the base Windows package. Or perhaps it will be available in only the premium versions of Windows. It is possible that, like the DOS command prompt that yet remains in Windows, the "classic Windows compatibility box" will remain in Windows -- but I doubt it. Microsoft likes the new revenue model of mobile/cloud.
And this is how I see mobile/cloud as a Y2K-like challenge. When the "classic Windows compatibility box" goes away, all of the old-style applications must go away too. You will have to either migrate to the new Windows 8 UI (and the architecture that such a change entails) or you will have to go without.
Web applications are less threatened by mobile/cloud. They run in browsers; the threat to them will be the loss of the browser. That is another topic.
If I were running a company (large or small) I would plan to move to the new world of mobile/cloud. I would start by inventorying all of my current desktop applications and forming plans to move them to mobile/cloud. That process is also another topic.
Comparing mobile/cloud to Y2K is perhaps a bit alarmist. Yet action must be taken, either now or later. My advice: start planning now.
Wednesday, August 15, 2012
Cloud productivity is not always from the cloud
Lots of people proclaim the performance advantages of cloud computing. These folks, I think, are mostly purveyors of cloud computing services. Which does not mean that cloud computing has no advantages or offers no improvements in performance. But it also does not mean that all improvements from migrating to the cloud are derived from the cloud.
Yes, cloud computing can reduce administration costs, mostly by standardizing the instances of hosts to a simple set of virtualized machines.
And yes, cloud computing can reduce the infrastructure costs of servers, since the cloud provider leverages economies of scale (and virtualized servers).
But a fair amount of the performance improvement of cloud computing comes from the re-architecting of applications. Changing one's applications from monolithic, one-program-does-it-all designs to smaller collaborative apps working with common data stores and message queues has an effect on performance. Shifting from object-oriented programming to the immutable-object programming needed for cloud computing also improves performance.
Keep in mind that these architectural changes can be done with your current infrastructure -- you don't need cloud to make them.
You can re-architect your applications (no small task, I will admit) and use them in your current environment (adding data stores and message queues) and get those same improvements in performance. Not all of the improvements from moving to a cloud infrastructure, but the portion that arises from the collaborative architecture.
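The collaborative style described above can be sketched in a few lines. This is a minimal, hypothetical example (the `Order` record and the 1.5 surcharge are invented for illustration): small workers communicate through queues and pass immutable messages instead of mutating shared state.

```python
from dataclasses import dataclass
from queue import Queue

# An immutable message: a frozen dataclass cannot be modified after
# creation, so collaborating workers can share it without locking.
@dataclass(frozen=True)
class Order:
    order_id: int
    amount: float

def pricing_worker(inbox: Queue, outbox: Queue) -> None:
    """One small collaborating app: it reads orders from a queue and
    emits new (immutable) results rather than changing shared objects."""
    while not inbox.empty():
        order = inbox.get()
        # Produce a fresh Order instead of mutating the old one.
        outbox.put(Order(order.order_id, order.amount * 1.5))

inbox, outbox = Queue(), Queue()
inbox.put(Order(1, 100.0))
inbox.put(Order(2, 250.0))
pricing_worker(inbox, outbox)
print(outbox.get())  # Order(order_id=1, amount=150.0)
```

Nothing here requires a cloud provider: the same queues-and-immutable-messages structure runs on existing infrastructure, and it is the part of the design that later moves to cloud queues with little change.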
And such a move would prepare your applications to move to a cloud infrastructure.
Wednesday, August 8, 2012
$1000 per hour
Let's imagine that you are a manager of a development team. You hire (and fire) members of the team, set goals, review performance, and negotiate deliverables with your fellow managers.
Now let's imagine that the cost of developers is significantly higher than it is today. Instead of paying the $50,000 to $120,000 per year, you must pay $1000 per hour, or $2,000,000 per year. (That's two million dollars per year.) Let's also imagine that you cannot reduce this cost through outsourcing or internships.
What would you do?
Here is what I would do:
- I would pick the best of my developers and fire the others. A smaller team of top-notch developers is more productive than a large team of mediocre developers.
- I would provide my developers with tools and procedures to let them be the most productive. I would weigh the cost of development tools against the time that they would save.
- I would use automated testing as much as possible, to reduce the time developers spend on manual testing. If possible, I would automate all testing.
- I would provide books, web resources, online training, and conferences to the developers, to give them the best information and techniques on programming.
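The automated-testing item in the list above is cheap to start. A minimal sketch (the `parse_price` function is a hypothetical stand-in for whatever the team actually maintains): once checks like these exist, one command re-runs them all, and no $1000-per-hour developer spends time on repetitive manual verification.

```python
import unittest

# Hypothetical function under test -- a stand-in for production code.
def parse_price(text: str) -> float:
    """Parse a price string such as '$1,000' into a float."""
    return float(text.replace("$", "").replace(",", ""))

class TestParsePrice(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_dollar_sign_and_commas(self):
        self.assertEqual(parse_price("$1,000"), 1000.0)

if __name__ == "__main__":
    # One command runs every test; exit=False lets a calling script
    # continue after the run instead of terminating the process.
    unittest.main(exit=False, argv=["parse_price_tests"])
```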
In other words, I would do everything in my power to make them productive. When their time costs money, saving their time saves me money. Sensible, right?
But the current situation is the same. Developers cost me money. Saving their time saves me money.
So why aren't you doing what you can to save them time?
Sunday, August 5, 2012
The evolution of the UI
Since the beginning of the personal computing era, we have seen different types of user interfaces. These interfaces were defined by technology. The mobile/cloud age brings us another type of user interface.
The user interfaces were:
- Text-mode programs
- Character-based graphic programs
- True GUI programs
- Web programs
Text-mode programs were the earliest of programs, run on the earliest of hardware. Sometimes run on printing terminals (Teletypes or DECwriters), they had to present output in linear form -- the hardware operated linearly, one character after another. When we weren't investigating problems with hardware, or struggling with software, we dreamed about better displays. (We had seen them in the movies, after all.)
Character-based graphic programs used the capabilities of the "more advanced" hardware such as smart terminals and even the IBM PC. We could draw screens with entry fields -- still in character mode, mind you -- and use different colors to highlight things. The best-known programs from this era would be Wordstar, WordPerfect, Visicalc, and Lotus 1-2-3.
True GUI programs came about with IBM's OS/2, Digital Research's GEM (best known on the Atari ST), and Microsoft's Windows. These were the programs that we wanted! Individual windows that could be moved and resized, fine control of the graphics, and lots of colors! Of course, such programs were only possible with the hardware and software to support them. The GUI programs needed hefty processors and powerful languages for event-driven programming.
The web started in life as a humble method of viewing (and linking) documents. It grew quickly, and web programming surpassed the simple task of presenting documents. It went on to give us applications such as online brochures, shopping sites, and eventually e-mail and word processing.
But a funny thing happened on the way to the web. We kept looking back at GUI programs. We wanted web programs to behave like desktop PC programs.
Looking back was unusual. In the transition from text-mode programs to character-based graphics, we looked forward. A few programs, usually compilers and other low-output programs, stayed in text-mode, but everything else moved to character-based graphics.
In the transition from character-based graphics to GUI, we also looked forward. We knew that the GUI was a better form of the interface. No one (well, with the exception of perhaps a few ornery folks) wanted to stay with the character-based UI.
But with the transition from desktop applications and their GUI to the web and its user interface, there was quite a lot of looking back. People invested time and money in building web applications that looked and acted like GUI programs. Microsoft went to great lengths to enable developers to build apps that ran on the web just as they had run on the desktop.
The web UI never came into its own. And it never will.
The mobile/cloud era has arrived. Smartphones, tablets, cloud processing are all available to us. Lots of folks are looking at this new creature. And it seems that lots of people are asking themselves: "How can we build mobile/cloud apps that look and behave like GUI apps?"
I believe that this is the wrong question.
The GUI was a bigger, better incarnation of the character-based UI. Anything the character-based UI could do, the GUI could do -- and prettier. It was a nice, simple progression.
Improvements rarely follow nice simple progressions. Changes in technology are chaotic, with people thinking all sorts of new ideas in all sorts of places. The web is not a bigger, better PC, and its user interface was not a bigger, better desktop GUI. Mobile/cloud computing is not a bigger, better web, and its interface is not a bigger, better web interface. The interface for mobile/cloud shares many aspects with the web UI, and some aspects with the desktop GUI, but it has its own unique advantages.
To be successful, identify the differences and leverage them in your organization.
Mobile/cloud needs a compelling application
There has been a lot of talk about cloud computing (or as I call it, mobile/cloud) but perhaps not so much in the way of understanding. While some people understand what mobile/cloud is, they don't understand how to use it. They don't know how to leverage it. And I think that this is a normal part of the adoption of mobile/cloud -- or of any new technology.
Let's look back at personal computers, and how they were adopted.
When PCs first appeared, companies did not know what to make of them. Hobbyists and enthusiastic individuals had been tinkering with them for a few years, but companies -- that is, the bureaucratic entity of policies and procedures -- had no specific use for them. An economist might say that there was no demand for them.
Companies used PCs as replacements for typewriters, or as replacements for office word processing systems. They were minor upgrades to existing technologies.
Once business-folk saw Visicalc and Lotus 1-2-3, however, things changed. The spreadsheet enabled people to analyze data and make better decisions. (And without a request to the DP department!) Businesses now viewed PCs as a way to improve productivity. This increased demand, because what business doesn't want to improve productivity?
But it took that first "a-ha" moment, that first insight into the new technology's capabilities. Someone had to invent a compelling application, and then others could think "Oh, that is what they can do!" The compelling application shows off the capabilities of the new technology in terms that the business can understand.
With mobile/cloud technology, we are still in the "what can it do" stage. We have yet to meet the compelling application. Mobile/cloud technology has been profitable for the providers (such as Amazon.com) but not for the users (companies such as banks, pharmaceuticals, and manufacturers). Most 'average' companies (non-technology companies) are still looking at this mobile/cloud thing and asking themselves "how can we leverage it?".
It is a good question to ask. We will keep asking it until someone invents the compelling application.
I don't know the nature of the compelling application for mobile/cloud. It could be a better form of e-mail/calendar software. It might be analysis of internal operations. It could be a new type of customer relationship management system.
I don't know for certain that there will be a compelling application. If there is, then mobile/cloud will "take off", with demand for mobile/cloud apps and conversion of existing apps to mobile/cloud. If there is no compelling application, then mobile/cloud won't necessarily die, but will fade into the background (like virtualization is doing now).
I expect that there will be a compelling application, and that mobile/cloud will be popular. I expect our understanding of mobile/cloud to follow the path of previous new technologies: awareness, understanding, application to business, and finally acceptance as the norm.
Tuesday, July 31, 2012
... or the few
Software development is maturing.
I suppose that it has been since the elder days, when programming meant wiring plug-boards. Programming languages and tools have been changing since the first symbolic assemblers.
There have been two types of changes ("improvements"?) for programming: changes that benefit the individual programmer and changes that benefit the team.
Changes that benefit the individual include:
- FORTRAN
- Unix, which let a programmer use many tools and even build his own
- Integrated development environments
- Interactive debuggers
- Faster processors and increased memory
- Unit tests (especially automated unit tests)
- Diagnostic programs such as 'lint'
- Managed environments with garbage collection (eliminating the need for 'delete' operations)
- Early forms of version control systems
Changes that benefit the team include:
- COBOL, with its separation of concerns and support for discrete modules
- Early operating systems that scheduled jobs and managed common resources
- The 'patch' program, which allowed for updates without transmitting the entire source
- Network connections (especially internet connections)
- Code reviews
- Distributed version control systems
- github
One can argue that any of these technologies help individuals and teams. A better debugger may directly help the programmer, yet it helps the team by making the developer more productive. Some technologies are better for the individual, and others better for the team. But that's not the point.
The point is that we, as an industry, look to improve performance for individuals and teams, not just individuals. We look at teams as more than a collection of individuals. If we considered teams nothing more than a group of people working on the same project, we would focus our efforts on individual performance and nothing else. We would develop better tools for individuals, and not for teams.
Some languages are designed for teams, or at least have aspects designed for teams. The Python and Go languages have strong ideas about code style; Python enforces rules for indentation and Go rules for bracket placement. They are not the first; Visual Basic would auto-space and auto-capitalize your source code. That may have been a function of the editor, but since the editor and interpreter were distributed together one can consider the action part of the language.
Python's indentation rules, Go's bracket rules, and Visual Basic's auto-capitalization are all beneficial to the individual programmer, but they are more beneficial to the team. They enforce style upon the source code. Such enforced style ensures that all members of the team (indeed, all members of those programming communities) use the same style. Programmers can more easily move from one project to another, and from code contributed by one person to another. Some programming teams (using other languages) enforce local styles, but these languages have it built into their DNA.
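Python's version of this enforcement is easy to demonstrate: indentation is part of the grammar itself, so code that a style checker would merely flag in another language simply fails to compile. A small self-contained illustration:

```python
# Two versions of the same function as source strings: one with the
# body properly indented, one with the indentation removed.
well_formed = "def double(x):\n    return 2 * x\n"
mis_indented = "def double(x):\nreturn 2 * x\n"   # body not indented

compile(well_formed, "<example>", "exec")          # accepted

try:
    compile(mis_indented, "<example>", "exec")
except IndentationError as err:
    # The language itself rejects the style violation -- no separate
    # lint step, no team debate.
    print("rejected by the language:", err.msg)
```

The point is that the style rule is not negotiable per project: every Python programmer, on every team, works under the same indentation contract.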
Enforced style is a move against the "cowboy coder", the rebel programmer who insists on doing things "his way". Programming languages are not democracies -- team members don't get to vote on the rules of the language that they are using -- but it is a step towards government.
Let's count that as maturation.