The history of the PC is simple: After several companies, including Apple, Radio Shack, and Commodore, built and successfully sold microcomputers, IBM entered the market and its products were adopted as the standard. Yet I think there is more to it than that.
The IBM PC was a popular computer. Some historians claim that its open architecture and its superior design led to its success. The architecture was open -- more open than some, but not all, competing products. The design was good, but not superior -- and we are still paying for some of the compromises IBM made in those early days.
It was more than openness and design that made the IBM PC the standard. There were three other factors.
The first was a set of compelling applications. The "killer application" was the spreadsheet. Considered mundane today, the spreadsheet was a large step up from the manual methods of calculation that preceded the PC. It was far better than doing math by hand or even with the assistance of a mechanical or electronic calculator. Spreadsheets provided more automation, allowing for sophisticated formulas.
The other compelling application (although perhaps not a "killer" one) was word processing. Personal computers could be purchased for slightly more than electric typewriters, and word processors gave us the ability to compose, store, retrieve, and revise documents.
The second factor was a sense of urgency. Managers may not have understood personal computers or foreseen how they would change businesses, but they knew (or believed) that computers were the way of the future and they did not want to be left behind. Perhaps the fear was a remnant of the 1960s space race (with its jet motors, robots, and computers) or perhaps it was caused by the hype of IBM's advertising. Whatever the reason, there was a fear of becoming obsolete, of missing the computing boat. That fear pushed companies to adopt computers.
These two factors were enough to drive the PC revolution, but they were not enough to force the IBM PC as a standard. There were many competing systems, several of which ran MS-DOS and Lotus 1-2-3, just like the IBM PC. There was a third factor, one that moved businesses into the IBM fold. That factor was the cost of PCs (IBM or otherwise).
Personal computers in the early 1980s were expensive devices. The price for a typical business system was in the $3000 to $5000 range. (And that was in 1980 dollars!)
The expense of a microcomputer meant that businesses had to think carefully about their investment. A wrong choice would mean a significant loss of capital. Businesses wanted to avoid that loss, so they went with a safe choice: IBM.
Those three factors (compelling applications, a sense of urgency, and risk avoidance) led to the IBM PC as a standard.
* * * * *
Looking at today's market, there is no analogous situation. Personal computers are inexpensive, standardized boxes. But what about new technologies?
Phones and tablets are inexpensive, with no compelling application for businesses. There may be a sense of urgency to build mobile apps for customers, but note that customer apps are outward-facing, not inward-facing like the spreadsheets and word processors of the PC revolution. Equipment for inward-facing applications can be standardized, as the equipment is under your control. Outward-facing apps cannot be easily standardized, as customers choose their hardware. That's a big difference between the PC revolution and the mobile revolution. There is not sufficient force to select one design as the standard -- and so we have competing designs: Apple, Android, and Microsoft.
Cloud systems are inward-facing, but not compelling. There is no sense of urgency to move one's systems to the cloud. Cloud systems are not expensive either -- at least not in the way that PCs were expensive. If anything, cloud providers focus on the ability to "pay for what you use". It's no surprise that there is no standard cloud system.
The same holds true for Big Data: inward-facing, but there is no compelling application, no sense of urgency. Sure enough, there are multiple "standards" for Big Data.
* * * * *
We don't have a standard for a smart phone, or tablet, or cloud system, or big data. Without the three factors of compelling applications, urgency, and risk avoidance, I think we will see multiple solutions for some time.
Tuesday, February 23, 2016
Thursday, February 18, 2016
The Bully in the Sandbox
Can competitive behavior be too effective?
Microsoft, over the years, competed aggressively in the Windows market. Microsoft products became dominant in Windows: Word, Excel, Access, PowerPoint, Project, Visual Studio, Internet Explorer...
Microsoft built a reputation as the bully of the Windows sandbox. They made it clear that competitors could exist in the Windows market only at Microsoft's sufferance. When a competitor built a product that made too much profit, or introduced a technology that threatened Microsoft's dominance, Microsoft built its own version of the product (or technology) and out-competed the challenger.
In the short term, this strategy gave Microsoft dominance (and profits) in the Windows market. In the long term, I suspect that the unfriendly Windows market spurred the development of other technologies and spaces. I imagine that some people created products for Apple computers, to avoid the Windows market. Others built web applications. These platforms were less risky than competing in the Windows space.
Microsoft has had little success with its phones and tablets. Oh, the Surface Pro tablets sell well enough, but mostly because they are small, portable Windows PCs running full desktop applications. Windows-based phones have not sold well. The market for Windows mobile apps (especially when compared to Apple's and Google's) is anemic.
I cannot help but think that Microsoft's previous behavior with Windows has made people reluctant to enter the Windows mobile market. Analysts claim that Microsoft was "too late" to enter the mobile market, and they may be right. Yet some part of the failure, I believe, is due to the threat of Microsoft resuming its former practices.
Which leaves Microsoft in a difficult position. It wants people to accept its mobile devices. For that, it needs apps, and it needs apps from other people and companies. Microsoft needs a thriving market to compete with Apple and Google. How to build interest in the market for Windows mobile?
The one strategy that Microsoft should avoid is filling their app store with Microsoft-built apps. A Microsoft-run store with only (or mostly) Microsoft-built apps will reinforce the notion that Microsoft is still a bully. People will avoid the Windows platform, thinking that their own products have no chance to survive for any length of time.
So perhaps there can be such a thing as "too competitive".
Tuesday, January 19, 2016
Functional programming is waiting for microservices
Functional programming has been bubbling on the edge of mainstream development for years (perhaps decades). The advent of microservices may give functional programming its big break.
Why has it taken so long for the industry to adopt functional programming? I can think of a few reasons:
- The effort to learn the techniques of functional programming
- The effort to convert existing programs
- The belief that current techniques are "good enough"
Existing systems are large and complex. Converting them to functional programming is a large effort, and one that must be done completely. The switch from procedural programming to object-oriented programming could be done gradually, especially if you were moving from C to C++. Functional programming has no gradual path.
Switching from object-oriented programming to functional programming incurs a cost, and the benefits are not so clear. The "return on investment" is not obvious.
These are all reasons to not switch from object-oriented programming to functional programming. (And they're pretty good reasons.)
Microservices change the equation.
Microservices are small, independent services. Changing an existing system (probably a large, monolithic system) to one that is composed of microservices is a significant effort. It requires the decomposition of the monolith into smaller pieces.
If one is constructing small, independent services, either as part of a re-engineering effort or a new system, one can certainly consider functional programming -- if only for some of the microservices. Writing small functional programs is much easier than writing large systems in a functional language. (A statement that holds for object-oriented programming languages, and procedural languages, and assembly language.)
For microservices, a functional language may be better than an object-oriented one. Object-oriented languages are suited to large systems; classes and namespaces are designed for organizing the data and code of large systems. Microservices are, by design, small. It is possible that functional programming will be a better match for the small code of microservices.
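To make the idea concrete, here is a sketch of what a small service written in a functional style might look like. The domain (order pricing) and every name in it are invented for illustration; the point is that a microservice-sized task can be a pipeline of small, pure functions, with no class hierarchy or shared state.

```ruby
# A hypothetical order-pricing service as a pipeline of pure functions.
# Each step takes a value and returns a new one; nothing is mutated.

VALIDATE = ->(order) {
  raise ArgumentError, "order needs items" if order[:items].empty?
  order
}

SUBTOTAL = ->(order) {
  order.merge(subtotal: order[:items].sum { |i| i[:price] * i[:qty] })
}

# A function that returns a function: fix the tax rate, get a step back.
ADD_TAX = ->(rate) {
  ->(order) { order.merge(total: (order[:subtotal] * (1 + rate)).round(2)) }
}

# Compose the steps into a single handler, left to right.
PRICE_ORDER = [VALIDATE, SUBTOTAL, ADD_TAX.call(0.05)]
  .reduce { |f, g| ->(x) { g.call(f.call(x)) } }

order = { items: [{ price: 10.0, qty: 2 }, { price: 3.0, qty: 1 }] }
puts PRICE_ORDER.call(order)[:total]
```

The whole service is data in, data out; testing it is just calling a function, which is part of the appeal for pieces this small.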
If you are building microservices, or just contemplating microservices, think about functional programming.
Sunday, January 10, 2016
Use the tools available
I've just completed some work on a small project. My success is due, not only to my own talent and hard work, but to the tools that were available to me.
The project was in Ruby, and the tool that assisted me was Rubocop.
Rubocop analyzes Ruby code and reports on questionable (but legal) constructs and syntax. It is, in a phrase, "lint" for Ruby.
Almost all of the major languages have similar static analysis tools. For C and C++, there is lint. For C#, there is FxCop. For Python, pylint. Even Perl has Perl::Critic and Perl::Lint.
Rubocop helped me, indirectly. I used it as I developed the project. I ran it on the code I was writing, and it reported that certain functions were "too long", according to its default guidelines.
Some programmers would be arrogant and refuse to heed such advice. (Myself at an earlier point in my career, for example.) But with the wisdom of experience, I chose to modify the code and reduce the size of functions. It was an investment.
When I modified the long functions, I broke them into smaller ones. This had the benefit of making duplicate code obvious, as some of the smaller functions performed identical tasks. (The large versions of the functions also performed these identical tasks, but the duplications were not apparent.)
I combined duplicate functions into single functions, and reduced the overall size of the code. I also created abstract classes to hold functions common to concrete derived classes.
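The names in the sketch below are invented, but the pattern is the one described above: once long methods were split into small ones, the shared steps became obvious and could be pulled into an abstract base class, leaving the concrete classes to supply only what differs.

```ruby
# Two report classes once carried identical formatting code inside long
# methods. After splitting, the common skeleton lives in a base class.

class Report
  def initialize(rows)
    @rows = rows
  end

  # Template method: the shared skeleton, built from small steps.
  def render
    [header, *formatted_rows, footer].join("\n")
  end

  private

  def formatted_rows
    @rows.map { |r| "  #{r}" }
  end

  def footer
    "-- #{@rows.size} rows --"
  end

  def header
    raise NotImplementedError  # concrete reports supply the title
  end
end

class SalesReport < Report
  def header
    "SALES"
  end
end

class InventoryReport < Report
  def header
    "INVENTORY"
  end
end

puts SalesReport.new(["widget: 3"]).render
```

Each method is now short enough to keep Rubocop's default length checks quiet, and the duplication lives in exactly one place.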
The simpler version of the code was, well, simpler. That meant that subsequent changes were easier to implement. In fact, one particular feature had me worried, yet the simpler code made that feature easy to add.
Rubocop helped me simplify the code, which made it easy for me to add new features -- and get them right. Rubocop was a tool, a useful tool.
On your projects, be aware of the tools that can help you.
Which language?
When starting a project, the question you must answer is: Which language?
The answer is not simple.
If you're an Oracle shop then you are most likely comfortable with Oracle products and you should probably pick Java.
If you're a Microsoft shop then you will be comfortable with the Microsoft languages C# and F#.
If you're a Google shop, you may want to consider Google's Go language.
If you use Linux and open source tools, you may want to look at Perl, Python, Ruby, and JavaScript. Perl 6 has just been released, and may be a bit too "new" for serious enterprise projects. Python and Ruby are both mature, and JavaScript has a lot of support.
The interesting aspect here is that your choice of language is not about technology but relationships. All of these languages are capable. All of these languages have advocates, and detractors. None of these languages are perfect.
There are two other languages which you may consider. These are not connected with specific companies -- although implementations may be provided by companies. But the languages themselves are independent.
Those languages are COBOL and FORTRAN. Both are available from a number of sources, and for a number of platforms. COBOL is designed for financial transactions; FORTRAN for numerical computations. If your work falls into these categories, these languages are worth considering. (COBOL and FORTRAN are, however, not general-purpose languages and should not be considered for problems outside of their domains.)
Astute readers will note that I have omitted C and C++ from this discussion. If I were contemplating a move from Java to another language, I would consider the above-listed languages before C or C++. And if I got to the point of considering C++, I would think very strongly about the STL and Boost libraries.
All of these languages are capable. Each has advantages. A large consideration is the relationship that the language brings: Microsoft for C#, Google for Go, open source for Perl, Python, and Ruby. Don't ignore that.
Sunday, January 3, 2016
Predictions for 2016
It's the beginning of a new year, which means... predictions! Whee!
Let's start with some obvious predictions:
- Mobile will be big in 2016.
- Cloud will be big in 2016.
- NoSQL and distributed databases will be big in 2016.
Predictions like these are easy.
Now for something a little less obvious: legacy applications.
With the continued interest in mobile, cloud, NoSQL, and distributed databases, these areas will see strong demand for architects, developers, designers, and testers. That demand will pull people away from legacy applications -- those applications built for classic, non-cloud web architectures as well as the remaining desktop applications and mainframe batch systems.
Which is unfortunate for the managers of those legacy applications, because I believe that 2016 is going to be the year that companies decide that they want to migrate those legacy applications to the cloud/mobile platform.
When the web appeared, lots of managers held back, waiting to see if the platform would prove itself. It did, and companies migrated most of their applications from desktop to web (either external or internal). Even Microsoft, stalwart of desktop applications, created a web-based version of Outlook.
Likewise, when mobile and cloud appeared, many managers held back and waited for the new technologies to prove themselves. With almost ten years of mobile and cloud, and many companies already using those technologies, it's time for the holdouts to take action.
Look for renewed interest in converting existing desktop and classic web applications. The conversions have challenges. In one sense, the job is easier than the early conversions, because we now have experience with mobile/cloud systems and we understand the architecture. In other ways, this may be harder, as the easy conversions (the "low-hanging fruit") have already been done, which means that the remaining conversions are harder.
The architecture of mobile/cloud systems (with or without distributed databases) is different from classic web applications. (And very different from desktop applications.)
I think that 2016 will be the year of rude awakening, as companies look at the effort to convert their legacy systems to newer technologies.
But the rude awakening is delivered in two phases. The first is the cost and time to convert legacy applications. The second is the cost of maintaining legacy applications in their current form.
Why the cost of maintaining legacy applications, without changing them to newer technologies? Because the demand for mobile/cloud talent is high. New entrants to the field will know the new technologies, and select jobs that let them use that knowledge. That means that the folks with knowledge of the older technologies will be, um, older.
The folks with knowledge about older languages (C++, Visual Basic) and older APIs (Flash) will be the senior developers. And senior developers are more expensive than junior developers.
So the owners of legacy applications have a rather unpleasant choice: migrate to mobile/cloud, which is expensive, or stay on the legacy platform, which will also be expensive.
Sunday, December 27, 2015
For the future of Java, look to Google
What is the future of Java? It is a popular language, perhaps a bit long in the tooth, yet still capable. It struggled under Sun. Now it is the property of Oracle.
Oracle is an interesting company, with a number of challenges. Its biggest challenge is the new database technologies that provide alternatives to SQL. Oracle built its fortune on the classic, ACID-based, SQL database, competing with IBM and Microsoft.
Now facing competition not only in the form of other companies but in new technologies, Oracle must perform. How will it use Java?
For the future of Java, I suggest that we look to Google and Android. Java is part of Android -- or at least the Java bytecode. Android apps are written in Java on standard PCs, compiled into Java bytecode, and then delivered to Android devices. The Android devices (phones, tablets, what-have-you) use not the standard JVM interpreter but a custom-made one named "Dalvik".
Oracle and Google have their differences. Oracle sued Google, successfully, for using the Java APIs. (A decision with which I disagree, but that is immaterial.)
Google now faces a decision: stay with Java or move to something else. Staying with Java will most likely mean paying Oracle a licensing fee. (Given Oracle's business practices, probably an exorbitant licensing fee.)
Moving to a different platform is equally expensive. Google will have to select a new language and make tools for developers. They will also have to assist developers with existing applications, allowing them to migrate to the new platform.
Exactly which platform Google picks isn't critical. Possibly Python; Google supports it in their App Engine. Another candidate is Google Go, which Google also supports in App Engine. (The latter would be a little more complicated, as Go compiles to executables and not bytecode, and I'm not sure that all Android devices have the same processor.)
Google's decision affects more than just Google and Android. It affects the entire market for Java. The two big segments for Java are server applications and Android applications. (Java as a teaching language is probably the third.) If Google were to move Android to another language, a full third of the Java market would disappear.
If you have a large investment in Java applications (or are considering building new Java applications), you may want to keep an eye on Google and Android.