Every year, Linux enthusiasts hope that the new year will be the "year of the Linux desktop", the year that Linux dethrones Microsoft Windows as the chief desktop operating system.
I have bad news for the Linux enthusiasts.
There is no Linux desktop.
More specifically, there is not one Linux desktop. Instead, there is a multitude. There are multiple Linux distributions ("distros" in jargon) and it seems that each has its own ideas about the desktop. Some emulate Microsoft Windows, in an attempt to make it easy for people to convert from Windows to Linux. Other distros do things their own (and presumably better) way. Some distros focus on low-end hardware, others focus on privacy. Some focus on forensics, and others are tailored for tinkerers.
Distributions include Debian, Ubuntu, Mint, SUSE, Red Hat, Fedora, Arch Linux, elementary OS, Tails, Kubuntu, CentOS, and more.
The plethora of distributions splits the market. No one distribution is the "gold standard". No one distribution is the leader.
Here's what I consider the big problem for Linux: The split market discourages some software vendors from entering it. If you have a new application, do you support all of the distros or just some? Which ones? How do you test all of the distros that you support? What do you do with customers who use distros that you don't support?
Compared to Linux, the choice of releasing for Windows and macOS is rather simple. Either you support Windows or you don't. (And by "Windows" I mean "Windows 10".) Either you support macOS or you don't. (The latest version of macOS.) Windows and macOS each provide a single platform, with a single installation method, and a single API. (Yes, I am simplifying here. Windows has multiple ways to install an application, but it is clear that Microsoft is transitioning to Universal Windows Platform apps.)
I see nothing to reduce the number of Linux distros, so this condition will continue. We will continue to enjoy the benefits of multiple Linux distributions, and I believe that to be good for Linux.
But it does mean that the Evil Plan to take over all desktops will have to wait.
Monday, March 4, 2019
Wednesday, February 27, 2019
R shows that open source permits poor quality
We like to think that open source projects are better than closed source projects. Not just cheaper, but better -- higher quality, more reliable, and easier to use. But while high quality and reliability and usability may be the result of some open source projects, they are not guaranteed.
Consider the R toolchain, which includes the R interpreter, the Rmd markdown language, the RStudio IDE, and commonly used models built in R. All of these are open source, and all have significant problems.
The R interpreter is a large and complicated program. It is implemented in multiple programming languages: C, C++, Fortran -- and Java seems to be part of the build as well. To build R you need compilers for all of these languages, and you also need lots of libraries. The source code is not trivial; it takes quite a bit of time to compile the R source and get a working executable.
The time for building R concerns me less than the mix of languages and the number of libraries. R sits on top of a large stack of technologies, and a problem in any piece can percolate up and become a problem in R. If one is lucky, R will fail to run; if not, R will run and use whatever data happens to be available after the failure.
The R language itself has problems. It uses one-letter names for common functions ('t' to transpose a matrix, 'c' to combine values into a vector), which makes these letters poor choices for "normal" variables. (R does find the function even when the name is also bound to a variable, because it searches for a function at call positions. But even then, a program that reuses these names is confusing to read.)
R also suffers from too many data containers. One can have a list, which is different from a vector, which is different from a matrix, which is different from a data frame. The built-in libraries each expect data of a particular type, but woe to anyone who passes one structure when a different one is expected. (Some functions do the right thing, and others complain about a type mismatch.)
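A short sketch in ordinary R makes both complaints concrete (the variable names here are purely illustrative):

```r
# One-letter built-ins: 't' transposes a matrix, 'c' combines values
m <- matrix(1:6, nrow = 2)
t(m)           # calls the transpose function
t <- "oops"    # 't' is now also a variable...
t(m)           # ...yet this still calls the function, which reads confusingly

# Four containers that look alike but behave differently
v  <- c(1, 2, 3)                  # vector: one type, one dimension
l  <- list(1, "two", TRUE)        # list: mixed types allowed
mx <- matrix(1:4, nrow = 2)       # matrix: one type, two dimensions
df <- data.frame(x = 1:2, y = c("a", "b"))  # data frame: mixed-type columns

mean(v)    # works as expected
mean(l)    # warns "argument is not numeric or logical" and returns NA
```

Functions such as mean() at least warn; others silently coerce, or stop with a less helpful message, which is exactly the inconsistency at issue.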
Problems are not confined to the R language. The Rmd markdown language is another problem area. Rmd is based on Markdown, which has problems of its own. Rmd inherits these problems and adds more. A document in Rmd can contain plain text, markdown for text effects such as bold and italic, blocks of R code, and blocks of TeX. Rmd is processed into regular Markdown, which is then processed into the output form of your choice (PDF, HTML, MS-Word, and a boatload of other formats).
Markdown allows you to specify line breaks by typing two space characters at the end of a line. (Invisible markup at the end of a line! I thought 'make' had poor design with TAB characters at the front of lines.) Some Markdown processors also allow you to force a line break with a backslash at the end of a line, which is at least visible; whether that works in Rmd depends on the underlying Markdown processor, so the invisible characters at the end of a line remain the form you can count on.
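To make the invisible markup concrete, here is a sketch; the '·' marks stand in for the two trailing space characters, which would be invisible in a real document:

```
The quick brown fox··
jumps over the lazy dog.
```

With the two trailing spaces, the second line starts on a new line of output; without them, Markdown joins both lines into a single paragraph line.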
The RStudio IDE is perhaps the best of the different components of R, yet it too has problems. It adds packages as needed, but when it does, it displays status messages in red, a color usually associated with errors or warnings. It allows one to create documents in R or Rmd format, asking for a name. But for Rmd documents, the name you enter is not the name of the file; it is inserted into the file as part of a template. (Rmd documents contain metadata, and a title is inserted into that metadata.) When creating an Rmd document in RStudio, you have to start the process, enter a name to satisfy the metadata, see the file displayed, and then save the file. RStudio then asks you (again) for a name, but this time it is for the file, not the metadata.
The commonly used models (small or not-so-small programs written in R or a mix of languages) are probably the worst area of the R ecosystem. The models can perform all sorts of calculations, and the quality of the models ranges from good to bad. Some models, such as those for linear programming, use variables and formulas to specify the problem you want solved. But the variables of the model are not variables in R; do not confuse the two separate things with the same name. There are two namespaces (one for R and one for the model) and each namespace holds variables. The programmer must mentally keep the variables sorted. Referring to a variable in the wrong namespace yields the expected "variable not found" error.
Some models have good error messages, others do not. One popular model for linear programming, upon finding a variable name that has not been specified for the model's namespace, simply reports "A variable was specified that is not part of the model." (That's the entire message. It does not report the offending name, nor even a program line number. You have to hunt through your code to find the problem name.)
Given the complexity of R, the mix of languages in Rmd, the foibles of RStudio, and the mediocre quality of commonly used extensions to R, I can say that R is a problem. The question is, how did this situation arise? All of the components are open source. How did open source "allow" such poor quality?
It's a good question, and I don't have a definite answer. But I do have an idea.
We've been conditioned to think of open source as a way to develop quality software by the commonly known successful open source projects: Linux, Perl, Python, Ruby, and LibreOffice. These projects are well-respected and popular with many users. They have endured; each is over ten years old. (There are other well-respected open source projects, too.)
When we think of these projects, we see success and quality. Success and quality are the result of hard work, dedication, and a bit of luck. These projects had all of those elements. Open source, by itself, is not enough to force a result of high quality.
These successful projects have been run by developers, and more importantly, for developers. That is certainly true of the early, formative years of Linux, and true for any open source programming language. I suspect that the people working on LibreOffice are primarily developers.
I believe that this "by developers, for developers" dynamic does not hold for the R ecosystem. I suspect that the people working on the R language, the Rmd markdown format, and especially the people building the commonly used models are first and foremost data scientists and analysts, and developers second. They are building R for their needs as data scientists.
(I omit RStudio from this list. RStudio appears to be a commercial endeavor, which means that its developers are paid to be developers. That makes the status messages in red even more embarrassing.)
I will note that the successful open source projects have had an individual as a strong leader for the project. (Linux has Linus Torvalds, Perl has Larry Wall, etc.) I don't see a strong individual leading the R ecosystem or any component. These projects are -- apparently -- run by committee, or built by separate teams. It is a very Jeffersonian approach to software development, one which may have an effect on the quality of the result. (Again, this is all an idea. I have not confirmed this.)
Where does this leave us?
First, I am reluctant to trust any important work to R. There are too many "moving pieces" for my taste -- too many technologies, too many impediments to good code, too many things that can go wrong. The risks outweigh the benefits.
Second, in the long term, we may move away from R. The popularity of R lies not in the R language but in the ability to create (or use) linear programming models. Someone will create a platform for analysis, with the ability to define and run linear programming models. It will "just work", to borrow a phrase from Apple.
Moving away from R and the current toolchain is not guaranteed. The current tools may have achieved a "critical mass" of acceptance in the data science community, and the cost of moving to a different toolchain may be viewed as unacceptable. In that case, the data science community can look forward to decades of struggles with the tools.
The real lesson is that open source does not guarantee quality software. R and the models are open source, but the quality is... mediocre. High quality requires more than just open source. Open source may be, as the mathematicians say, "necessary but not sufficient". We should consider this when managing any of our projects. Starting a project as open source will not guarantee success, nor will converting an existing project to open source. A successful project needs more: capable management, good people, and a vision and definition of success.
Tuesday, February 12, 2019
Praise for Microsoft
I am not Microsoft's biggest fan. I disliked their products and strategies in the 1990s, when they had a virtual monopoly on desktop operating systems, office software, and development tools. Yet I must give them credit for two recent products: OneDrive and Visual Studio Code.
OneDrive
OneDrive synchronizes files across multiple devices. I can store a file in OneDrive on computer A and later retrieve it on computer B. OneDrive stores data on Microsoft's servers and associates it with my account. If I log in to a Windows computer with my ID and password, I can see all of my files on OneDrive. The files are not copied to the local computer, they are simply available for me to view, change, or delete.
OneDrive also provides storage for online services such as Office Online. This lets me use any computer, even a public one in a library. (I think. I have yet to try this. But it makes sense for Microsoft to do things this way.)
Visual Studio Code
The other product that deserves credit is Visual Studio Code.
Microsoft advertises Visual Studio Code as an editor, yet it is much more. It edits, color-highlights, checks syntax, refactors, debugs (at least with Python), and integrates with git. It has an impressive array of features in a small package. What is significant is that the features are just the right set -- at least for me, and I suspect a large number of developers. It is not weighed down with all of the features of Microsoft's classic Visual Studio package. Visual Studio Code omits the templates and the auto-generation. It replaces the package manager with a series of lightweight plug-ins. It seems to ignore Team Foundation Server (and services), although I could be mistaken about that. (Perhaps there is an enterprise version of VS Code that connects to TFS.)
Beyond the feature set, Visual Studio Code... works. It's a competent product, one that feels good to use. It has just enough to get the job done, and it gets the job done well. I feel comfortable using it. (And that's a rare thing with me and Microsoft products.)
Visual Studio Code is a departure from the traditional Microsoft approach to software. The old Microsoft built software for Windows -- and Windows only. (A few exceptions were made for the Mac.) Visual Studio Code breaks from that tradition: it is available for Windows, macOS, and Linux. This is indeed a ground-breaking project.
OneDrive and Visual Studio Code make for a pleasant experience when developing code. Microsoft deserves credit for bold choices and good tools. If you have not tried them, I recommend that you do.
What have you got to lose?
Tuesday, January 29, 2019
Intelligence, real and artificial
We humans have been working on artificial intelligence for a long time. At least fifty years, by my count, and through most of that time, true artificial intelligence has been consistently "twenty years away".
Of course, one should not talk solely about artificial intelligence. We humans have what we style as real intelligence. Perhaps the term "organic intelligence" is more appropriate, as human intelligence evolved organically over the ages. Let's not argue too much about terms.
The human mind is a strange thing. It is the only thing in the universe that can examine itself. (At least, it's the only one that we humans know about.)
There are many models of the human mind. We have studied the anatomy, the physiology, the chemistry, ... and we still understand little about how it works. Freud studied the human psyche (close enough to the mind for this essay) and Skinner studied animal behaviors with reward systems, and we still know little about the mind.
But there is one model that strikes me as useful when developing artificial intelligence: the notion of the human brain as two different but connected processors.
In this model, humans have not one but two processors: one slow and linear, the other fast and parallel. The slow, linear processor gives us analytical thought, math, logic, and language. The fast, parallel processor gives us intuition, using a pattern-matching system.
The logical side is easy for us to examine. It is linear and relatively slow, and since it has language, it can talk to us. We can follow a chain of reasoning and understand how we arrive at an answer. (We can also explain our reasoning to another person, or write it down.)
The intuitive side is difficult to examine. It is parallel and relatively fast, and since it does not have language, it cannot explain how it arrives at an answer. We don't know why we get the results we get.
From an evolution point of view, it is easy to see how we developed the intuitive (pattern-matching) side. Our ancestors were successful when they identified a rabbit (and ate it) and identified a tiger (and ran away from it). Pattern matching is quite useful for survival.
It is less clear how we evolved the linear-logical side of our brain. Slow, analytic thought may be helpful for survival, but perhaps not as helpful as avoiding tigers. Communication is clearly a benefit when living in groups. No matter how it arose, we have it.
These two sides make up our brain. (Yes, I am aware that there are various levels of the brain, all the way down to the brain stem, but bear with me.)
Humans are successful, I believe, because we have both the logical and the intuitive processors. We use both in our everyday life: we recognize other humans and breakfast cereals, and we think through business strategies and algebra homework. We pick the right processor for the problem at hand.
Now let's shift from our human intelligence to ... not artificial intelligence but computer intelligence, such as it is.
Traditional computing is our logical, math-oriented brain with a turbocharger. Computers are fast, and can perform calculations rapidly and reliably, but they don't have "common sense", or the intuition that we humans use. Yet for all that speed, we can examine a program and understand how the computer arrives at a result.
Artificial intelligence, on the other hand, corresponds to the intuitive side of human intelligence. It solves problems (often quickly) through pattern-matching techniques. And, just as the human intuitive, pattern-matching brain cannot explain how it arrives at a result, neither can artificial intelligence systems. We cannot simply examine the program and look at some variables to understand how the result was determined.
So now we have two artificial systems, one logical and one intuitive. These two types of "intelligence" mirror the two types in humans.
The real advance will be to combine the traditional computing systems (the logical systems) with artificial intelligence (the pattern-matching systems), just as our brains combine the two. Bringing the two disparate systems into one will be necessary for true, Skynet-class, Forbin-class, HAL-9000-class, artificial intelligence.
I expect that joining the two will be quite the challenge. We understand little about our human brains and how the logical and intuitive processors coordinate their work. Getting the logical and intuitive computer systems to work together will be (I think) a long effort.
But when we get it -- watch out!
Of course, one should not talk solely about artificial intelligence. We humans have what we style as real intelligence. Perhaps the term "organic intelligence" is more appropriate, as human intelligence evolved organically over the ages. Let's not argue too much about terms.
The human mind is a strange thing. It is the only thing in the universe that can examine itself. (At least, it's the only one that we humans know about.)
There are many models of the human mind. We have studied the anatomy, the physiology, the chemistry, ... and we still understand little about how it works. Freud studied the human psyche (close enough to the mind for this essay) and Skinner studied animal behaviors with reward systems, and we still know little about the mind.
But there is one model that strikes me as useful when developing artificial intelligence: the notion of the human brain as two different but connected processors.
In this model, humans have not one but two processors: one slow and linear, the other fast and parallel. The slow, linear processor gives us analytical thought, math, logic, and language. The parallel side gives us intuition, using s a pattern-matching system.
The logical side is easy for us to examine. It is linear and relatively slow, and since it has language, it can talk to us. We can follow a chain of reasoning and understand how we arrive at an answer. (We can also explain our reasoning to another person, or write it down.)
The intuitive side is difficult to examine. It is parallel and relatively fast, and since it does not have language, it cannot explain how it arrives at an answer. We don't know why we get the results we get.
From an evolution point of view, it is easy to see how we developed the intuitive (pattern-matching) side. Our ancestors were successful when they identified a rabbit (and ate it) and identified a tiger (and ran away from it). Pattern matching is quite useful for survival.
It is less clear how we evolved the linear-logical side of our brain. Slow, analytic thought may be helpful for survival, but perhaps not as helpful as avoiding tigers. Communication is clearly a benefit when living in groups. No matter how it arose, we have it.
These two sides make up our brain. (Yes, I am aware that there are various levels of the brain, all the way down to the brain stem, but bear with me.)
Humans are successful, I believe, because we have both the logical and the intuitive processors. We use both brains in our everyday life, from recognizing other humans and breakfast cereal, and we think about business strategies and algebra homework. We pick the right processor for the problem at hand.
Now let's shift from our human intelligence to ... not artificial intelligence but computer intelligence, such as it is.
Traditional computing is our logic, math-oriented brains with a turbocharger. Computers are fast, and can perform calculations rapidly and reliably, but they don't have "common sense", or the intuition that we humans use. While fast, we can examine the program and understand how a computer arrives at a result.
Artificial intelligence, on the other hand, corresponds to the intuitive side of human intelligence. It can solve problems (relatively quickly) many times through pattern-matching techniques. And, just as the human intuitive, pattern-matching brain cannot explain how it arrives at a result, neither can artificial intelligence systems. We cannot simple examine the program and look at some variables to understand how the result was determined.
So now we have two artificial systems, one logical and one intuitive. These two types of "intelligence" correspond to the two types in humans.
The real advance will be to combine the traditional computing systems (the logical systems) with artificial intelligence (the pattern-matching systems), just as our brains combine the two. Bringing the two disparate systems into one will be necessary for true, Skynet-class, Forbin-class, HAL-9000-class, artificial intelligence.
I expect that joining the two will be quite the challenge. We understand little about our human brains and how the logical and intuitive processors coordinate their work. Getting the logical and intuitive computer systems to work together will be (I think) a long effort.
But when we get it -- watch out!
Wednesday, January 23, 2019
Apple improves, but does not invent
That headline is a bit strong, and fighting words to Apple devotees.
My point is that Apple examines a market, finds a product, and builds a better version. It has built the proverbial better mousetrap, and the world has beaten a path to Apple's door.
Let's look at the history.
The iPhone, Apple's signature and most successful product, was (and is) a hand-held computer that can act as a cell phone. The iPhone was not the first cell phone, nor was it the first hand-held computer. Prior to the iPhone we had the Palm series of computers, and Microsoft's attempt at hand-held computers with a stripped-down version of Windows named, unfortunately, WinCE. (And "wince" is how many of us had to look at the low-resolution screens of the Windows hand-held computers.)
Apple looked at the market of hand-held computers and built a better version. It just happened to have a phone.
Apple's success goes beyond the iPhone.
The iPod was Apple's music player. It was not the first music player, but it was better than existing models. The physical design was better, but more importantly, the iTunes software that let people easily (and consistently) purchase music (rather than download from shady web sites) made the iPod a success.
Apple looked at the market of music players and music downloads and built a better version.
The MacBook Air was (and still is) Apple's design for a laptop computer. Slim, light, and capable, it was better than the competing laptops of the time. Manufacturers have now adopted the MacBook design into their own product lines.
Apple looked at the market of laptop computers and built a better version.
The Apple Watch is a product with modest success. It was not the first digital watch. Its connectivity to the iPhone makes it easy to use and more capable than other digital watches, including those with apps that you install on your phone.
Apple looked at the market of digital watches and built a better version.
The Macintosh was Apple's second major success in the computer market. As with the iPod, the hardware was good and the software was better. It was the operating system, with its graphical interface, that made the Macintosh a success. The "Mac" was easier to use than PCs or clones running PC-DOS.
Apple looked at the market of desktop computers (and operating systems) and built a better version.
The history shows that Apple does not invent successful products, but instead improves on existing products. Apple often brings disparate elements together (the iPhone was a combination of cell phone and hand-held computer, the Macintosh was a combination of PC and graphical operating system) into a usable design. That's a good skill, and somewhat rare. But it has its weakness. (More on that later.)
Apple can invent; they designed and built the Apple II in the early microcomputer days. While the Apple II was not the first home computer, it was among the first. The Apple II was built for the consumer market, much like a television. Apple invented the home computer -- perhaps at the same time as others -- and deserves credit for its work.
The problem with Apple's strategy (building better mousetraps) is that there must be mousetraps (of the non-better kind) to begin with. One cannot build the better mousetrap until mousetraps exist, and one has ideas for a better version of the mousetrap.
The challenge for Apple, now, is to find a market and build a better product. I'm not sure where Apple can go. There is AI, which is a nascent market with a few early products (much like the microcomputer market before the IBM PC) but it does not fit well into Apple's overall strategy of selling hardware.
Another possibility is virtual reality and augmented reality. Microsoft offers the HoloLens heads-up display, which is admired but not used all that much. (A few games and experimental applications, but no "killer" app.) Apple could design and sell its own heads-up display, but it may encounter the same dearth of applications that stymies Microsoft. Apple would have to create (or entice others to create) content. It could be done, but it doesn't fit the historical pattern.
Apple could move into games with a gaming console of their own. There are already games available, and the market has a ready set of customers. The competition is stiff, and Apple will have a challenge to design the "better mousetrap" of a game console, and convince game producers to create versions for Apple's console. Also, gamers -- serious players -- like to modify and enhance their hardware. Apple products are usually sealed shut and closed to modification.
Self-driving cars? The idea has been discussed. But self-driving cars are not commercially available. Apple likes to let others develop the first products and then bring on a better mousetrap.
Apple has avoided the cloud services market. Apple may use cloud technologies for things like Siri and backup, but they don't sell services the way Amazon and Microsoft do. (Mostly, I think, because cloud services move processing off the local device, and Apple wants to sell expensive local devices.)
As tablet sales decline and phone sales plateau, Apple has some interesting challenges. I don't know where they are going to go. (Which means I will be surprised when they do.) I'm not bearish on Apple -- I think they have a bright future -- and I hope to be pleasantly surprised.
Thursday, January 10, 2019
Predictions for tech in 2019
Predictions are fun! They allow us to see into the future -- or at least claim that we can see into the future. They also allow us to step away from the usual topics and talk about almost anything we want. Who could resist making predictions?
So here are my predictions for 2019:
Programming languages: The current "market" for programming languages is fractured. There is no one language that dominates. The ten most popular languages (according to Tiobe) are Java, C, Python, C++, VB.NET, C#, JavaScript, PHP, SQL, and Objective-C. The top ten are not evenly distributed; Java and C are in a "lead group" and the remaining languages are in a second group.
O'Reilly lists Python, Java, Go, C#, Kotlin, and Rust as languages to watch for 2019. Notice that this list is different from Tiobe's "most popular" -- Rust and Kotlin show on that index in positions 34 and 36, respectively. Notably absent from O'Reilly's list are C++ and Perl.
For 2019, I predict that the market will remain fragmented. Java will remain in the lead group unless Oracle, who owns Java, does something that discourages Java development. (And even then, so many systems are currently written in Java that Java will remain in use for years. Java will be the COBOL of the 2020s: used in important business systems but not liked very much by younger developers.) C will remain in the lead group. (The popularity of C is hard to explain. But whatever C has, people like.)
Fragmentation makes life difficult for managers. Which languages should their teams use? A single leader makes the decision easy. The current market, with multiple capable languages, allows for debates about development languages. An established project provides the argument of sticking with the current programming language; a new project (with no existing code) makes the decision somewhat harder. (My advice: pick a popular language that gets the job done for you. Don't worry about it being the best language. Good enough is... good enough.)
Operating systems: Unlike the "market" for programming languages, the "market" for operating systems is fairly uniform. I should say "markets": we can consider the desktop/laptop segment, the server segment, and possibly a cloud segment. For the desktop, Windows is dominant, and will remain dominant in 2019. Windows 10 is capable and especially good for large organizations who want centralized administration. MacOS is used in a number of shops, especially smaller organizations and startups, and will continue to have a modest share.
For servers, Linux dominates and will continue to dominate in 2019. Windows runs some servers, and will continue to, especially in organizations who consider themselves "Microsoft shops".
The interesting future for operating systems is the cloud segment. Cloud services run on operating systems, usually Linux or Windows, but this is changing on two fronts. The first is the hypervisor, which sits below the virtual operating system in a cloud environment; the second is containers, which sit above the virtual operating system (and which contain an application).
Hypervisors are well-understood and well established. Containers are new (well, new-ish) and not as well understood, but gaining acceptance. Between the two sits the operating system, which is coming under pressure as hypervisors and containers perform tasks that were traditionally performed by operating systems.
In the long run, hypervisors, containers, and operating systems will achieve a new equilibrium, with operating systems doing less than they have in the past. The question will not be "Which operating system for my cloud application?" but instead "Which combination of hypervisor, operating system, and container for my cloud application?". And even then, there may be large shops that use a mixture of hypervisor, operating system, and container for their applications.
Virtual reality and augmented reality: Both will remain experimental. We have yet to find a "killer app" for augmented reality, something that combines real-world and supplied visuals in a compelling application.
Cloud services: Amazon.com dominates the market, and I see little to change that. Microsoft and Google will maintain (and possibly increase) their market shares. Other players (IBM, Dell, Oracle) will remain small.
The list of services available from cloud providers is impressive and daunting. Amazon is in a difficult position; its services are less consistent than Microsoft's and Google's. Both Microsoft and Google came into the market after Amazon and developed their offerings more slowly. The result has been a smaller market share but a more consistent set of services (and, I dare say, a better experience for the customer). Amazon may change some services to make things more consistent.
Phones: Little will change in 2019. Apple and Android will remain dominant. 5G will get press and slow roll-out by carriers; look for true implementation and wide coverage in later years.
Tablets: 2019 may be the "last year of the tablet" -- at least the non-laptop convertible tablet. Tablet sales have been anemic, except for iPads, and even those are declining. Apple could introduce an innovation to the iPad which increases its appeal, but I don't see that. (I think Apple will focus on phones, watches, earphones, and other consumer devices.)
I see little interest in tablets from other manufacturers, probably due to the lack of demand by customers. As Android is the only other (major) operating system for tablets, innovation for Android tablets will have to come from Google, and I see little interest from Google in tablets. (I think Google is more interested in phones, location-based services, and advertising.)
In sum, I see 2019 as a year of "more of the same", with few or no major innovations. I suspect that the market for tech will, at the end of 2019, look very much like the market for tech at the beginning of 2019.
Labels: cloud, desktop, phones, predictions, predictions for the year, servers, tablets
Thursday, December 6, 2018
Rebels need the Empire
The PC world is facing a crisis. It is a silent crisis, one that few people understand.
That crisis is the evil empire, or more specifically, the lack of an evil empire.
For the entire age of personal computers, we have had an evil empire. The empire changed over time, but there was always one. And that empire was the unifying force for the rebellion.
The first empire was IBM. Microcomputer enthusiasts were fighting this empire of large, expensive mainframe computers. We fought it with small, inexpensive (compared to mainframes) computers. We offered small, interactive, "friendly" programs written in BASIC in opposition to batch mainframe systems written in COBOL. The rebellion used Apple II, TRS-80, and other small systems to unite and fight for liberty. This rebellion was successful. So successful that IBM decided to get in on the personal computer action.
The second empire was also IBM. The IBM PC became the standard for computing, and the diverse set of computers prior to the IBM model 5150 was wiped out. Rebels refused to use IBM PCs and attempted to keep non-PC-compatible computers financially viable. That struggle was lost, and the IBM design became the standard design. Once Compaq introduced a PC-compatible (and didn't get sued) other manufacturers introduced their own PC compatibles. The one remnant of this rebellion was Apple, who made non-compatible computers for quite some time.
The third empire was Microsoft. The makers of IBM-compatible PCs needed an operating system, and Microsoft was happy to sell them MS-DOS. IBM challenged Microsoft with OS/2 (itself a joint venture with Microsoft) but Microsoft introduced Windows and made it successful. Microsoft was so successful that its empire was, at times, considered larger and grander than IBM's mainframe empire. The rebellion against Microsoft took some time to form, but it did arise as the "open source" movement.
But Microsoft has fallen from its position as evil empire. It still holds a majority of the desktop operating system market, but the world of computing has expanded to web servers, smartphones, and cloud systems, and these are outside of Microsoft's control.
In tandem with Microsoft's decline, open source has become accepted as the norm. As such, it is no longer the rebellion. The operating system market is now split three ways: Windows, macOS, and Linux. Each is an acceptable solution.
Those two changes -- Microsoft no longer the evil empire and open source no longer the rebellion -- mean that, at the moment, there is no evil empire.
Some companies have large market shares of certain segments. Amazon.com dominates the web services and cloud market -- but competitors are reasonable and viable options. Microsoft dominates the desktop market, especially the corporate desktop market, but Apple is a possible choice for the corporate desktop.
No one vendor controls the hardware market.
Facebook dominates in social media, but is facing significant challenges in areas of privacy and "fake news". Other media channels like Twitter are looking to gain at Facebook's expense.
Even programming languages have no dominant player. According to the November report from Tiobe, Java and C have been the two most popular languages and neither is gaining significantly. The next three (C++, Python, and VB.net) are close, as are the five following (C#, JavaScript, PHP, SQL, and Go). No language is emerging as a dominant language, as we had with BASIC in the 1980s and Visual Basic in the 1990s.
A world without an evil empire is a new world for us. Personal computers were born under an evil empire, operating systems matured under an evil empire, and open source became respectable under an evil empire. I like to think that such innovations were driven (or at least inspired) by a rebellion, an active group of people who rejected the market leader.
Today we have no such empire. Will innovation continue without one? Will we see new hardware, new programming languages, new tools? Or will the industry stagnate as major players focus more on market share and less on innovation?
If the latter, then perhaps someday a new market leader will emerge, strong enough to win the title of "evil empire" and rebels will again drive innovation.
Labels: evil empire, IBM, innovation, Microsoft, open source