The news that IBM had an agreement to purchase Red Hat (the distributor and supporter of a Linux distro for commercial use) was followed quickly by a series of comments from the tech world, ranging from anger to disappointment.
I'm not sure that the purchase of Red Hat by IBM is a bad thing.
One can view this event in the form of two questions. The first is "Should Red Hat sell itself (to anyone)?". The second is "Given that Red Hat is for sale, who would be a good purchaser?".
The negative reaction, I think, is mostly about the first question. People are disappointed (or angered) by the sale of Red Hat -- to anyone.
But once you commit to a sale, the question changes and the focus is on the buyer. Who are possible buyers for Red Hat?
IBM is, of course, a possibility. Many people might object to IBM, and if we think of the IBM of its monopoly days, with its arrogance and incompatible hardware designs, then IBM would be a poor choice. (Red Hat would also be a poor acquisition for that IBM.)
But IBM has changed quite a bit. It still sells mainframes; its S/36 line has mutated into servers, and it has sold (long ago) its PC business. It must compete in the cloud arena with Amazon.com, Microsoft, and Google (and Dell, and Oracle, and others). Red Hat helps IBM in this area. I think IBM is not so foolish as to break Red Hat or make many changes.
One possibility is that IBM purchased Red Hat to prevent others from doing so. (You buy something because you need it or because you want to keep it from others.) Who are the others?
Amazon.com and Microsoft come quickly to mind. They both offer cloud services, and Red Hat would help both with their offerings. The complainers may consider this; would they prefer Red Hat to go to Amazon or Microsoft? (Of the two, I think Microsoft would be the better owner. It is expanding its role with Linux and moving its business away from Windows and Windows-only software to a larger market of cloud services that support both Windows and Linux.)
There are other possible purchasers. Oracle has been mentioned by critics (usually as a "could be worse, could be Oracle" comment). Red Hat fills a gap in Oracle's product line between hardware and its database software, and also provides a platform for Java (another Oracle property).
Beyond those, there are Facebook, Dell, and possibly Intel, although I consider the last to be a long shot. None of them strike me as a good partner.
Red Hat could be purchased by an equity/investment company, which would probably doom Red Hat to partitioning and sales of individual components.
In the end, IBM seems quite a reasonable purchaser. IBM has changed from its old ways and it supports Linux quite a bit. I think it will recognize value and strive to keep it. Let's see what happens.
Monday, October 29, 2018
Tuesday, October 23, 2018
It won't be Eiffel
Bertrand Meyer has made the case, repeatedly, for design-by-contract as a way to improve the quality of computer programs. He has been doing so for the better part of two decades, and perhaps longer.
Design-by-contract is a notion that uses preconditions, postconditions, and invariants in object-oriented programs. Each is a form of assertion, a test that is performed at a specific time. (Preconditions are checked before a function executes, postconditions after it returns, and invariants at both points.)
Design-by-contract is a way of ensuring that programs are correct. It adds rigor to programs, and requires careful analysis and thought in the design of software. (Much like structured programming required analysis and thought for the design of software.)
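As a rough sketch of the idea, the checks can be written by hand in a mainstream language. The C++ fragment below fakes a precondition, a postcondition, and a class invariant with plain assertions; the BankAccount class is invented for illustration, and C++ itself (as of this writing) has no built-in contract syntax.

```cpp
#include <cassert>

// A hand-rolled sketch of design-by-contract: the checks that Eiffel
// expresses as require/ensure/invariant clauses are written here as
// ordinary assertions. The class is illustrative, not from any library.
class BankAccount {
    long balance_cents = 0;

    // Invariant: the balance never goes negative.
    void check_invariant() const { assert(balance_cents >= 0); }

public:
    void deposit(long amount_cents) {
        check_invariant();            // invariant holds on entry
        assert(amount_cents > 0);     // precondition: a positive amount
        long old_balance = balance_cents;

        balance_cents += amount_cents;

        // postcondition: the balance grew by exactly the deposited amount
        assert(balance_cents == old_balance + amount_cents);
        check_invariant();            // invariant holds on exit
    }

    long balance() const { return balance_cents; }
};
```

The point of a contract language is that these checks are declared once, in the interface, rather than scattered by hand through every method as above.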
I think it has a good chance of being accepted as a standard programming practice. It follows the improvements we have seen in programming languages: Bounds checking of indexes for arrays, function signatures, and type checking rules for casting from one type to another.
Someone will create a language that uses the design-by-contract concepts, and the language will gain popularity. Perhaps because of the vendor (Microsoft? Google?) or perhaps through grass-roots acceptance (a la Python).
There already is a language that implements design-by-contract: Eiffel, Meyer's own language. It is available today, even for Linux, so developers can experiment with it. Yet it attracts little interest. The Eiffel language does not appear on the TIOBE index (at least not for September 2018) at all -- not only not in the top 20, but not in the top 100. (It may be lurking somewhere below that.)
So while I think that design-by-contract may succeed in the future, I also think that Eiffel has missed its opportunity. It hasn't been accepted by any of the big vendors (Microsoft, Oracle, Google, Apple) and its popularity remains low.
I think that another language may pick up the notion of preconditions and postconditions. The term "Design by Contract" is trademarked by Meyer, so it is doubtful that another language will use it. But the term is not important -- it is the assertions that bring the rigor to programming. These are useful, and eventually will be found valuable by the development community.
At that point, multiple languages will support preconditions and postconditions. There will be new languages with the feature, and adaptations of existing languages (C++, Java, C#, and others) that sport preconditions and postconditions. So Bertrand Meyer will have "won" in the sense that his ideas were adopted.
But Eiffel, the language, will be left out.
Tuesday, October 9, 2018
C without the preprocessor
The C and C++ languages lack one utility that is found in many other languages: a package manager. Will they ever have one?
The biggest challenge to a package manager for C or C++ is not the package manager. We know how to build them, how to manage them, and how to maintain a community that uses them. Perl, Python, and Ruby have package managers. Java has one (sort of). C# has one. JavaScript has several! Why not C and C++?
The issue isn't in the C and C++ languages. Instead the issue is in the preprocessor, an external utility that modifies C or C++ code before the compiler does its work.
The problem with the preprocessor is that it can change just about any token in the code to something else, including statements which would be used by package managers. The preprocessor can change "do_this" to "do_that" or change "true" to "TRUE" or change "BEGIN" to "{".
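A contrived sketch makes the problem concrete. The macro names below are invented for illustration; the point is that the identifiers a tool reads in the source file may never reach the compiler at all.

```cpp
// Contrived illustration of preprocessor token rewriting. After
// preprocessing, the identifier "do_this" and the tokens BEGIN/END
// no longer exist -- a package manager scanning this source text
// would be reasoning about code the compiler never sees.
#define BEGIN {
#define END   }
#define do_this do_that

int do_that(int x)
BEGIN
    return x + 1;
END

// This function appears to call do_this(), but actually calls do_that().
int call_it(int x)
BEGIN
    return do_this(x);
END
```

Any tool that wants to understand the code as the compiler does must therefore run the preprocessor first -- with the exact same set of defined symbols.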
The idea of a package manager for C and C++ has been discussed, and someone (I forget the person now) listed a number of questions that the preprocessor raises for a package manager. I won't repeat the list here, but they were very good questions.
To me, it seems that a package manager and a preprocessor are incompatible. If you have one, you cannot have the other. (At least, not with any degree of consistency.)
So I started thinking... what if we eliminate the C/C++ preprocessor? How would that change the languages?
Let's look at what the preprocessor does for us.
For starters, it is the mechanism to include headers in programs. The "#include" lines are handled by the preprocessor, not the compiler. (When C was first designed, a preprocessor was considered a "win", as it separated some tasks from the compiler and followed the Unix philosophy of separation of duties.) We still need a way to include definitions of constants, functions, structures, and classes, so we need a replacement for the #include command.
A side note: C and C++ standards wonks will know that the standards do not actually require that the preprocessor, rather than the compiler, handle "#include" lines. The standards dictate only that after certain lines (such as #include <string>) the compiler must exhibit certain behaviors. But this bit of arcane knowledge is not important to the general idea of eliminating the preprocessor.
The preprocessor allows for conditional compilation. It allows for "#if/#else/#endif" blocks that can be conditionally compiled, based on what follows the "#if". Conditional compilation is extremely useful on software that has multiple targets, such as the Linux kernel (which targets many different processors).
The preprocessor also allows for macros and substitution of values. It accepts a "#define" line which can change any token into something else. This mechanism was used for the "max()" and "min()" functions.
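A minimal sketch of both features together -- conditional compilation and token-substitution macros. (TARGET_EMBEDDED is an invented configuration symbol, not a flag from any real project.)

```cpp
// Token substitution: the classic function-like macro. Note that the
// arguments are substituted textually, so each may be evaluated twice --
// a well-known macro pitfall that real functions do not have.
#define MAX(a, b) ((a) > (b) ? (a) : (b))

// Conditional compilation: only one branch survives preprocessing,
// depending on which symbols are defined at build time.
#ifdef TARGET_EMBEDDED
    #define BUFFER_SIZE 256      // small buffer on constrained hardware
#else
    #define BUFFER_SIZE 65536    // generous buffer elsewhere
#endif

int pick_larger(int a, int b)
{
    return MAX(a, b);   // expands to ((a) > (b) ? (a) : (b))
}
```

Replacing these with compiler features is exactly what other languages did: inline or generic functions instead of macros, and build-system or attribute-based configuration instead of #if blocks.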
All of that would be lost with the elimination of the preprocessor. As all of those features are used on many projects, they would all have to be replaced by some form of extension to the compiler. The compiler would have to read the included files, and would have to compile (or not compile) conditionally-marked code.
Such a change is possible, but not easy. It would probably break a lot of existing code -- perhaps all nontrivial C and C++ programs.
Which means that removing the preprocessor from C and C++ and replacing it with something else changes the languages themselves. They would no longer be C and C++, but different languages, deserving of different names.
So in one sense you can remove the preprocessor from C and C++, but in another sense you cannot.
Labels:
C language,
C preprocessor,
C++,
C++ standard,
package manager
Friday, September 28, 2018
Macbooks are not an incentive
I've seen a number of job postings that include the line "all employees use MacBooks".
I suppose that this is intended as an enticement. I suppose that a MacBook is considered a "perk", a benefit of working at the company. Apple equipment is considered "cool", for some reason.
I'm not sure why.
MacBooks in 2018 are decent computers, but I find that they are inferior to other computers, especially when it comes to development.
I've been using computers for quite some time, and programming for most of that time. I've used MacBooks and Chromebooks and modern PCs. I've used older PCs and even ancient PCs with IBM's Model M keyboard. I've worked on IBM's System/23 (which was the origin of the first IBM PC keyboard). I have even used model 33 ASR Teletype terminals, which are large mechanical beasts that print uppercase on roll paper and do a poor job of it. So I know what I like.
And I don't like Apple's MacBook and MacBook Pro computers. I dislike the keyboard; I want more travel in the keys. I dislike the touchpad in front of the keyboard; I prefer the small pointing stick embedded in Lenovo and some Dell laptop keyboards. I dislike Apple's displays, which are too bright and too reflective. I want "matte" finish displays which hide reflections from light sources such as windows and ceiling lights.
My main client provides a computer, one that I must use when working for them. The computer is a Dell laptop, with a high-gloss display and a keyboard that is a bit better than current Apple keyboards, but not by much. I supplement the PC with a matte-finish display and a Matias "Quiet Pro" keyboard. These make the configuration much more tolerable.
Just as I "fixed" the Dell laptop, I could "fix" a MacBook Pro with an additional keyboard and display. But once I do that, why bother with the MacBook? Why not use a Mac Mini, or for that matter any small form factor PC? The latter would probably offer just as much memory and disk, and more USB ports. And cost less. And run Linux.
It may be some time before companies realize that developers have strong opinions about the equipment that they use. I think that they will, and when they do, they will provide developers with choices for equipment -- including the "bring your own" option.
And it may be some time before developers realize that Apple MacBooks are not the best for development. Apple devices have a lot of glamour, but glamour doesn't get the job done -- at least not for me. Apple designs computers for visual appeal, and I need good ergonomic design.
I'm not going to forbid developers from using Apple products, or demand that everyone use the same equipment that I use. I will suggest that developers try different equipment, see which devices work for them, and understand the benefits of those devices. Pick your equipment for the right reasons, not because it has a pretty logo.
In the end, I find the phrase "all employees use MacBooks" to be a disincentive, a reason to avoid a particular gig. Because I would rather be productive than cool.
Tuesday, September 18, 2018
Programming languages and the GUI
Programming languages and GUIs don't mix. Of all the languages available today, none are GUI-based languages.
My test for a GUI-based language is the requirement that any program written in the language must use a GUI. If you can write a program and run it in a text window, then the language is not a GUI-based language.
This is an extreme test, and perhaps unfair. But it shows an interesting point: We have no GUI-based languages.
We had programming before the GUI with various forms of input and output (punch cards, paper tape, magnetic tape, disk files, printers, and terminals). When GUIs came along, we rushed to create GUI programs but not GUI programming languages. (Except for Visual Basic.) We still have GUIs, some thirty years on, and today we have no GUI programming languages.
Almost all programming languages treat windows (or GUIs) as a second thought. Programming for the GUI is bolted on to the language as a library or framework; it is not part of the core language.
For some languages, the explanation is obvious: the language existed before GUIs existed (or became popular). Languages such as Cobol, Fortran, PL/I, Pascal, and C had been designed before GUIs appeared on the horizon. Cobol and Fortran were designed in an era of magnetic tapes, disk files, and printers. Pascal and C were created for printing terminals or "smart" CRT terminals such as DEC's VT-52.
Some languages were designed for a specific purpose. Such languages have no need of GUIs, and they don't have any GUI support. AWK was designed as a text processing language, a filter that fit in with Unix's tool-chain philosophy. SQL was designed for querying databases (and prior to GUIs).
Other languages were designed after the advent of the GUI, and for general-purpose programming. Languages such as Java, C#, Python, and Ruby came to life in the "graphical age", yet graphics is an extension of the language, not part of the core.
Microsoft extended C++ with its Visual C++ package. The early versions were a daunting mix of libraries, classes, and #define macros. Recent versions are more palatable, but C++ remains C++ and the GUI parts are mere add-ons to the language.
Borland extended Pascal, and later provided Delphi, for Windows programming. But Object Pascal and Windows Pascal and even Delphi were just Pascal with GUI programming bolted on to the core language.
The only language that put the GUI in the language was Visual Basic. (The old Visual Basic, not the VB.NET language of today.) It not only supported a graphical display, it required one.
I realize that there may be niche languages that handle graphics as part of the core language. Matlab and R support the generation of graphics to view data -- but they are hardly general-purpose languages. (One would not write a word processor in R.)
Mathematica and Wolfram do nice things with graphics too, but again, for rendering numerical data.
There are probably other obscure languages that handle GUI programming. But they are obscure, not mainstream. The only other (somewhat) popular language that required a graphical display was Logo, and that was hardly a general-purpose language.
The only popular language that handled the GUI as a first-class citizen was Visual Basic. It is interesting to note that Visual Basic has declined in popularity. Its successor, VB.NET, is a rough translation of C#, and its GUI support is, as in other languages, something added onto the core language.
Of course, programming (and system design) today is very different from the past. We design and build for mobile devices and web services, with some occasional web applications. Desktop applications are considered passé, and console applications are not considered at all (except perhaps for system administrators).
Modern applications place the user interface on a mobile device. The server provides services, nothing more. The GUI has moved from the desktop PC to the web browser and now to the phone. Yet we have no equivalent of Visual Basic for developing phone apps. The tools are desktop languages with extensions for mobile devices.
When will we get a language tailored to phones? And who will build it?
Labels:
GUI,
mobile apps,
mobile devices,
programming languages
Wednesday, July 18, 2018
Another idea for Amazon's HQ2
Amazon has announced plans for an 'HQ2', a second headquarters office. They have garnered attention in the press by asking cities to recommend locations (and provide benefits). The focus has been on which city will "win" the office ("winning" may be more expensive than cities realize) and on the criteria for Amazon's selection.
The question that no one has asked is: Why does Amazon want a second headquarters?
Amazon projects growth over the next decade, and it will need employees in various capacities. But that does not require a second headquarters. Amazon could easily expand with a number of smaller buildings, spread across the country. They could do it rather cheaply too, as there are a large number of available buildings in multiple cities. (Although the buildings may be in cities that are not where Amazon wants to locate its offices.)
Amazon also has the option to expand its workforce by using remote workers, letting employees work from home.
Why does Amazon want a single large building with so many employees? Why not simply buy (or lease) smaller buildings?
Maybe, just maybe, Amazon has another idea.
It is possible that Amazon is preparing to split into two companies. This would make some sense: Amazon has seen a lot of growth, and managing two smaller companies (with a holding company atop both) may have advantages.
The most likely split would be into one company for online and retail sales, and a second for its web and cloud services. These are two different operations, with different markets and different needs. Both operations are profitable, and Amazon does not need to subsidize one from the other.
Dividing into two companies gives some sense to a second headquarters office. Once filled, Amazon could easily split into two companies, each with its own headquarters. That could be Amazon's strategy.
I could, of course, be quite wrong about this. I have no relationship with Amazon, except as an occasional customer, so I have no privileged information.
But if they do split, I expect the effect on the market to be minimal. The two sides (retail and web services) are fairly independent.
The question that no one has asked is: Why does Amazon want a second head-quarters?
Amazon projects growth over the next decade, and it will need employees in various capacities. But that does not require a second head-quarters. Amazon could easily expand with a number of smaller buildings, spread across the country. They could do it rather cheaply too, as there are a large number of available buildings in multiple cities. (Although the buildings may be in cities that are not where Amazon wants to locate its offices.)
Amazon also has the option to expand its workforce by using remote workers, letting employees work from home.
Why does Amazon want a single large building with so many employees? Why not simply buy (or lease) smaller buildings?
Maybe, just maybe, Amazon has another idea.
It is possible that Amazon is preparing to split into two companies. This would make some sense: Amazon has seen a lot of growth, and managing two smaller companies (with a holding company atop both) may have advantages.
The most likely split would be into one company for online and retail sales, and a second for its web and cloud services. These are two different operations, with different markets and different needs. Both operations are profitable, and Amazon does not need to subsidize one from the other.
Dividing into two companies makes some sense of a second headquarters office. Once it is filled, Amazon could easily split into two companies, each with its own headquarters. That could be Amazon's strategy.
I could, of course, be quite wrong about this. I have no relationship with Amazon, except as an occasional customer, so I have no privileged information.
But if they do split, I expect the effect on the market to be minimal. The two sides (retail and web services) are fairly independent.
Thursday, June 7, 2018
Cable channels and web accounts
Web accounts are like cable channels, in that too many can be a bad thing.
When cable TV arrived in my home town (many moons ago), the cable company provided two things: a cable into the house, and a magical box with a sliding channel selector. The channels ranged from 2 through 13, and then from 'A' to 'Z'. We dutifully dialed our TV to channel 3 and then selected channels using the magical box.
At first, the world of cable TV was an unexplored wilderness for us, with lots of channels we did not have with our older, over-the-air reception. We carefully moved from one channel to another, absorbing the new programs available to us.
The novelty quickly wore off, and soon we would "surf" through the channels, viewing each one quickly and then deciding on a single program to watch. It was a comprehensive poll of the available programs, and it worked with only 38 channels.
In a later age of cable TV, when channels were more numerous, I realized that the "examine every channel" method would not work. While you can skip through 38 (or even 100) channels fairly quickly, you cannot do the same for 500 channels. It takes some amount of time to peek at a single channel, and that time adds up for large numbers of channels. (The shift from analog to digital slowed the process further, because digital cable lags as the tuner resets to the new channel.)
In brief, once you have a certain number of channels, the "surf all and select" method takes too long, and you will have missed some amount of the program you want. (With enough channels, you may have missed the entire program!)
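The arithmetic behind that claim is simple: surf time grows linearly with the channel count. A quick sketch, using an assumed three seconds per peek (the figure is illustrative, not a measurement):

```python
# Back-of-the-envelope: total time to surf every channel once.
# The 3-seconds-per-channel figure is an assumption for illustration.
def surf_time_minutes(channels, seconds_per_peek=3):
    """Total time, in minutes, to peek at every channel once."""
    return channels * seconds_per_peek / 60

for n in (38, 100, 500):
    print(f"{n} channels: {surf_time_minutes(n):.1f} minutes")
```

At 38 channels the full survey takes about two minutes; at 500 channels it takes roughly twenty-five, which is most of a half-hour program.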
What does all of this have to do with web accounts?
Web accounts have a similar effect, related not to surfing for content (although I suppose that could be a problem for someone who visits a large number of web sites) but to user names and passwords. Web accounts need maintenance. Not daily, or weekly, or monthly, but they do need maintenance from time to time. Accounts break, or impose new rules for passwords, or require that you accept a new set of terms and conditions.
For me, it seems that at any given time, one account is broken. It is not always the same account, and sometimes the number is zero and sometimes it is two, but on average I am fixing an account at least once per week. This week I have two accounts that need work. One is my Apple account. The other is with yast.com, which billed my annual subscription to two different credit cards.
With a handful of accounts, this is not a problem. But with a large number of accounts... you can see where this is going. With more and more accounts, you must spend more and more time on "maintenance". Eventually, you run out of time.
Our current methods of authentication are not scalable. The notion of independent web sites, each with its own ID and password and its own authentication mechanism, works for a limited number of sites, but it has an upper bound. That upper bound varies from individual to individual, based on their ability to remember IDs and passwords and their ability to fix problems.
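The same linear-growth argument applies here. If each account independently needs attention with some small probability per week, expected weekly fixes scale with the number of accounts. A sketch, with an assumed one-percent-per-week failure rate (a made-up figure for illustration):

```python
# Sketch of the scaling argument: expected account fixes per week
# grow linearly with the number of accounts. The 1%-per-week break
# rate is an assumed figure, not a measured one.
def expected_fixes_per_week(accounts, p_break=0.01):
    """Expected number of accounts needing maintenance in a week."""
    return accounts * p_break

for n in (10, 100, 1000):
    print(f"{n} accounts: {expected_fixes_per_week(n):.1f} fixes/week")
```

Ten accounts means a fix every few months; a thousand accounts means a fix nearly every day. Somewhere along that line, each person runs out of patience or time.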
We will need a different authentication mechanism (or set of mechanisms) that is more reliable and simpler to operate.
We have the beginnings of standardized authentication. Facebook and Google logins are available, and many sites use them. There is also OAuth, an open standard for delegated authentication and authorization, which may help standardize logins across sites. It is possible that the US government will provide an authentication service, perhaps through the Postal Service. I don't know which will become successful, but I believe that a standard authentication mechanism is coming.
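The core idea of those standardized mechanisms is that a site stops keeping its own password database and instead redirects the user to one identity provider. A minimal sketch of the first step of an OAuth 2.0 authorization-code flow, using only the standard library; the endpoint and client ID below are placeholders, not any real provider's values:

```python
from urllib.parse import urlencode

# First step of an OAuth 2.0 authorization-code flow: build the URL
# that sends the user to the identity provider to log in. The
# endpoint and client_id here are hypothetical placeholders.
def build_authorize_url(client_id, redirect_uri,
                        endpoint="https://auth.example.com/authorize"):
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",
    }
    return endpoint + "?" + urlencode(params)

print(build_authorize_url("my-app", "https://myapp.example.com/callback"))
```

The user authenticates once, with the provider; each site only receives a short-lived code to exchange for proof of identity. That is what makes one login serve many sites.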