I've seen a number of job postings that include the line "all employees use MacBooks".
I suppose that this is intended as an enticement. I suppose that a MacBook is considered a "perk", a benefit of working at the company. Apple equipment is considered "cool", for some reason.
I'm not sure why.
MacBooks in 2018 are decent computers, but I find that they are inferior to other computers, especially when it comes to development.
I've been using computers for quite some time, and programming for most of that time. I've used MacBooks and Chromebooks and modern PCs. I've used older PCs and even ancient PCs with IBM's Model M keyboard. I've worked on IBM's System/23 (which was the origin of the first IBM PC keyboard). I have even used Model 33 ASR Teletype terminals, which are large mechanical beasts that print uppercase on roll paper and do a poor job of it. So I know what I like.
And I don't like Apple's MacBook and MacBook Pro computers. I dislike the keyboard; I want more travel in the keys. I dislike the touchpad in front of the keyboard; I prefer the small pointing stick embedded in Lenovo and some Dell laptop keyboards. I dislike Apple's displays, which are too bright and too reflective. I want "matte" finish displays which hide reflections from light sources such as windows and ceiling lights.
My main client provides a computer, one that I must use when working for them. The computer is a Dell laptop, with a high-gloss display and a keyboard that is a bit better than current Apple keyboards, but not by much. I supplement the PC with a matte-finish display and a Matias "Quiet Pro" keyboard. These make the configuration much more tolerable.
Just as I "fixed" the Dell laptop, I could "fix" a MacBook Pro with an additional keyboard and display. But once I do that, why bother with the MacBook? Why not use a Mac Mini, or for that matter any small form factor PC? The latter would probably offer just as much memory and disk, and more USB ports. And cost less. And run Linux.
It may be some time before companies realize that developers have strong opinions about the equipment that they use. I think that they will, and when they do, they will provide developers with choices for equipment -- including the "bring your own" option.
And it may be some time before developers realize that Apple MacBooks are not the best for development. Apple devices have a lot of glamour, but glamour doesn't get the job done -- at least not for me. Apple designs computers for visual appeal, and I need good ergonomic design.
I'm not going to forbid developers from using Apple products, or demand that everyone use the same equipment that I use. I will suggest that developers try different equipment, see which devices work for them, and understand the benefits of those devices. Pick your equipment for the right reasons, not because it has a pretty logo.
In the end, I find the phrase "all employees use MacBooks" to be a disincentive, a reason to avoid a particular gig. Because I would rather be productive than cool.
Friday, September 28, 2018
Tuesday, September 18, 2018
Programming languages and the GUI
Programming languages and GUIs don't mix. Of all the languages available today, none are GUI-based languages.
My test for a GUI-based language is the requirement that any program written in the language must use a GUI. If you can write a program and run it in a text window, then the language is not a GUI-based language.
This is an extreme test, and perhaps unfair. But it shows an interesting point: We have no GUI-based languages.
We had programming before the GUI with various forms of input and output (punch cards, paper tape, magnetic tape, disk files, printers, and terminals). When GUIs came along, we rushed to create GUI programs but not GUI programming languages. (Except for Visual Basic.) We still have GUIs, some thirty years on, and today we have no GUI programming languages.
Almost all programming languages treat windows (or GUIs) as a second thought. Programming for the GUI is bolted on to the language as a library or framework; it is not part of the core language.
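To make the "bolted on" point concrete, here is a minimal sketch in Python (one of the post-GUI languages discussed below). Console output is part of the core language; everything graphical arrives through an imported library, tkinter, and the language itself knows nothing about windows.

```python
# Console I/O is part of the core language: print is a builtin.
def greeting(name):
    return "Hello, " + name

print(greeting("world"))

# GUI "support" is bolted on: every graphical concept (windows,
# widgets, the event loop) comes from an imported library, not from
# the language itself. Calling gui_main() requires a display.
def gui_main():
    import tkinter as tk    # the GUI lives in a library, not the language
    root = tk.Tk()
    tk.Label(root, text=greeting("world")).pack()
    root.mainloop()
```

Delete gui_main and the program still runs unchanged; the GUI is an add-on, which is exactly the "second thought" described here.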
For some languages, the explanation is obvious: the language existed before GUIs existed (or became popular). Languages such as Cobol, Fortran, PL/I, Pascal, and C had been designed before GUIs appeared on the horizon. Cobol and Fortran were designed in an era of magnetic tapes, disk files, and printers. Pascal and C were created for printing terminals or "smart" CRT terminals such as DEC's VT-52.
Some languages were designed for a specific purpose. Such languages have no need of GUIs, and they don't have any GUI support. AWK was designed as a text processing language, a filter that fit in with Unix's tool-chain philosophy. SQL was designed for querying databases (and prior to GUIs).
Other languages were designed after the advent of the GUI, and for general-purpose programming. Languages such as Java, C#, Python, and Ruby came to life in the "graphical age", yet graphics is an extension of the language, not part of the core.
Microsoft extended C++ with its Visual C++ package. The early versions were a daunting mix of libraries, classes, and #define macros. Recent versions are more palatable, but C++ remains C++ and the GUI parts are mere add-ons to the language.
Borland extended Pascal, and later provided Delphi, for Windows programming. But Object Pascal and Windows Pascal and even Delphi were just Pascal with GUI programming bolted on to the core language.
The only language that put the GUI in the language was Visual Basic. (The old Visual Basic, not the VB.NET language of today.) It not only supported a graphical display, it required one.
I realize that there may be niche languages that handle graphics as part of the core language. Matlab and R support the generation of graphics to view data -- but they are hardly general-purpose languages. (One would not write a word processor in R.)
Mathematica and Wolfram do nice things with graphics too, but again, for rendering numerical data.
There are probably other obscure languages that handle GUI programming. But they are obscure, not mainstream. The only other (somewhat) popular language that required a graphical display was Logo, and that was hardly a general-purpose language.
The only popular language that handled the GUI as a first-class citizen was Visual Basic. It is interesting to note that Visual Basic has declined in popularity. Its successor, VB.NET, is a rough translation of C#, and the GUI is, as in other languages, something added to the core language.
Of course, programming (and system design) today is very different from the past. We design and build for mobile devices and web services, with some occasional web applications. Desktop applications are considered passé, and console applications are not considered at all (except perhaps for system administrators).
Modern applications place the user interface on a mobile device. The server provides services, nothing more. The GUI has moved from the desktop PC to the web browser and now to the phone. Yet we have no equivalent of Visual Basic for developing phone apps. The tools are desktop languages with extensions for mobile devices.
When will we get a language tailored to phones? And who will build it?
Wednesday, July 18, 2018
Another idea for Amazon's HQ2
Amazon has announced plans for an 'HQ2', a second headquarters office. They have garnered attention in the press by asking cities to recommend locations (and provide benefits). The focus has been on which city will "win" the office ("winning" may be more expensive than cities realize) and on the criteria for Amazon's selection.
The question that no one has asked is: Why does Amazon want a second headquarters?
Amazon projects growth over the next decade, and it will need employees in various capacities. But that does not require a second headquarters. Amazon could easily expand with a number of smaller buildings, spread across the country. They could do it rather cheaply too, as there are a large number of available buildings in multiple cities. (Although the buildings may be in cities that are not where Amazon wants to locate its offices.)
Amazon also has the option to expand its workforce by using remote workers, letting employees work from home.
Why does Amazon want a single large building with so many employees? Why not simply buy (or lease) smaller buildings?
Maybe, just maybe, Amazon has another idea.
It is possible that Amazon is preparing to split into two companies. This would make some sense: Amazon has seen a lot of growth, and managing two smaller companies (with a holding company atop both) may have advantages.
The most likely split would be into one company for online and retail sales, and a second for its web and cloud services. These are two different operations, with different markets and different needs. Both operations are profitable, and Amazon does not need to subsidize one from the other.
Dividing into two companies gives some sense to a second headquarters office. Once it is filled, Amazon could easily split into two companies, each with its own headquarters. That could be Amazon's strategy.
I could, of course, be quite wrong about this. I have no relationship with Amazon, except as an occasional customer, so I have no privileged information.
But if they do split, I expect the effect on the market to be minimal. The two sides (retail and web services) are fairly independent.
Thursday, June 7, 2018
Cable channels and web accounts
Web accounts are like cable channels, in that too many can be a bad thing.
When cable TV arrived in my home town (many moons ago), the cable company provided two things: a cable into the house, and a magical box with a sliding channel selector. The channels ranged from 2 through 13, and then from 'A' to 'Z'. We dutifully dialed our TV to channel 3 and then selected channels using the magical box.
At first, the world of cable TV was an unexplored wilderness for us, with lots of channels we did not have with our older, over-the-air reception. We carefully moved from one channel to another, absorbing the new programs available to us.
The novelty quickly wore off, and soon we would "surf" through the channels, viewing each one quickly and then deciding on a single program to watch. It was a comprehensive poll of the available programs, and it worked with only 38 channels.
In a later age of cable TV, when channels were more numerous, I realized that the "examine every channel" method would not work. While you can skip through 38 (or even 100) channels fairly quickly, you cannot do the same for 500 channels. It takes some amount of time to peek at a single channel, and that amount of time adds up for large numbers of channels. (The shift from analog to digital slowed the process, because digital cable lags as the tuner resets to the new channel.)
In brief, once you have a certain number of channels, the "surf all and select" method takes too long, and you will have missed some amount of the program you want. (With enough channels, you may have missed the entire program!)
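The arithmetic behind that claim is simple. Assuming three seconds to evaluate each channel (my guess, not a measured figure), a full surf grows linearly with the channel count:

```python
# Back-of-the-envelope: how long does "surf all and select" take?
SECONDS_PER_PEEK = 3        # assumed time to evaluate one channel

def surf_minutes(channels, seconds_per_peek=SECONDS_PER_PEEK):
    """Minutes needed to peek at every channel once."""
    return channels * seconds_per_peek / 60

for channels in (38, 100, 500):
    print(f"{channels} channels: {surf_minutes(channels):.1f} minutes")
# 38 channels take about two minutes; 500 channels take 25 minutes --
# most of a half-hour program is gone before you pick one.
```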
What does all of this have to do with web accounts?
Web accounts have a similar effect, not related to surfing for content (although I suppose that could be a problem for someone who visits a large number of web sites) but with user names and passwords. Web accounts need maintenance. Not daily, or weekly, or monthly, but they do need maintenance from time to time. Web accounts break, or impose new rules for passwords, or require that you accept a new set of terms and conditions.
For me, it seems that at any given time, one account is broken. Not always the same account, and sometimes the number is zero and sometimes the number is two, but on average it seems that I am fixing an account at least once per week. This week I have two accounts that need work. One is my Apple account. The other is with yast.com, who billed my annual subscription to two different credit cards.
With a handful of accounts, this is not a problem. But with a large number of accounts... you can see where this is going. With more and more accounts, you must spend more and more time on "maintenance". Eventually, you run out of time.
Our current methods of authentication are not scalable. The notion of independent web sites, each with its own ID and password, each with its own authentication mechanism, works for a limited number of sites, but contains an upper bound. That upper bound varies from individual to individual, based on their ability to remember IDs and passwords, and their ability to fix problems.
We will need a different authentication mechanism (or set of mechanisms) that is more reliable and simpler to operate.
We have the beginnings of standardized authentication. Facebook and Google logins are available, and many sites use them. There is also OAuth, an open standard for delegated access, which may help standardize authentication. It is possible that the US government will provide an authentication service, perhaps through the Postal Service. I don't know which will become successful, but I believe that a standard authentication mechanism is coming.
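To illustrate why a standard mechanism scales better, here is a toy sketch (deliberately not any real protocol; real systems use OAuth or OpenID Connect, and public-key signatures rather than a shared secret). One identity provider signs a token; any number of sites can verify it, so the user maintains a single credential instead of one per site.

```python
# Toy sketch of delegated authentication -- illustrative only.
# One identity provider issues signed tokens; relying sites verify
# the signature and keep no password store of their own.
import hmac
import hashlib

PROVIDER_KEY = b"shared-secret-for-illustration"  # hypothetical key

def issue_token(user_id):
    """The identity provider signs a token for a logged-in user."""
    sig = hmac.new(PROVIDER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def verify_token(token):
    """Any relying site checks the signature; one credential serves all."""
    user_id, _, sig = token.partition(":")
    expected = hmac.new(PROVIDER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

token = issue_token("alice")
print(verify_token(token))            # alice
print(verify_token("alice:forged"))   # None
```

The maintenance burden shifts from one ID-and-password per site to one credential at the provider, which is the scalability argument in a nutshell.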
Tuesday, May 8, 2018
Refactor when you need it
The development cycle for Agile and TDD is simple:
- Define a new requirement
- Write a test for that requirement
- Run the test (and see that it fails)
- Change the code to make the test pass
- Run the test (and see that it passes)
- Refactor the code to make it clean
- Run the test again (and see that it still passes)
A working solution gives you a good understanding of the requirement, and its effect on the code. With that understanding, you can then improve the code, making it clear for other programmers. The test keeps your revised solutions correct -- if a cleanup change breaks a test, you have to fix the code.
But refactoring is not limited to after a change. You can refactor before a change.
Why would you do that? Why would you refactor before making any changes? After all, if your code is clean, it doesn't need to be refactored. It is already understandable and maintainable. So why refactor in advance?
It turns out that code is not always perfectly clean. Sometimes we stop refactoring early. Sometimes we think our refactoring is complete when it is not. Sometimes we have duplicate code, or poorly named functions, or overweight classes. And sometimes we are enlightened by a new requirement.
A new requirement can force us to look at the code from a different angle. We can see new patterns, or see opportunities for improvement that we failed to see earlier.
When that happens, we see new ways of organizing the code. Often, the new organization allows for an easy change to meet the requirement. We might refactor classes to hold data in a different arrangement (perhaps a dictionary instead of a list) or break large-ish blocks into smaller blocks.
In this situation, it is better to refactor the code before adding the new requirement. Instead of adding the new feature and then refactoring, perform the operations in reverse sequence: refactor, then add the requirement. (Of course, you still test, and you can still refactor at the end.) The full sequence is:
- Define a new requirement
- Write a test for that requirement
- Run the test (and see that it fails)
- Examine the code and identify improvements
- Refactor the code (without adding the new requirement)
- Run the tests to verify that the code still works (skip the new test)
- Change the code to make the test pass
- Run the test (and see that it passes)
- Refactor the code to make it clean
- Run the test again (and see that it still passes)
Agile has taught us to change our processes when the changes are beneficial. Changing the Agile process itself is part of that. You can refactor before making changes. You should refactor before making changes, when the refactoring will help you.
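As a sketch of refactor-then-add (hypothetical code, not from any real project): suppose contacts were stored as a list of tuples and the new requirement is lookup by name. Rather than bolt a linear search onto the list, first refactor the storage to a dictionary (behavior unchanged, existing tests still pass), and then the new feature is a one-line addition.

```python
# Hypothetical example of "refactor, then add the requirement".
# Before: contacts held in a list of (name, number) tuples.
# Step 1 (refactor, behavior unchanged): switch storage to a dict.
# Step 2 (new feature): lookup by name becomes trivial.

class PhoneBook:
    def __init__(self):
        self._contacts = {}          # refactored: dict instead of list

    def add(self, name, number):
        self._contacts[name] = number

    def count(self):                 # existing behavior, must not change
        return len(self._contacts)

    def lookup(self, name):          # the new requirement, added last
        return self._contacts.get(name)

book = PhoneBook()
book.add("Ada", "555-0100")
assert book.count() == 1                   # the old test still passes
assert book.lookup("Ada") == "555-0100"    # the new test now passes
```

The refactoring step carries no new behavior of its own, which is what lets the existing tests vouch for it before the feature work begins.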
Saturday, April 21, 2018
Why the ACM is stuck in academe
The Association for Computing Machinery (ACM) is a professional organization. Its main web page claims it is "Advancing Computing as a Science & Profession". In 2000, it recognized that it was focused exclusively on the academic world, and it also recognized that it had to expand. It has struggled with that expansion for nearly two decades.
I recently found an example of its failure.
The flagship publication, "Communications of the ACM", is available on paper or on-line. (So far, so good.) It is available to all comers, with only some articles locked behind a paywall. (Also good.)
But the presentation is bland, almost stifling.
The Communications web site follows a standard "C-clamp" layout, with content in the center and links and administrative items wrapped around it on the top, left, and bottom. An issue's table of contents has titles (links) with descriptions of the individual articles of the magazine. This is a reasonable arrangement.
Individual articles are presented with header and footer, but without the left-side links. They are not using the C-clamp layout. (Also good.)
The fonts and colors are appealing, and they conform to accessibility standards.
But the problem that shows how ACM fails to "get it" is with the comments. Their articles still have comments (which is good) but very few people comment. So few that many articles have no comments. How does ACM present an article with no comments? How do they convey this to the reader? With a single, mechanical phrase under the article text:
No entries found
That's it. Simply the text "no entries found". It doesn't even have a header describing the section as a comments section. (There is a horizontal rule between the article and this phrase, so the reader has some inkling that "no entries found" is distinct from the article. But nothing indicates that the phrase refers to comments.)
Immediately under the title at the top of the page there is a link to comments (labelled "Comments") which is a simple intrapage link to the empty, unlabelled comments section.
I find the phrase "no entries found" somewhat embarrassing. In the year 2018, we have the technology to provide text such as "no comments found" or "no comments" or perhaps "be the first to comment on this article". Yet the ACM, the self-proclaimed organization that "delivers resources that advance computing as a science and a profession" cannot bring itself to use any of those phrases. Instead, it allows the underlying CMS driving its web site to bleed out to the user.
A darker thought is that the ACM cares little for comments. It knows that it has to have them, to satisfy some need for "user engagement", but it doesn't really want them. That philosophy is consistent with the academic mindset of "publish and cite", in which citations to earlier publications are valued, but comments from random readers are not.
Yet the rest of the world (that is, people outside of academe) care little for citations and references. They care about opinions and information (ad profits). Comments are an ongoing problem for web sites; few are informative and many are insulting, and many web sites have abandoned comments.
ACM hasn't disabled its comments, but it hasn't encouraged them either. It sits in the middle.
This is why the ACM struggles with its outreach to the non-academic world.
I recently found an example of its failure.
The flagship publication, "Communications of the ACM", is available on paper or on-line. (So far, so good.) It is available to all comers, with only some articles locked behind a paywall. (Also good.)
But the presentation is bland, almost stifling.
The Communications web site follows a standard, "C-clamp" layout with content and in the center and links and administrative items wrapped around it on the top, left, and bottom. An issue's table of contents has titles (links) with descriptions to the individual articles of the magazine. This is a reasonable arrangement.
Individual articles are presented with header and footer, but without the left-side links. They are not using the C-clamp layout. (Also good.)
The fonts and colors are appealing, and they conform to accessibility standards.
But the problem that shows how ACM fails to "get it" is with the comments. Their articles still have comments (which is good) but very few people comment. So few that many articles have no comments. How does ACM present an article with no comments? How do they convey this to the reader? With a single, mechanical phrase under the article text:
No entries found
That's it. Simply the text "no entries found". It doesn't even have a header describing the section as a comments section. (There is a horizontal rule between the article and this phrase, so the reader has some inkling that "no entries found" is somewhat distinctive from the article. But nothing indicating that the phrase refers to comments.)
Immediately under the title at the top of the page there is a link to comments (labelled "Comments"), a simple intrapage link to the empty, unlabelled comments section.
I find the phrase "no entries found" somewhat embarrassing. In the year 2018, we have the technology to provide text such as "no comments found" or "no comments" or perhaps "be the first to comment on this article". Yet the ACM, the organization that proclaims it "delivers resources that advance computing as a science and a profession", cannot bring itself to use any of those phrases. Instead, it allows the underlying CMS driving its web site to bleed through to the user.
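The fix is tiny. A minimal sketch (a hypothetical helper, not the ACM's actual CMS code) of choosing a friendlier empty-state message:

```python
def comments_section_text(comments):
    """Return reader-facing text for a comments section.

    A friendlier alternative to a raw CMS message such as "No entries found".
    (Hypothetical helper; not the ACM's actual CMS code.)
    """
    if not comments:
        return "Be the first to comment on this article."
    count = len(comments)
    return f"{count} comment{'s' if count != 1 else ''}"

print(comments_section_text([]))             # Be the first to comment on this article.
print(comments_section_text(["Nice post"]))  # 1 comment
```

A few lines of logic replace the generic CMS fallback with text that invites participation.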
A darker thought is that the ACM cares little for comments. It knows that it has to have them, to satisfy some need for "user engagement", but it doesn't really want them. That philosophy is consistent with the academic mindset of "publish and cite", in which citations to earlier publications are valued, but comments from random readers are not.
Yet the rest of the world (that is, people outside of academe) cares little for citations and references. It cares about opinions and information (and ad profits). Comments are an ongoing problem for web sites: few are informative, many are insulting, and many sites have abandoned comments entirely.
ACM hasn't disabled its comments, but it hasn't encouraged them either. It sits in the middle.
This is why the ACM struggles with its outreach to the non-academic world.
Thursday, April 19, 2018
Why no language to replace SQL?
The history of programming is littered with programming languages. Some endure for ages (COBOL, C, Java) and some live briefly (Visual J++). We often develop new languages to replace existing ones (Perl, Python).
Yet one language has endured and has seen no replacements: SQL.
SQL, invented in the 1970s and popularized in the 1980s, has lived a good life with no apparent challengers.
It is an anomaly. Every language I can think of has a "challenger" language. FORTRAN was challenged by BASIC. BASIC was challenged by Pascal. C++ was challenged by Java; Java was challenged by C#. Unix shell programming was challenged by AWK, which in turn was challenged by Perl, which in turn has been challenged by Python.
Yet there have been no (serious) challengers to SQL. Why not?
I can think of several reasons:
- Everyone loves SQL and no one wants to change it.
- Programmers think of SQL as a protocol (specialized for databases) and not a programming language. Therefore, they don't invent a new language to replace it.
- Programmers want to work on other things.
- The task is bigger than a programming language. Replacing SQL means designing the language, creating an interpreter (or compiler?), command-line tools (these are programmers, after all), bindings to other languages (Python, Ruby, and Perl at minimum), and data access routines -- along with all the features of SQL, including triggers, access controls, transactions, and audit logs.
- SQL gets a lot of things right, and works.
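The "protocol, not language" view is easy to see in practice: most programmers never run SQL directly, but pass it as strings through a host-language binding. A small sketch using Python's standard sqlite3 module (the table and values are illustrative):

```python
import sqlite3

# SQL statements travel to the database as opaque strings through a
# host-language binding -- one reason programmers treat SQL more as a
# protocol for talking to databases than as a language to replace.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts (balance) VALUES (?)", (100.0,))
conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (25.0, 1))
(balance,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
print(balance)  # 75.0
conn.close()
```

From the host language's point of view, SQL is payload, not program -- which may explain why nobody feels the itch to replace it the way they replaced Perl with Python.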
But perhaps there is a challenger to SQL: NoSQL.
In one sense, NoSQL is a replacement for SQL. But it is a replacement of more than the language -- it is a replacement of the notion of data structure. NoSQL "databases" store documents and photographs and other things, but they are rarely used to process transactions. NoSQL databases don't replace SQL databases, they complement them. (Some companies move existing data from SQL databases to NoSQL databases, but this is data that fits poorly in the relational structure. They move some of their data but not all of their data out of the SQL database. These companies are fixing a problem, not replacing the SQL language.)
NoSQL is a complement of SQL, not a replacement (and therefore not a true challenger). SQL handles part of our data storage and NoSQL handles a different part.
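That division of labor can be sketched in a few lines. Here a SQLite table holds the homogeneous, transactional data, and a plain dict stands in for a NoSQL document store holding the irregular data (the table, keys, and values are illustrative, not from any particular system):

```python
import json
import sqlite3

# Homogeneous, transactional data fits the relational model and SQL...
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (19.99)")
(total,) = conn.execute("SELECT total FROM orders WHERE id = 1").fetchone()

# ...while irregular, document-shaped data fits a document store.
# A plain dict stands in here for a NoSQL document database.
documents = {}
documents["profile:42"] = json.dumps(
    {"name": "Ada", "interests": ["databases", "history"], "photo": "ada.png"}
)
profile = json.loads(documents["profile:42"])

print(total)            # 19.99
print(profile["name"])  # Ada
conn.close()
```

Each store handles the shape of data it is good at; neither replaces the other.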
It seems that SQL will be with us for some time. It is tied to the notion of relational organization, which is a useful mechanism for storing and processing homogeneous data.
Labels: NoSQL, programming languages, relational databases, SQL