The history of computing can be described as a series of developments, alternating between computing platforms and programming languages. In the predominant pattern, hardware advances first, and programming languages follow. Occasionally, hardware and programming languages advance together, but that is less common. (Hardware and system software -- not programming languages -- do advance together.)
The early mainframe computers were single-purpose devices. In the early 21st century, we think of computers as general-purpose devices, handling financial transactions, personal communication, navigation, and games, because our computing devices perform all of those tasks. But in the early days of electronic computing, devices were not so flexible. Mainframe computers were designed for a single purpose: either commercial (financial) processing or scientific computation. The distinction was visible through all aspects of the computer system, from the processor and representations for numeric values to input-output devices and the characters available.
Once we had those computers, for commercial and for scientific computation, we built languages. COBOL for commercial processing; FORTRAN for scientific processing.
And thus began the cycle of alternating developments: computing platforms and programming languages. The programming languages follow the platforms.
The next advance in hardware was the general-purpose mainframe. The IBM System/360 was designed for both types of computing, and it used COBOL and FORTRAN. But we also continued the cycle of "platform and then language" with the invention of a general-purpose programming language: PL/I.
PL/I was the intended successor to COBOL and to FORTRAN. It improved on the syntax of both languages and was supposed to replace them. It did not. But it was the language we invented after general-purpose hardware, and it fits the general pattern of advances in platforms alternating with advances in languages.
The next advance was timesharing. This advance in hardware and in system software let people use computers interactively. It was a big change from the older style of scheduled jobs that ran on batches of data.
The language we invented for this platform? It was BASIC. BASIC was designed for interactive use, and also designed to avoid requiring requests to system operators to load disks or tapes. A BASIC program could contain its code and its data, all in one file. Such a thing was not possible in earlier languages.
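That all-in-one style outlived BASIC. As an illustration (a sketch in Python rather than BASIC, since the idea is the same), the program below carries its own data, so nothing needs to be mounted or loaded:

```python
# A self-contained program, in the spirit of BASIC's DATA statements:
# the data is embedded in the source, not read from a mounted disk or tape.
DATA = [70, 85, 90, 65]

def average(values):
    """Arithmetic mean of the embedded values."""
    return sum(values) / len(values)

if __name__ == "__main__":
    print(average(DATA))  # 77.5
```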
The next advance was minicomputers. The minicomputer revolution (DEC's PDP-8, PDP-11 and other systems from other vendors) used BASIC (adopted from timesharing) and FORTRAN. Once again, a new platform initially used the languages from the previous platform.
We also invented languages for minicomputers. DEC invented FOCAL (a lightweight FORTRAN) and DIBOL (a lightweight COBOL). Neither replaced its corresponding "heavyweight" language, but invent them we did.
The PC revolution followed minicomputers. PCs were small computers that could be purchased and used by individuals. Initially, PCs used BASIC. It was a good choice: small enough to fit into the small computers, and simple enough that individuals could quickly understand it.
The PC revolution invented its own languages: CBASIC (a compiled form of BASIC), dBase (later named "xbase"), and most importantly, spreadsheets. While not a programming language, a spreadsheet is a form of programming. It organizes data and specifies calculations. I count it as a programming platform.
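The claim that a spreadsheet is a form of programming can be made concrete. Here is a toy sketch in Python (the cell names and the `get` helper are my own invention, not any real spreadsheet's interface): each cell holds either a plain value or a formula over other cells, and asking for a cell's value runs the calculation.

```python
# A toy spreadsheet: a cell is either a plain value or a formula,
# where a formula is a function that reads other cells from the sheet.
def get(sheet, name):
    """Return the value of a cell, evaluating its formula if it has one."""
    cell = sheet[name]
    return cell(sheet) if callable(cell) else cell

sheet = {
    "A1": 10,
    "A2": 32,
    "A3": lambda s: get(s, "A1") + get(s, "A2"),  # the formula =A1+A2
}

print(get(sheet, "A3"))  # 42
```

Change A1 or A2 and ask for A3 again, and the result changes -- which is exactly the "organizes data and specifies calculations" behavior described above.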
The next computing platform was GUI programming, made possible by both the Apple Macintosh and Microsoft Windows. These "operating environments" (as they were called) changed programming from text-oriented to graphics-oriented, and required more powerful hardware -- and software. But the Macintosh first used Pascal, and Windows used C, two languages that were already available.
Later, Microsoft invented Visual Basic and provided Visual C++ (a concoction of C++ and macros to handle the needs of GUI programming), which became the dominant languages of Windows. Apple switched from Pascal to Objective-C, which it enhanced for programming the Mac.
The web was another computing advance, bringing two distinct platforms: the server and the browser. At first, servers used Perl and C (or possibly C++); browsers were without a language and had to use plug-ins such as Flash. We quickly invented Java and (somewhat less quickly) adopted it for servers. We also invented JavaScript, and today all browsers provide JavaScript for web pages.
Mobile computing (phones and tablets) started with Objective-C (Apple) and Java (Android), two languages that were convenient for those devices. Apple later invented Swift, to fix problems with the syntax of Objective-C and to provide a better experience for its users. Google invented Go and made it available for Android development, but it has seen limited adoption.
Looking back, we can see a clear pattern. A new computing platform emerges. At first, it uses existing languages. Shortly after the arrival of the platform, we invent new languages for that platform. Sometimes these languages are adopted, sometimes not. Sometimes a language gains popularity much later than expected, as in the case of BASIC, invented for timesharing but used for minicomputers and PCs.
It is a consistent pattern.
Consistent that is, until we get to cloud computing.
Cloud computing is a new platform, much like the web was a new platform, and PCs were a new platform, and general-purpose mainframes were a new platform. And while each of those platforms saw the development of new languages to take advantage of new features, the cloud computing platform has seen... nothing.
Well, "nothing" is a bit harsh and not quite true.
True to the pattern, cloud computing uses existing languages. Cloud applications can be built in Java, JavaScript, Python, C#, C++, and probably Fortran and COBOL. (And there are probably cloud applications that use these languages.)
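As an illustration of how little the language itself changes, here is the shape of a Python entry point for AWS Lambda (the `handler` name is configurable, and the event fields shown are my own example, not a fixed schema):

```python
import json

def handler(event, context):
    """Entry point that AWS Lambda invokes for each request.

    'event' carries the request data and 'context' carries runtime metadata.
    The body is ordinary Python; nothing about the language is cloud-specific.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": "Hello, " + name}),
    }

# Outside the cloud, the handler is just a function and can be tested directly.
print(handler({"name": "cloud"}, None))
```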
And we have invented Node.js, a JavaScript runtime that is useful for cloud computing.
But there is no native language for cloud computing. No language that has been designed specifically for cloud computing. (No language of which I am aware. Perhaps there is, lurking in the dark corners of the internet that I have yet to visit.)
Why no language for the cloud platform? I can think of a few reasons:
First, it may be that our current languages are suitable for the development of cloud applications. Languages such as Java and C# may have the overhead of object-oriented design, but that overhead is minimal with careful design. Languages such as Python and JavaScript are interpreted, but that may not be a problem with the scale of cloud processing. Maybe the pressure to design a new language is low.
Second, it may be that developers, managers, and anyone else connected with projects for cloud applications are too busy learning the platform. Cloud platforms (AWS, Azure, GCP, etc.) are complex beasts, and there is a lot to learn. It is possible that we are still learning about cloud platforms and not yet ready to develop a cloud-specific language.
Third, it may be too complex to develop a cloud-specific programming language. The complexity may reside in separating cloud operations from programming, and we need to understand the cloud before we can understand its limits and the boundaries for a programming language.
I suspect that we will eventually see one or more programming languages for cloud platforms. The new languages may come from the big cloud providers (Amazon, Microsoft, Google) or smaller providers (Dell, Oracle, IBM) or possibly even someone else. Programming languages from the big providers will be applicable for their respective platforms (of course). A programming language from an independent party may work across all cloud platforms -- or may work on only one or a few.
We will have to wait this one out. But keep your eyes open. Programming languages designed for cloud applications will offer exciting advances for programming.
Wednesday, February 12, 2020
Wednesday, January 24, 2018
Cloud computing is repeating history
A note to readers: This post is a bit of a rant, driven by emotion. My 'code stat' project, hosted on Microsoft Azure's web app PaaS platform, has failed and I have yet to find a resolution.
Something has changed in Azure, and I can no longer deploy a new version to the production servers. My code works; I can test it locally. Something in the deployment sequence fails. This is a test project, using the free level of Azure, which means no monthly costs but also means no support -- other than the community help pages.
There are a few glorious advances in IT, advances which stand out above the others. They include the PC revolution (which saw individuals purchasing and using computers), the GUI (which saw people untrained in computer science using computers), and the smartphone (which saw lots more people using computers for lots more sophisticated tasks).
The PC revolution was a big change. Prior to personal computers (whether they were IBM PCs, Apple IIs, or Commodore 64s), computers were large, expensive, and complicated; they were especially difficult to administer. Mainframes and even minicomputers were large and expensive; an individual could afford one only if they were enormously wealthy and had lots of time to read manuals and try different configurations to make the thing work.
The consumer PCs changed all of that. They were expensive, but within the range of the middle class. They required little or no administration effort. (The Commodore 64 was especially easy: plug it in, attach to a television, and turn it on.)
Apple made the consumer PC easier to use with the Macintosh. The graphical user interface (lifted from Xerox PARC's Alto, and later copied by Microsoft Windows) made many operations and concepts consistent. Configuration was buried, and sometimes options were reduced to "the way Apple wants you to do it".
It strikes me that cloud computing is in a "mainframe phase". It is large and complex, and while an individual can create an account (even a free account), the complexity and time necessary to learn and use the platform are significant.
My issue with Microsoft Azure is precisely that. Something has changed and it behaves differently than it did in the past. (It's not my code; the change is in the deployment of my app.) I don't think that I have changed something in Azure's configuration -- although I could have.
The problem is that once you go beyond the 'three easy steps to deploy a web app', Azure is a vast and intimidating beast with lots of settings, each with new terminology. I could poke at various settings, but will that fix the problem or make things worse?
From my view, cloud computing is a large, complex system that requires lots of knowledge and expertise. In other words, it is much like a mainframe. (Except, of course, you don't need a large room dedicated to the equipment.)
The "starter plans" (often free) are not the equivalent of a PC. They are merely the same enterprise-level plans with certain features turned off.
A PC is different from a mainframe reduced to tabletop size. Both have CPUs and memory and peripheral devices and operating systems, but they are two different creatures. PCs have fewer options, fewer settings, fewer things you (the user) can get wrong.
Cloud computing is still at the "mainframe level" of options and settings. It's big and complicated, and it requires a lot of expertise to keep it running.
If we repeat history, we can expect companies to offer smaller, simpler versions of cloud computing. The advantage will be an easier learning curve and less required expertise; the disadvantage will be lower functionality. (Just as minicomputers were easier and less capable than mainframes and PCs were easier and less capable than minicomputers.)
I'll go out on a limb and predict that the companies who offer simpler cloud platforms will not be the current big providers (Amazon.com, Microsoft, Google). Mainframes were challenged by minicomputers from new vendors, not the existing leaders. PCs were initially constructed by hobbyists from kits. Soon after, companies such as Radio Shack, Commodore, and the newcomer Apple offered fully assembled, ready-to-run computers. IBM offered the PC after the success of these upstarts.
The driver for simpler cloud platforms will be cost -- direct and indirect, mostly indirect. The "cloud computing is a mainframe" analogy is not perfect, as the billed costs for cloud platforms can be inexpensive. The expense is not in the hardware, but the time to make the thing work. Current cloud platforms require expertise, and expertise that is not cheap. Companies are willing to pay for that expertise... for now.
I expect that we will see competition to the big cloud platforms, and the marketing will focus on ease of use and low Total Cost of Ownership (TCO). The newcomers will offer simpler clouds, sacrificing performance for reduced administration cost.
My project is currently stuck. Deployments fail, so I cannot update my app. Support is not really available, so I must rely on the limited web pages and perhaps trial and error. I may have to create a new app in Azure and copy my existing code to it. I'm not happy with the experience.
I'm also looking for a simpler cloud platform.
Monday, February 16, 2015
Goodbye, printing
The ability to print has been part of computing for ages. It's been with us since the mainframe era, when it was necessary for developers (to get the results of their compile jobs) and businesspeople (to get the reports needed to run the business).
But printing is not part of the mobile/cloud era. Oh, one can go through various contortions to print from a tablet, but practically no one does. (If any of my Gentle Readers does print from a tablet or smartphone, you can consider yourself a rare bird.)
Printing was really sharing.
Printing served three purposes: to share information (as a report or a memo), to archive data, or to get a bigger picture (larger than a display terminal).
Technology has given us better means of sharing information. With the web and mobile, we can send an e-mail, we can post to Facebook or Twitter, we can publish on a blog, we can make files available on web sites... We no longer need to print our text on paper and distribute it.
Archiving was sharing with someone (perhaps ourselves) in the future. It was a means of storing and retrieving data. This, too, can be handled with newer technologies.
Getting the big picture was important in the days of "glass TTY" terminals, those text-only displays of 24 lines with 80 characters each. Printouts were helpful because they offered more text at one view. But now displays are large and can display more than the old printouts. (At least one page of a printout, which is what we really looked at.)
The one aspect of printed documents which remains is that of legal contracts. We rely on signatures, something that is handled easily with paper and not so easily with computers. Until we change to electronic signatures, we will need paper.
But as a core feature of computer systems, printing has a short life. Say goodbye!
Thursday, January 8, 2015
Hardwiring the operating system
I tend to think of computers as consisting of four conceptual parts: hardware, operating system, application programs, and my data.
I know that computers are complex objects, and each of these four components has lots of subcomponents. For example, the hardware is a collection of processor, memory, video card, hard drive, ports to external devices, and "glue" circuitry to connect everything. (And even that is omitting some details.)
These top-level divisions, while perhaps not detailed, are useful. They allow me to separate the concerns of a computer. I can think about my data without worrying about the operating system. I can consider application programs without bothering with hardware.
It wasn't always this way. Oh, it was for personal computers, even those from the pre-IBM PC days. Hardware like the Altair was sold as a computing box with no operating system or software. Gary Kildall at Digital Research created CP/M to run on the various hardware available and designed it to have a dedicated unit for interfacing with hardware. (That dedicated unit was the Basic Input-Output System, or 'BIOS'.)
It was the very early days of computers that saw a close relationship between hardware, software, and data. Very early computers had no operating systems (operating systems were themselves designed to separate the application program from the hardware). Computers were specialized devices, tailored to the task.
IBM's System/360 is recognized as the first general-purpose computer: a single computer that could be programmed for different applications, and used within an organization for multiple purposes. That computer started us on the march to separate hardware and software.
The divisions are not simply for my benefit. Many folks who work to design computers, build applications, and provide technology services find these divisions useful.
The division of computers into these four components allows for any one of the components to be swapped out, or moved to another computer. I can carry my documents and spreadsheets (data) from my PC to another one in the office. (I may 'carry' them by sending them across a network, but you get the idea.)
I can replace a spreadsheet application with a different spreadsheet application. Perhaps I replace Excel 2010 with Excel 2013. Or maybe change from Excel to another PC-based spreadsheet. The new spreadsheet software may or may not read my old data, so the interchangeability is not perfect. But again, you get the idea.
More than half a century later, we are still separating computers into hardware, operating system, application programs, and data.
And that may be changing.
I have several computing devices. I have a few PCs, including one laptop I use for my development work and e-mail. I have a smart phone, the third I have owned. I have a bunch of tablets.
For my PCs, I have installed different operating systems and changed them over time. The one Windows PC started with Windows 7. I upgraded it to Windows 8 and it now runs Windows 8.1. My Linux PCs have all had different releases of Ubuntu, and I expect to update them with the next version of Ubuntu. Not only do I get major versions, but I receive minor updates frequently.
But the phones and tablets are different. The phones (an HTC and two Samsung phones) have each run a single operating system since I took them out of the box. (I think one of them got an update.) One of my tablets is an old Viewsonic gTablet running Android 2.2. There is no update to a later version of Android -- unless I want to 'root' the tablet and install another variant of Android like Cyanogen.
PCs get new versions of operating systems (and updates to existing versions). Tablets and phones get updates for applications, but not for operating systems. At least nowhere near as frequently as PCs.
And I have never considered (seriously) changing the operating system on a phone or tablet.
Part of this change is due, I believe, to the change in administration. We who own PCs administer the PC and decide when to update software. But we who think we own phones and tablets do *not* administer the tablet. We do not decide when to update applications or operating systems. (Yes, there are options to disable or defer updates, in Android and iOS.)
It is the equipment supplier, or the cell service provider, who decides to update operating systems on these devices. And they have less incentive to update the operating system than we do. (I suspect updates to operating systems generate a lot of calls from customers, either because they are confused or the update broke some functionality.)
So I see the move to smart phones and tablets, and its corresponding shift of administration from user to provider, as a step in synchronizing hardware and operating system. And once hardware and operating system are synchronized, they are not two items but one. We may, in the future, see operating systems baked in to devices with no (or limited) ways to update them. Operating systems may be part of the device, burned into a ROM.
Tuesday, October 21, 2014
Cloud systems are the new mainframe
The history of computers can be divided (somewhat arbitrarily) into six periods. These are:
Mainframe
Timeshare (on mainframes)
Minicomputers
Desktop computers (includes pre-PC microcomputers, workstations, and laptops)
Servers and networked desktops
Mobile devices (phones and tablets)
I was going to add 'cloud systems' to the list as a seventh period, but I got to thinking.
Mainframe
Timeshare (on mainframes)
Minicomputers
Desktop computers (includes pre-PC microcomputers, workstations, and laptops)
Servers and networked desktops
Mobile devices (phones and tablets)
I was going to add 'cloud systems' to the list as a seventh period, but I got to thinking.
My six arbitrary periods of computing show definite trends. The first trend is size: computers became physically smaller in each successive period. Mainframe computers were (and are) large systems that occupy rooms. Minicomputers were the size of refrigerators. Desktop computers fit on (or under) a desk. Mobile devices are small enough to carry in a shirt pocket.
The next trend is cost. Each successive period has a lower cost than the previous one. Mainframes cost in the hundreds of thousands of dollars. Minicomputers in the tens of thousands. Desktop computers were typically under $3000 (although some did edge up near $10,000) and today are usually under $1000. Mobile device costs range from $50 to $500.
The third trend is administrative effort, or "load". Mainframes needed a team of well-trained attendants. Minicomputers needed one knowledgeable person to act as "system operator" or "sysop". Desktop computers could be administered by a geeky person in the home, or, in large offices, by a team of support staff (but fewer than one support person per PC). Mobile devices need... no one. (Well, technically they are administered by the tribal chieftains: Apple, Google, or Microsoft.)
Cloud systems defy these trends.
By "cloud systems", I mean the cloud services that are offered by Amazon.com, Microsoft, Google, and others. I am including all of the services: infrastructure as a service, platform as a service, software as a service, machine images, queue systems, compute engines, storage engines, web servers... the whole kaboodle.
Cloud systems are large and expensive. They also tend to be few in number, perhaps because they are large and expensive. And they have sizable teams of attendants: cloud systems are complex, and a large team is needed to keep everything running.
Cloud systems are much like mainframe computers.
The cloud services that are offered by vendors are much like the timesharing services offered by mainframe owners. With timesharing, customers could buy just as much computing time as they needed. Sound familiar? It's the model used by cloud computing.
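That shared billing model can be sketched in a few lines of Python. This is an illustration only, and the rates are hypothetical numbers chosen to show the idea, not actual timesharing or cloud prices.

```python
# Both timesharing (1960s-70s) and cloud computing bill for metered usage,
# not for ownership of the machine. The rates below are hypothetical.

def metered_cost(hours_used, rate_per_hour):
    """Customers pay only for the compute time they actually consume."""
    return hours_used * rate_per_hour

# A timesharing customer and a cloud customer are billed the same way;
# only the prices differ (again, hypothetical rates).
timeshare_bill = metered_cost(hours_used=40, rate_per_hour=15.00)  # $600.00
cloud_bill = metered_cost(hours_used=40, rate_per_hour=0.25)       # $10.00
```

The formula is identical in both eras; what changed is the price per unit of computing, not the commercial arrangement.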
We have, with cloud computing, returned to the mainframe era. This period has many similarities with the mainframe period. Mainframes were large, expensive to own, complex, and expensive to operate. Cloud systems are the same. The early mainframe period saw a number of competitors: IBM, NCR, CDC, Burroughs, Honeywell, and Univac, to name a few. Today we see competition between Amazon.com, Microsoft, Google, and others (including IBM).
Perhaps my "periods of computing history" form not so much a linear list as a cycle. Perhaps we are about to go "around" again, starting with the mainframe (or cloud) stage of expensive systems and evolving forward. What can we expect?
The mainframe period can be divided into two subperiods: before the System/360 and after. Before the IBM System/360, there was competition between companies and different designs. After the IBM System/360, companies standardized on that architecture. The System/360 design is still visible in mainframes of today.
An equivalent action in cloud systems would be the standardization of a cloud architecture. Perhaps the OpenStack software, perhaps Microsoft's Azure. I do not know which it will be. The key is for companies to standardize on one architecture. If it is a proprietary architecture, then that architecture's vendor is elevated to the role of industry leader, as IBM was with the System/360 (and later System/370) mainframes.
While companies are busy modifying their systems to conform to the industry standard platform, innovators develop technologies that allow for smaller versions. In the 1960s and 1970s, vendors introduced minicomputers. These were smaller than mainframes, less expensive, and easier to operate. For cloud systems, the equivalent would be... smaller than mainframe clouds, less expensive, and easier to operate. They would be less sophisticated than mainframe clouds, but "mini clouds" would still be useful.
In the late 1970s, technology advances led to the microcomputer, which could be purchased and used by a single person. As with mainframe computers, there were a variety of competing standards. After IBM introduced the Personal Computer, businesses (and individuals) elevated it to the industry standard. Equivalent events in cloud computing would mean the development of individual-sized cloud systems, small enough to be purchased by a single person.
The 1980s saw the rise of desktop computers. The 1990s saw the rise of networked computers, desktop and server. An equivalent for cloud would be connecting cloud systems to one another. Somehow I think this "inter-cloud connection" will occur earlier, perhaps in the "mini cloud" period. We already have the network hardware and protocols in place. Connecting cloud systems will probably require some high-level protocols, and maybe faster connections, but the work should be minimal.
I'm still thinking of adding "cloud systems" to my list of computing periods. But I'm pretty sure that it won't be the last entry.
Thursday, July 3, 2014
Bring back "minicomputer"
The term "minicomputer" is making a comeback.
Late last year, I attended a technical presentation in which the speaker referred to his smart phone as a "minicomputer".
This month, I read a magazine website that used the term "minicomputer", referring to an ARM device for testing Android version L.
Neither of these devices is a minicomputer.
The term "minicomputer" was coined in the mainframe era, when all computers (well, all electronic computers) were large, required special rooms with dedicated air conditioning, and were attended by a team of operators and field engineers. Minicomputers were smaller, being about the size of a refrigerator and needing only one or two people to care for them. Revolutionary at the time, minicomputers allowed corporate and college departments to set up their own computing environments.
I suspect that the term "mainframe" came into existence only after minicomputers obtained a noticeable presence.
In the late 1970s, the term "microcomputer" was used to describe the early personal computers (the Altair 8800, the IMSAI 8080, the Radio Shack TRS-80). But back to minicomputers.
For me and many others, the term "minicomputer" will always represent the department-sized computers made by Digital Equipment Corporation or Data General. But am I being selfish? Do I have the right to lock the term "minicomputer" to that definition?
Upon consideration, the idea of re-introducing the term "minicomputer" may be reasonable. We don't use the term today. Computers are mainframes (that term is still in use), servers, desktops, laptops, tablets, phones, phablets, or ... whatever the open-board Arduino and Raspberry Pi devices are called. So the term "minicomputer" has been, in a sense, abandoned. As an abandoned term, it can be re-purposed.
But what devices should be tagged as minicomputers? The root "mini" implies small, as it does in "minimum" or "minimize". A "minicomputer" should therefore be "smaller than a (typical) computer".
What is a typical computer? In the 1960s, they were the large mainframes. And while mainframes exist today, one can hardly argue that they are typical: laptops, tablets, and phones are all outselling them. Embedded systems, existing in cars, microwave ovens, and cameras, are probably the most common form of computing device, but I consider them out of the running. First, they are already small and a smaller computer would be small indeed. Second, most people use those devices without thinking about the computer inside. They use a car, not a "car equipped with onboard computers".
So a minicomputer is something smaller than a desktop PC, a laptop PC, a tablet, or a smartphone.
I'm leaning towards the bare-board computers: the Arduino, the BeagleBone, the Raspberry Pi, and their brethren. These are all small computers in the physical sense, smaller than desktops and laptops. They are also small in power; typically they have low-end processors and limited memory and storage, so they are "smaller" (that is, less capable) than a smartphone.
The open-board computers (excuse me, minicomputers) are also a very small portion of the market, just as their refrigerator-sized namesakes were.
Let's go have some fun with minicomputers!