Tuesday, April 23, 2019
Full-stack developers and the split between development and system administration
But here is a question: Why was there a split in functions? Why did we have separate roles for developers and system administrators? Why didn't we have combined roles from the beginning?
Well, at the very beginning of the modern computing era, we did have a single role. But things became complicated, and specialization was profitable for the providers of computers. Let's go back in time.
We're going way back in time, back before the current cloud-based, container-driven age. Back before the "old school" web age. Before the age of networked (but not internet-connected) PCs, and even before the PC era. We're going further back, before minicomputers and before commercial mainframes such as the IBM System/360.
We're going back to the dawn of modern electronic computing. This was a time before the operating system: individuals who wanted to use a computer had to write their own code (machine code, not a high-level language such as COBOL), and those programs managed memory and manipulated input-output devices such as card readers and line printers. A program had total control of the computer -- there was no multiprogramming -- and it ran until it finished. When one programmer was finished with the computer, a second programmer could use it.
In this age, the programmer was a "full stack" developer, handling memory allocation, data structures, input and output routines, and business logic. There were no databases, no web servers, and no authentication protocols, but the programmer "did it all", including scheduling time on the computer with other programmers.
Once organizations developed programs that they found useful, especially programs that had to be run on a regular basis, they dedicated a person to the scheduling and running of those tasks. That person's job was to ensure that the important programs were run on the right day, at the right time, with the right resources (card decks and magnetic tapes).
Computer manufacturers provided people for those roles, and also provided training for client employees to learn the skills of the "system operator". There was a profit for the manufacturer -- and a cost to be avoided (or at least minimized) by the client. Hence, only a few people were given the training.
Of the five "waves" of computing technology (mainframes, minicomputers, personal computers, networked PCs, and web servers), most started with a brief period of "one person does it all" and then shifted to a model that divided labor among specialists. The mainframe world split work between programmers and system operators (and later, database administrators). Personal computers, by their very nature, started with one person doing everything, but later saw specialists for word processing, databases, and desktop publishing. Networked PCs saw specialization too, with enterprise administrators (such as Windows domain administrators) and programmers each learning different skills.
It was the first specialization of tasks, in the early mainframe era, that set the tone for later specializations.
Today, we're moving away from specialization. I suspect that the "full stack" engineer is desired by managers who have tired of the arguments between specialists. Companies don't want to hear sysadmins and programmers bickering about who is at fault when an error occurs; they want solutions. Forcing sysadmins and programmers to "wear the same hat" eliminates the arguments. (Or so managers hope.)
The specialization of tasks on the different computing platforms happened because it was more efficient. The different jobs required different skills, and it was easier (and cheaper) to train some individuals for some tasks and other individuals for other tasks, and manage the two groups.
Perhaps the relative costs have changed. Perhaps, with our current technology, it is more difficult (and more expensive) to manage groups of specialists, and it is cheaper to train full-stack developers. That may say more about management skills than it does about technical skills.
Wednesday, January 24, 2018
Cloud computing is repeating history
Something has changed in Azure, and I can no longer deploy a new version to the production servers. My code works; I can test it locally. Something in the deployment sequence fails. This is a test project, using the free level of Azure, which means no monthly costs but also means no support -- other than the community help pages.
There are a few glorious advances in IT, advances which stand out above the others. They include the PC revolution (which saw individuals purchasing and using computers), the GUI (which saw people untrained in computer science using computers), and the smartphone (which saw lots more people using computers for lots more sophisticated tasks).
The PC revolution was a big change. Prior to personal computers (whether IBM PCs, Apple IIs, or Commodore 64s), computers were large, expensive, and complicated; they were especially difficult to administer. Mainframes and even minicomputers were so large and expensive that an individual could afford one only if enormously wealthy, and only with lots of time to read manuals and try different configurations to make the thing work.
The consumer PCs changed all of that. They were expensive, but within the range of the middle class. They required little or no administration effort. (The Commodore 64 was especially easy: plug it in, attach to a television, and turn it on.)
Apple made the consumer PC easier to use with the Macintosh. The graphical user interface (lifted from Xerox PARC's Alto, and later copied by Microsoft Windows) made many operations and concepts consistent. Configuration was buried, and sometimes options were reduced to "the way Apple wants you to do it".
It strikes me that cloud computing is in a "mainframe phase". It is large and complex, and while an individual can create an account (even a free account), the complexity of the platform and the time necessary to learn and use it are significant.
My issue with Microsoft Azure is precisely that. Something has changed, and the service behaves differently than it did in the past. (It's not my code; the change is in the deployment of my app.) I don't think that I have changed something in Azure's configuration -- although I could have.
The problem is that once you go beyond the 'three easy steps to deploy a web app', Azure is a vast and intimidating beast with lots of settings, each with new terminology. I could poke at various settings, but will that fix the problem or make things worse?
From my view, cloud computing is a large, complex system that requires lots of knowledge and expertise. In other words, it is much like a mainframe. (Except, of course, you don't need a large room dedicated to the equipment.)
The "starter plans" (often free) are not the equivalent of a PC. They are merely the same enterprise-level plans with certain features turned off.
A PC is different from a mainframe reduced to tabletop size. Both have CPUs and memory and peripheral devices and operating systems, but they are two different creatures. PCs have fewer options, fewer settings, fewer things you (the user) can get wrong.
Cloud computing is still at the "mainframe level" of options and settings. It's big and complicated, and it requires a lot of expertise to keep it running.
If we repeat history, we can expect companies to offer smaller, simpler versions of cloud computing. The advantage will be an easier learning curve and less required expertise; the disadvantage will be lower functionality. (Just as minicomputers were easier and less capable than mainframes and PCs were easier and less capable than minicomputers.)
I'll go out on a limb and predict that the companies who offer simpler cloud platforms will not be the current big providers (Amazon.com, Microsoft, Google). Mainframes were challenged by minicomputers from new vendors, not the existing leaders. PCs were initially constructed by hobbyists from kits. Soon after, companies such as Radio Shack, Commodore, and the newcomer Apple offered fully-assembled, ready-to-run computers. IBM offered the PC only after the success of these upstarts.
The driver for simpler cloud platforms will be cost -- direct and indirect, mostly indirect. The "cloud computing is a mainframe" analogy is not perfect, as the billed costs for cloud platforms can be low. The expense is not in the hardware but in the time to make the thing work. Current cloud platforms require expertise, and that expertise is not cheap. Companies are willing to pay for that expertise... for now.
I expect that we will see competition to the big cloud platforms, and the marketing will focus on ease of use and low Total Cost of Ownership (TCO). The newcomers will offer simpler clouds, sacrificing performance for reduced administration cost.
My project is currently stuck. Deployments fail, so I cannot update my app. Support is not really available, so I must rely on the limited web pages and perhaps trial and error. I may have to create a new app in Azure and copy my existing code to it. I'm not happy with the experience.
I'm also looking for a simpler cloud platform.
Wednesday, October 19, 2016
We prefer horizontal layers, not vertical stacks
Looking back at the 60-plus years of computer systems, we can see a pattern of design preferences. That pattern is an initial preference for vertical design (that is, a complete system from top to bottom) followed by a change to a horizontal divide between a platform and applications on that platform.
A few examples include mainframe computers, word processors, and smart phones.
Mainframe computers, in the early part of the mainframe age, were special-purpose machines. IBM changed the game with its System/360, a general-purpose computer. The S/360 could be used by commercial, scientific, or government organizations. It provided a common platform upon which application programs ran. The design was revolutionary, and it has stayed with us. Minicomputers followed the "platform and applications" pattern, as did microcomputers and, later, IBM's own Personal Computer.
When we think of the phrase "word processor", we think of software, most often Microsoft's "Word" application (which runs on the Windows platform). But word processors were not always purely software. The original word processors were smart typewriters, machines with enhanced capabilities. In the mid-1970s, a word processor was a small computer with a keyboard, display, processing unit, floppy disks for storage, a printer, and software to make it all go.
But word processors as hardware did not last long. We moved away from the all-in-one design. In its place we used the "application on platform" approach, using PCs as the hardware and a word processing application program.
More recently, smart phones have become the platform of choice for photography, music, and navigation. We have moved away from cameras (a complete set of hardware and software for taking pictures), moved away from MP3 players (a complete set of hardware and software for playing music), and moved away from navigation units (a complete set of hardware and software for providing directions). In their place we use smart phones.
(Yes, I know that some people still prefer discrete cameras, and some people still use discrete navigation systems. I myself still use an MP3 player. But the number of people who use discrete devices for these tasks is small.)
I tried thinking of single-use devices that are still popular, and none came to mind. (I also tried thinking of applications on platforms that moved back to single-use devices, and failed there as well.)
It seems we have a definite preference for the "application on platform" design.
What does this mean for the future? For smart phones, possibly not so much -- other than they will remain popular until a new platform arrives. For the "internet of things", it means that we will see a number of task-specific devices such as thermostats and door locks until an "internet of things" platform comes along, and then all of those task-specific devices will become obsolete (like the task-specific mainframes or word processor hardware).
For cloud systems, perhaps the cloud is the platform and the virtual servers are the applications. Rather than discrete web servers and database servers the cloud is the platform for web server and database server "applications" that will be containerized versions of the software. The "application on platform" pattern means that cloud and containers will endure for some time, and is a good choice for architecture.
Sunday, August 14, 2016
PC-DOS killed the variants of programming languages
Many languages have versions. C# has had different releases, as has Java. Perl is transitioning from version 5 (which had multiple sub-versions) to version 6 (which will most likely have multiple sub-versions). But that's not what I'm talking about.
Some years ago, languages had different dialects. There were multiple implementations with different features. COBOL and FORTRAN each had machine-specific versions. But BASIC had the most variants. For example:
- Most BASICs used the "OPEN" statement to open files, but HP BASIC and GE BASIC used the "FILES" statement which listed the names of all files used in the program. (An OPEN statement lists only one file, and a program may use multiple OPEN statements.)
- Most BASICs used parentheses to enclose variable subscripts, but some used square brackets.
- Some BASICs had "ON n GOTO" statements, but some used "GOTO OF n" statements.
- Some BASICs allowed the apostrophe as a comment indicator; others did not.
- Some BASICs allowed statement modifiers, such as "FOR" or "WHILE" at the end of a statement; others did not.
These are just some of the differences in the dialects of BASIC. There were others.
What interests me is not that BASIC had so many variants, but that languages since then have not. The last attempt at a dialect of a language was Microsoft's Visual J++, a variant of Java. Microsoft was challenged in court by Sun, and no one has attempted a special version of a language since. Because of this, I place the demise of variants in the year 2000.
There are two factors that come to mind. One is standards, the other is open source.
BASIC was introduced to the industry in the 1960s. There was no standard for BASIC, except perhaps for the Dartmouth implementation, which was the first implementation. The expectation of standards has risen since then, with standards for C, C++, Java, C#, JavaScript, and many others. With clear standards, different implementations of languages would be fairly close.
The argument that open source prevented the creation of variants of languages makes some sense. After all, one does not need to create a new, special version of a language when the "real" language is available for free. Why invest effort into a custom implementation? And the timing of open source is coincidental with the demise of variants, with open source rising just as language variants disappeared.
But the explanation is different, I think. It was not standards (or standards committees) and it was not open source that killed variants of languages. It was the PC and Windows.
The IBM PC and PC-DOS saw the standardization and commoditization of hardware, and the separation of software from hardware.
In the 1960s and 1970s, mainframe vendors and minicomputer vendors competed for customer business. They sold hardware, operating systems, and software. They needed ways to distinguish their offerings, and BASIC was one way that they could do that.
Why BASIC? There were several reasons. It was a popular language. It was easily implemented. It had no official standard, so implementors could add whatever features they wanted. A hardware manufacturer could offer their own, special version of BASIC as a productivity tool. IBM continued this "tradition" with BASIC in the ROM of the IBM PC and an advanced BASIC with PC-DOS.
But PC compatibles did not offer BASIC, and didn't need to. When manufacturers figured out how to build compatible computers, the factors for selecting a PC compatible were compatibility and price, not a special version of BASIC. Software would be acquired separately from the hardware.
Mainframes and minicomputers were expensive systems, sold with operating systems and software. PCs were different creatures, sold with an operating system but not software.
It's an idea that holds today.
With software being sold (or distributed, as open source) separately from the hardware, there is no need to build variants. Commercial languages (C#, Java, Swift) are managed by the company, which has an incentive for standardization of the language. Open source languages (Perl, Python, Ruby) can be had "for free", so why build a special version -- especially when that special version will need constant effort to match the changes in the "original"? Standard-based languages (C, C++) offer certainty to customers, and variants on them offer little advantage.
The only language that has variants today seems to be SQL. That makes sense, as the SQL interpreter is bundled with the database. Creating a variant is a way of distinguishing a product from the competition.
I expect that the commercial languages will continue to evolve along consistent lines. Microsoft will enhance C#, but there will be only the Microsoft implementation (or at least, the only implementation of significance). Oracle will maintain Java. Apple will maintain Swift.
The open source languages will evolve too. But Perl, Python, and Ruby will continue to see single implementations.
SQL will continue to be the outlier. It will continue to see variants, as different database vendors supply them. It will be interesting to see what happens with the various NoSQL databases.
Tuesday, October 21, 2014
Cloud systems are the new mainframe
Consider my six (admittedly arbitrary) periods of computing:
- Mainframes
- Timesharing (on mainframes)
- Minicomputers
- Desktop computers (including pre-PC microcomputers, workstations, and laptops)
- Servers and networked desktops
- Mobile devices (phones and tablets)
I was going to add 'cloud systems' to the list as a seventh period, but I got to thinking.
My six arbitrary periods of computing show definite trends. The first trend is size: computers became physically smaller in each successive period. Mainframe computers were (and are) large systems that occupy rooms. Minicomputers were the sizes of refrigerators. Desktop computers fit on (or under) a desk. Mobile devices are small enough to carry in a shirt pocket.
The next trend is cost. Each successive period has a lower cost than the previous one. Mainframes cost in the hundreds of thousands of dollars. Minicomputers in the tens of thousands. Desktop computers were typically under $3000 (although some did edge up near $10,000) and today are usually under $1000. Mobile device costs range from $50 to $500.
The third trend is administrative effort or "load". Mainframes needed a team of well-trained attendants. Minicomputers needed one knowledgeable person to act as "system operator" or "sysop". Desktop computers could be administered by a geeky person in the home, or, for large offices, by a team of support persons (but fewer than one support person per PC). Mobile devices need... no one. (Well, technically they are administered by the tribal chieftains: Apple, Google, or Microsoft.)
Cloud systems defy these trends.
By "cloud systems", I mean the cloud services that are offered by Amazon.com, Microsoft, Google, and others. I am including all of the services: infrastructure as a service, platform as a service, software as a service, machine images, queue systems, compute engines, storage engines, web servers... the whole kaboodle.
Cloud systems are large and expensive, and perhaps for those reasons they are limited in number. They are also complex, and each requires a sizable team of attendants to keep everything running.
Cloud systems are much like mainframe computers.
The cloud services that are offered by vendors are much like the timesharing services offered by mainframe owners. With timesharing, customers could buy just as much computing time as they needed. Sound familiar? It's the model used by cloud computing.
We have, with cloud computing, returned to the mainframe era. Mainframes were large, complex, expensive to own, and expensive to operate; cloud systems are the same. The early mainframe period saw a number of competitors: IBM, NCR, CDC, Burroughs, Honeywell, and Univac, to name a few. Today we see competition among Amazon.com, Microsoft, Google, and others (including IBM).
Perhaps my "periods of computing history" is not so much a linear list as a cycle. Perhaps we are about to go "around" again, starting with the mainframe (or cloud) stage of expensive systems and evolve forward. What can we expect?
The mainframe period can be divided into two subperiods: before the System/360 and after. Before the IBM System/360, there was competition between companies and different designs. After the IBM System/360, companies standardized on that architecture. The System/360 design is still visible in mainframes of today.
An equivalent action in cloud systems would be the standardization of a cloud architecture. Perhaps the OpenStack software, perhaps Microsoft's Azure. I do not know which it will be. The key is for companies to standardize on one architecture. If it is a proprietary architecture, then that architecture's vendor is elevated to the role of industry leader, as IBM was with the System/360 (and later System/370) mainframes.
While companies are busy modifying their systems to conform to the industry standard platform, innovators develop technologies that allow for smaller versions. In the 1960s and 1970s, vendors introduced minicomputers. These were smaller than mainframes, less expensive, and easier to operate. For cloud systems, the equivalent would be... smaller than mainframe clouds, less expensive, and easier to operate. They would be less sophisticated than mainframe clouds, but "mini clouds" would still be useful.
In the late 1970s, technology advances led to the microcomputer, which could be purchased and used by a single person. As with mainframe computers, there were a variety of competing standards. After IBM introduced the Personal Computer, businesses (and individuals) elevated it to the industry standard. The equivalent event in the cloud would be the development of individual-sized cloud systems, small enough to be purchased by a single person.
The 1980s saw the rise of desktop computers. The 1990s saw the rise of networked computers, desktop and server. An equivalent for cloud would be connecting cloud systems to one another. Somehow I think this "inter-cloud connection" will occur earlier, perhaps in the "mini cloud" period. We already have the network hardware and protocols in place. Connecting cloud systems will probably require some high-level protocols, and maybe faster connections, but the work should be minimal.
I'm still thinking of adding "cloud systems" to my list of computing periods. But I'm pretty sure that it won't be the last entry.
Thursday, July 31, 2014
Not so special
First were the mainframes. Large, expensive computers ordered, constructed, delivered, and used as a single entity. Only governments and wealthy corporations could own (or lease) a computer. Once acquired, the device was a singleton: it was "the computer". It was special.
Minicomputers reduced the specialness of computers. Instead of a single computer, a company (or a university) could purchase several minicomputers. Computers were no longer single entities in the organization. Instead of "the computer" we had "the computer for accounting" or "the computer for the physics department".
The opposite of "special" is "commodity", and personal computers brought us into a world of commodity computers. A company could have hundreds (or thousands) of computers, all identical.
Yet some computers retained their specialness. E-mail servers were singletons -- and therefore special. Web servers were special. Database servers were special.
Cloud computing reduces specialness again. With cloud systems, we can create virtual systems on demand, from pre-stocked images. We can store an image of a web server and when needed, instantiate a copy and start using it. We have not a single web server but as many as we need. The same holds for database servers. (Of course, cloud systems are designed to use multiple web servers and multiple database servers.)
In the end, specialness goes away. Computers, all computers, become commodities. They are not special.
Monday, March 10, 2014
IBM makes... mainframes
In the 1940s and 1950s, computing devices were specific to the task. We didn't have general purpose computers; we had tabulators and sorters and various types of machines. The very early electronic calculators were little more than adding machines -- addition was their only operation. The later machines were computers, albeit specialized, usually for military or commercial needs. (Which made some sense, as only the government and large corporations could afford the machines.)
IBM's System/360 changed the game. It was a general purpose machine, suitable for use by government, military, or commercial organizations. IBM's System/370 was a step up with virtual memory, dual processors, and built-in floating point arithmetic.
But these were still large, expensive machines, and these large, expensive machines defined the term "mainframe". IBM was the "big company that makes big computers".
Reluctantly, IBM entered the minicomputer market to compete with companies like DEC and Data General.
Also reluctantly, IBM entered the PC market to compete with Apple, Radio Shack, and other companies that were making inroads into the corporate world.
But I think, in its heart, IBM remained a mainframe company.
Why do I think that? Because over the years IBM has adjusted its product line. Look at what they have stopped producing:
- Typewriters
- Photocopiers
- Disk drives
- Tape drives
- Minicomputers
- Microcomputers (PCs)
- Laptop computers
- Printers for PCs
And look at what they have kept in their product line:
- Mainframe computers
- Servers
- Cloud-based services
- Watson
The last item, Watson, is particularly telling. Watson is IBM's super-sized information storage and retrieval system. It is quite sophisticated and has appeared (successfully) on the "Jeopardy!" TV game show.
Watson is a product that IBM is marketing to large companies (and probably the government). They do not offer a "junior" version for smaller companies or university departments. They do not offer a "personal" version for individuals. IBM's Watson is today's equivalent of the System/360 computer: large, expensive, and made for wealthy clients.
So IBM has come full circle, from the System/360 to minicomputers to personal computers and back to Watson. Will they ever offer smaller versions of Watson? Perhaps, if other companies enter the market and force IBM to respond.
We PC revolutionaries wanted to change the world. We wanted to bring computing to the masses. And we wanted to destroy IBM (or at least take it down a peg or two). Well, we did change the world. We did bring computing to the masses. We did not destroy IBM, or its mainframes. IBM is still the "big company that makes big computers".
Tuesday, September 17, 2013
When programming, think like a computer
The idea of thinking like a computer was brought home to me while reading William Conley's "Computer Optimization Techniques", which discusses solutions to integer programming problems and related problems. Many of these problems can be solved with brute-force calculations, evaluating every possible solution and identifying the most profitable (or least expensive).
The programs for these brute-force methods are short and simple; even in FORTRAN, they run to fewer than fifty lines. Their brevity is due to their simplicity. There is no clever coding, no attempt to optimize the algorithm. The programs take advantage of the computer's strength: fast computation.
Humans think very differently. They tire quickly of routine calculations. They can identify patterns and have insights into shortcuts for algorithms. They can take creative leaps to solutions. These are all survival skills, useful for dealing with an uncertain environment and capable predators. But they are quite difficult to encode into a computer program. They are so difficult, in fact, that it is often more efficient to use brute-force calculations without insights and creative leaps; the time spent making the program "smart" is larger than the time saved by the improved program.
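A sketch of that brute-force style, in Python rather than FORTRAN (the numbers are invented for illustration, not taken from Conley's book): enumerate every possible selection of items in a tiny knapsack problem and keep the most profitable one that fits.

```python
from itertools import product

# A tiny 0/1 knapsack instance -- profits, weights, and the limit
# are invented for illustration.
profits = [10, 13, 7, 8, 12]
weights = [4, 6, 3, 5, 5]
limit = 12  # maximum total weight allowed

best_profit = 0
best_choice = None

# No cleverness: evaluate all 2**5 = 32 possible selections.
for choice in product((0, 1), repeat=len(profits)):
    weight = sum(w for w, c in zip(weights, choice) if c)
    if weight > limit:
        continue  # infeasible selection; skip it
    profit = sum(p for p, c in zip(profits, choice) if c)
    if profit > best_profit:
        best_profit, best_choice = profit, choice

print(best_profit, best_choice)  # prints: 29 (1, 0, 1, 0, 1)
```

The loop has no insight and does no pruning; it simply relies on the computer's speed. Of course, the work doubles with each added item, which is why brute force eventually runs out of steam.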
Brute-force is not always the best method for calculations. Sometimes you need a smart program, because the number of computations is staggering. In those cases, it is better to invest the time in improvements. (To his credit, Conley shows techniques to reduce the computations, sometimes by increasing the complexity of the code.)
Computing efficiency (that is, "smart" programs) has been a concern since the first computing machines were made. Efficiency was a necessity at first, but the need for it has dropped over time. Mainframe computers became faster, which allowed for "sloppy" programs ("sloppy" meaning "anything less than maximum efficiency").
Minicomputers were slower than mainframes, significantly less expensive, and another step away from the need for optimized, "smart" programs. PCs were another step. Today, smart phones have more computing power than PCs of a few years ago, at a fraction of the price. Cloud computing, a separate branch in the evolution of computing, offers cheap, readily-available computing power.
I won't claim that computing power is (or will ever be) "too cheap to meter". But it is cheap, and it is plentiful. And with cheap and plentiful computing power, we can build programs that use simple methods.
When writing a computer program, think like a computer. Start with a simple algorithm, one that is not clever. Chances are, it will be good enough.
Wednesday, May 8, 2013
Computers are temples, but tablets are servants
In short, "real" computers are temples of worship, and mobile computers are servants.
Mainframe computers have long seen the metaphor of religion, with their attendants referenced as "high priests". Personal computers have not seen such comparisons, but I think the "temple" metaphor holds. (Or perhaps we should say "shrine".)
"Real" computers are non-mobile. They are fixed in place. When we use a computer, we go to the computer. The one exception is the laptop, which we can consider a portable shrine.
Mobile computers, in contrast, come with us. We do not go to them; they are nearby and ready for our requests.
Tablets and smartphones are intimate. They come with us to the grocery store, the exercise club, and the library. Mainframes, of course, do not come with us anywhere, and personal computers stay at home. Laptops occasionally come with us, but only with significant effort. (Carry bag, laptop, power adapter and cable, extra VGA cable, VGA-DVI adapter, and goodness-knows-what.)
It's nice to visit a temple, but it's nicer to have a ready and capable servant.