Showing posts with label operating systems. Show all posts

Thursday, June 16, 2022

Consolidation of processors, and more

We're in an age of consolidation. PCs are moving to the ARM processor as a standard. Apple has already replaced their entire line with ARM-based processors. Microsoft has built ARM-based laptops. The advantages of ARM (lower production cost, lower heat dissipation) make such a move worthwhile.

If this consolidation extends to all manufacturers, then we would see a uniform processor architecture, something that we have not seen in the PC era. While the IBM PC set a standard with the Intel 8088 processor, other computers at the time used other processors, mostly the Zilog Z-80 and the MOS Technology 6502. When Apple shifted to the Macintosh line, it changed to the Motorola 68000 processor.

Is consolidation limited to processors?

There are, today, four major operating systems: Windows, macOS, z/OS, and Linux. Could we see a similar consolidation among operating systems? Microsoft is adding Linux to Windows with WSL, which melds Linux into Windows. Apple's macOS is based on BSD Unix, which is not that far from Linux. IBM's z/OS supports Linux in virtual machines. IBM might, one day, replace z/OS with Linux; they certainly have the ability to do so.

If both of these consolidations were to occur, then we would see a uniform processor architecture and a uniform operating system, something that has not occurred in the computing age.

(I'm not so dreamy-eyed that I believe this would happen. I expect Microsoft, Apple, and IBM to keep some degree of proprietary extensions to their systems. But let's dream a little.)

What effect would a uniform processor architecture and uniform operating system have on programming languages?

At first glance, one might think that there would be no effect. Programming languages are different things from processors and operating systems, handling different tasks. Different programming languages are good at different things, and we want to do different things, so why not keep different programming languages?

It is true that different programming languages are good at different things, but that doesn't mean that each and every programming language has unique strengths. Several programming languages have capabilities that overlap, some in multiple areas, and some almost completely: C# and VB.NET, for example, or C# and Java, two object-oriented languages that are good for large-scale projects.

With a single processor architecture and a single operating system, Java loses one of its selling points. Java was designed to run on multiple platforms; its motto was "Write once, run anywhere." In the mid-1990s, such a goal made sense: there were different processors and different operating systems. But with a uniform architecture and uniform operating system, Java loses that point. The language remains a solid performer, so the loss is not fatal. But the argument for Java weakens.

A pair of overlapping languages is VB.NET and C#. Both are made by Microsoft, and both are made for Windows. Or were made for Windows; they are now available on multiple platforms. They overlap quite a bit. Do we need both? Anything one can do in C# one can also do in VB.NET, and the reverse is true. There is some evidence that Microsoft wants to drop VB.NET -- although there is also evidence that developers want to keep programming in VB.NET. That creates tension for Microsoft.

I suspect that specialty languages such as SQL and JavaScript will remain. SQL has embedded itself in databases, and JavaScript has embedded itself in web browsers.

What about other popular languages? What about COBOL, and FORTRAN, and Python, and R, and Delphi (which oddly still ranks high in the Tiobe index)?

I see no reason for any of them to go away. Each has a large base of existing code; converting those programs to another language would be a large effort with little benefit.

And I think that small, niche languages will remain. Programming languages such as AWK will remain because they are small, easy to use, good at what they do, and they can be maintained by a small team.

The bottom line is that the decision is not practical and logical, but emotional. We have multiple programming languages not because different languages are good at different things (although they are) but because we want multiple programming languages. Programmers become comfortable with programming languages; different programmers choose different programming languages.

Thursday, May 5, 2016

Where have all the operating systems gone?

We used to have lots of operating systems. Every hardware manufacturer built their own operating systems. Large manufacturers like IBM and DEC had multiple operating systems, introducing new ones with new hardware.

(It's been said that DEC became a computer company by accident. They really wanted to write operating systems, but they needed processors to run them and compilers and editors to give them something to do, so they ended up building everything. It's a reasonable theory, given the number of operating systems they produced.)

In the 1970s, CP/M was an attempt at an operating system for different hardware platforms. It wasn't the first; Unix had been built for portability across machines before it. It wasn't the only one; the UCSD p-System used a virtual processor, much like Java's virtual machine, and ran on various hardware.

Today we also see lots of operating systems. Commonly used ones include Windows, Linux, Mac OS, iOS, Android, Chrome OS, and even watchOS. But are they really different?

Android and Chrome OS are really variants of Linux. Linux itself is a clone of Unix. Mac OS is derived, by way of NeXTSTEP, from the Berkeley Software Distribution (BSD) of Unix. iOS and watchOS are, according to Wikipedia, "Unix-like", and I assume that they are slim versions of that same BSD lineage with added components.

Which means that our list of commonly-used operating systems becomes:

  • Windows
  • Unix

That's a rather small list. (I'm excluding the operating systems used for special purposes, such as embedded systems in automobiles or machinery or network routers.)

I'm not sure that this reduction in operating systems, this approach to a monoculture, is a good thing. Nor am I convinced that it is a bad thing. After all, a common operating system (or two commonly-used operating systems) means that lots of people know how they work. It means that software written for one variant can be easily ported to another variant.

I do feel some sadness at the loss of the variety of earlier years. The early days of microcomputers saw wide variations of operating systems, a kind of Cambrian explosion of ideas and implementations. Different vendors offered different ideas, in hardware and software. The industry had a different feel from today's world of uniform PCs and standard Windows installations. (The variations between versions of Windows, or even between the distros of Linux, are much smaller than the differences between a Data General minicomputer and a DEC minicomputer.)

Settling on a single operating system is a way of settling on a solution. We have a problem, and *this* operating system, *this* solution, is how we address it. We've settled on other standards: character sets, languages (C# and Java are not that different), storage devices, and keyboards. Once we pick a solution and make it a standard, we tend to not think about it. (Is anyone thinking of new keyboard layouts? New character sets?) Operating systems seem to be settling.


Thursday, April 2, 2015

Mobile operating systems break the illusion of control

The mobile operating systems iOS and Android are qualitatively different from previous operating systems like Windows, MacOS, and Linux. They break the illusion that an application has control; this illusion has been with us since the first operating systems. To understand the illusion, and how mobile operating systems are different, we must understand the origins of operating systems.

The dawn of computers saw the hardware emerge from electro-mechanical relays, but no software. The first computers were programmed with wires connecting components to other components. A computer was a custom-purpose device, designed and built to perform one calculation. Shortly after, the "programming wires" were isolated to removable boards, which allowed a "program" to be removed from the computer and stored for later use.

The first programs (in the sense we know them) were sequences of numerical values that could be loaded into a computer's memory. They were a softer variant of the wired plug-boards in earlier computers. Building the sequence of numerical values was tedious; one had to understand not only the problem to be solved but also the processor's instruction set. Such sequences are now called "machine language".

Programmers, being what they are, developed programs to ease the chore of creating those sequences of machine-language values. These programs were the first assemblers; they converted symbols into executable sequences. A programmer could work with the far more readable symbolic code and convert the symbols to an executable program when his changes were done.
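The idea can be illustrated with a toy assembler. The mnemonics and opcode values below are invented for the sketch, not taken from any real instruction set:

```python
# A toy single-pass assembler: maps invented mnemonics to invented opcodes,
# turning symbolic source lines into a flat sequence of numeric values.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Convert symbolic instructions like 'ADD 3' into machine values."""
    code = []
    for line in lines:
        parts = line.split()
        code.append(OPCODES[parts[0]])           # opcode value
        code.extend(int(p) for p in parts[1:])   # operand values, if any
    return code
```

Calling `assemble(["LOAD 5", "ADD 3", "HALT"])` yields `[1, 5, 2, 3, 255]` -- the same kind of numeric sequence a programmer once built by hand.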

Up to this point, the operation of the computer has been a simple one. Create a program, insert it into the computer, and let it run. The program instructed the computer, and the computer performed the calculations.

There were no operating systems. It was the computer and the program, alone together in the small universe of computing hardware. The program was in charge and the computer obeyed its instructions. (Blindly and literally, which meant that the programmer had to be precise and complete in his description of the operations. That aspect of programming remains with us today.)

The first operating systems were little more than loaders for programs. Programmers found that the task of loading an executable program was a chore, and programmers, being what they are, created programs to ease that task. A loader could start with a collection of programs (usually stored in a deck of punched cards), load the first one, let it run, and then load and run the next program.

Of course, the loader was still in memory, and the loaded programs had to avoid that memory; if they overwrote the loader, it would be unable to continue. I imagine that the earliest arrangements worked by agreement: the loader would use one block of addresses, and the loaded programs would use the rest of memory but never the block dedicated to the loader. The running program was still "in charge" of the computer, but it had to honor the "loader agreement".
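A minimal sketch of that agreement, with invented addresses and sizes:

```python
# A toy model of the "loader agreement": memory is a flat list of cells,
# the loader occupies cells 0..LOADER_TOP, and every loaded program must be
# placed above that boundary. All addresses and sizes are invented.
LOADER_TOP = 16      # last cell reserved for the loader
MEMORY_SIZE = 64

memory = [0] * MEMORY_SIZE

def load_program(program, at):
    """Copy a program (a list of values) into memory, honoring the agreement."""
    if at <= LOADER_TOP:
        raise ValueError("program would overwrite the loader")
    memory[at:at + len(program)] = program
    return at   # the load address, where execution would begin
```

Note that nothing enforces the agreement except the loader's own check -- exactly the fragile arrangement described above, which later memory protection replaced.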

This notion of "being in charge" is important. It is a notion that has been maintained by operating systems -- up to mobile operating systems. More on that later.

Operating systems grew out of the concept of the loader. They became more powerful, allowing more sharing of the expensive computer. They assumed the following functions:

  • Allocation and protection of memory (you can use only what you are assigned)
  • Control of physical devices (you must request operations through the operating system)
  • Allocation of CPU (time slices)

These are the typical functions we associate with operating systems.
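The time-slicing function, for example, can be sketched as a toy round-robin scheduler; the job names and slice sizes here are invented:

```python
from collections import deque

# A toy round-robin scheduler illustrating CPU time slices: each "program"
# is just a number of remaining work units; the scheduler gives each one a
# fixed slice in turn until all are finished.
def run_round_robin(jobs, slice_size=2):
    """jobs: dict of name -> work units. Returns names in completion order."""
    queue = deque(jobs.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= min(slice_size, remaining)   # spend one time slice
        if remaining == 0:
            finished.append(name)
        else:
            queue.append((name, remaining))       # back of the line
    return finished
```

With jobs `{"a": 1, "b": 4, "c": 2}`, the short jobs finish first even though "b" arrived earlier -- the sharing of the expensive computer that the text describes.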

Over the years, we have extended operating systems and continued to use them. Yet in all of that time, from IBM's System/360 to DEC's VMS to Microsoft's Windows, the understanding is that our program (our application), once loaded, is "in control" until it exits. This is an illusion, as our application can do very little on its own. It must request all resources from the operating system, including memory. It must request all actions through the operating system, including operations on devices (displaying a window on a screen, sending text to a printer, saving a file to disk).
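This is visible even from high-level code. Python's `os` module, for instance, exposes thin wrappers over the operating system's own calls; "saving a file" is really a series of requests:

```python
import os
import tempfile

# Even "the application saves a file" is a series of requests to the
# operating system: open, write, and close are wrappers over system calls.
def save_via_os(data: bytes) -> str:
    path = os.path.join(tempfile.mkdtemp(), "out.bin")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)  # ask the OS for a file
    try:
        os.write(fd, data)    # ask the OS to perform the write
    finally:
        os.close(fd)          # hand the descriptor back to the OS
    return path
```

At no point does the application touch the disk itself; it can only ask, and the operating system acts.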

This illusion persists, I believe, because of the education of programmers (and system operators, and computer designers) -- not merely through formal training but also through informal channels. Our phrases indicate this: "Microsoft Word runs and prints a document" or "Visual Studio builds the executable" or "IIS serves the HTML document". Our conversations reinforce the belief that the running program is in control.

And here is where the mobile operating systems come in. Android and iOS have very clear roles for application programs, and those roles are subservient to the operating system. A program does not run in the usual sense, making requests of the operating system. Instead, the app (and I will use the term "app" to indicate that the program is different from an application) is activated by the operating system when needed. That activation sees the operating system instruct the app to perform a task and then return control to the operating system.

Mobile operating systems turn the paradigm of the "program in control" inside-out. Instead of the program making requests of the operating system, the operating system makes requests of the program. The operating system is in control, not the program.
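A toy sketch of this inversion, with lifecycle names that echo mobile conventions but are invented for the illustration:

```python
# The "operating system" owns the event loop and activates the app through
# callbacks; the app never runs on its own initiative. A purely illustrative
# sketch -- not any real mobile API.
class App:
    def __init__(self):
        self.log = []
    def on_create(self):  self.log.append("create")
    def on_resume(self):  self.log.append("resume")
    def on_pause(self):   self.log.append("pause")

def operating_system(app, events):
    """The OS decides when the app runs; the app only responds."""
    app.on_create()                    # the OS activates the app
    for event in events:
        if event == "foreground":
            app.on_resume()
        elif event == "background":
            app.on_pause()
    return app.log
```

Notice there is no `main()` in the app and no request loop: control flows from the operating system into the app, not the other way around.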

This view is very different from our traditional view, yet it is an accurate one. Apps are not in control. Applications are not in control -- and have not been for many years.

Thursday, January 8, 2015

Hardwiring the operating system

I tend to think of computers as consisting of four conceptual parts: hardware, operating system, application programs, and my data.

I know that computers are complex objects, and each of these four components has lots of subcomponents. For example, the hardware is a collection of processor, memory, video card, hard drive, ports to external devices, and "glue" circuitry to connect everything. (And even that is omitting some details.)

These top-level divisions, while perhaps not detailed, are useful. They allow me to separate the concerns of a computer. I can think about my data without worrying about the operating system. I can consider application programs without bothering with hardware.

It wasn't always this way. Oh, it was for personal computers, even those from the pre-IBM PC days. Hardware like the Altair was sold as a computing box with no operating system or software. Gary Kildall at Digital Research created CP/M to run on the various hardware available and designed it with a dedicated unit for interfacing with hardware. (That dedicated unit was the Basic Input-Output System, or 'BIOS'.)

It was the very early days of computers that saw a close relationship between hardware, software, and data. Very early computers had no operating systems (operating systems themselves were designed to separate the application program from the hardware). Computers were specialized devices, tailored to the task.

IBM's System/360 is recognized as the first general-purpose computer line: a single family that could be programmed for different applications and used within an organization for multiple purposes. That line started the march toward separating hardware and software.

The divisions are not simply for my benefit. Many folks who work to design computers, build applications, and provide technology services find these divisions useful.

The division of computers into these four components allows for any one of the components to be swapped out, or moved to another computer. I can carry my documents and spreadsheets (data) from my PC to another one in the office. (I may 'carry' them by sending them across a network, but you get the idea.)

I can replace a spreadsheet application with a different spreadsheet application. Perhaps I replace Excel 2010 with Excel 2013. Or maybe change from Excel to another PC-based spreadsheet. The new spreadsheet software may or may not read my old data, so the interchangeability is not perfect. But again, you get the idea.

More than half a century later, we are still separating computers into hardware, operating system, application programs, and data.

And that may be changing.

I have several computing devices. I have a few PCs, including one laptop I use for my development work and e-mail. I have a smart phone, the third I have owned. I have a bunch of tablets.

For my PCs, I have installed different operating systems and changed them over time. The one Windows PC started with Windows 7. I upgraded it to Windows 8 and it now runs Windows 8.1. My Linux PCs have all had different releases of Ubuntu, and I expect to update them with the next version of Ubuntu. Not only do I get major versions, but I receive minor updates frequently.

But the phones and tablets are different. The phones (an HTC and two Samsung phones) have each run a single operating system since I took them out of the box. (I think one of them got an update.) One of my tablets is an old Viewsonic gTablet running Android 2.2. There is no update to a later version of Android -- unless I want to 'root' the tablet and install another variant of Android like CyanogenMod.

PCs get new versions of operating systems (and updates to existing versions). Tablets and phones get updates for applications, but not for operating systems. At least nowhere near as frequently as PCs.

And I have never considered (seriously) changing the operating system on a phone or tablet.

Part of this change is due, I believe, to the change in administration. We who own PCs administer the PC and decide when to update software. But we who think we own phones and tablets do *not* administer the tablet. We do not decide when to update applications or operating systems. (Yes, there are options to disable or defer updates, in Android and iOS.)

It is the equipment supplier, or the cell service provider, who decides to update operating systems on these devices. And they have less incentive to update the operating system than we do. (I suspect updates to operating systems generate a lot of calls from customers, either because they are confused or the update broke some functionality.)

So I see the move to smart phones and tablets, and its corresponding shift of administration from user to provider, as a step toward synchronizing hardware and operating system. And once hardware and operating system are synchronized, they are not two items but one. We may, in the future, see operating systems baked into devices with no (or limited) ways to update them. Operating systems may be part of the device, burned into ROM.

Monday, May 19, 2014

The shift to cloud is bigger than we think

We've been using operating systems for decades. While they have changed over the years, they have offered a consistent set of features: time-slicing of the processor, memory allocation and management, device control, file systems, and interrupt handling.

Our programs ran "under" (or "on top of") an operating system. Our programs were also fussy -- they would run on one operating system and only that operating system. (I'm ignoring the various emulators that have come and gone over time.)

The operating system was the "target", it was the "core", it was the sun around which our programs orbited.

So it is rather interesting that the shift to cloud computing is also a shift away from operating systems.

Not that cloud computing is doing away with operating systems. Cloud computing coordinates the activities of multiple, usually virtualized, systems, and those systems run operating systems. What changes in cloud computing is the programming target.

Instead of a single computer, a cloud system is composed of multiple systems: web servers, database servers, and message queues, typically. While those servers and queues must run on computers (with operating systems), we don't care about them. We don't insist that they run any specific operating system (or even use a specific processor). We care only that they provide the necessary services.

In cloud computing, the notion of "operating system" fades into the infrastructure.
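From the programmer's side, the code reflects this: it assumes only a service interface. A sketch, with an invented worker standing in for real processing:

```python
import queue

# A toy illustration of programming to a service rather than an operating
# system: this worker assumes only that its input behaves like a queue
# (get() returns messages; None means "nothing more"). Whether the real
# queue runs on Windows, Linux, or a managed cloud service is invisible here.
def drain(q):
    """Process messages from any queue-like service; return the results."""
    results = []
    while True:
        message = q.get()
        if message is None:               # sentinel: the service is done
            return results
        results.append(message.upper())   # stand-in for real processing
```

The worker's contract is with the service's behavior, not with any operating system -- which is exactly the shift in programming target described above.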

As cloud programmers, we don't care if our web server is running Windows. Nor do we care if it is running Linux. (The system administrators do care, but I am taking a programmer-centric view.) We don't care which operating system manages our message queues.

The level of abstraction for programmers has moved from operating system to web services.

That is a significant change. It means that programmers can focus on a higher level of work.

Hardware-tuned programming languages like C and C++ will become less important. Not completely forgotten, but used only by the specialists. Languages such as Python, Ruby, and Java will be popular.

Operating systems will be less important. They will be ignored by the application level programmers. The system architects and sysadmins, who design and maintain the cloud infrastructure, will care a lot about operating systems. But they will be a minority.

The change to services is perhaps not surprising. We long ago shifted away from processor-specific code, burying that work in our compilers. COBOL and FORTRAN, among the earliest high-level languages, were designed to run on different processors. Microsoft insulated us from the Windows API with MFC and later the .NET framework. Java separated us from the processor with its virtual machine. Now we take the next step and bury the operating system inside of web services.

Operating systems won't go away. But they will become less visible, less important in conversations and strategic plans. They will be more of a commodity and less of a strategic advantage.

Tuesday, April 22, 2014

We no longer think about operating systems

Windows XP remains popular, despite its age, its limitations, and its lack of support from Microsoft.

The desire to keep Windows XP shows that users want stable, reliable operating systems that they can install and then ignore. Well, perhaps not ignore, but at least not think about.

Things were not always this way. Early in the age of Windows, corporations, individuals, hobbyists, and programmers all looked forward to new versions of Microsoft's operating system. Windows 3.1 (in its Workgroups edition) was desired for its networking capabilities; Windows 95 for its user interface (in contrast to Windows 8); and Windows NT for its security. Windows 2000 brought all of those features together, and was eagerly adopted.

I think that the lesson of Windows 8 (and Windows Vista, and Windows 7) is this: We no longer care about the operating system.

In the old days, operating systems were important -- much more than today. Certain applications would run on only certain operating systems; pick the wrong operating system and you could not run your application. Not running your application meant that you could not get your work done, or deliver for your client.

Today, most applications run on most operating systems. Yes, most Microsoft products run only on Windows, but other products run on Windows, MacOS, and Linux. Moreover, web apps run in browsers, and most web apps run in the popular browsers (Firefox, Chrome, IE, and Safari) and care nothing about the operating system.

Applications are not tied so closely to operating systems as they were.

The mobile world has made operating systems commodities, with equivalent apps available on iOS and Android. In the mobile world, very few people care about the operating system.

With less dependence on the operating system, we tend to think of other things. We still think of performance -- although modern processors are fast enough for most tasks and cloud computing can provide computing power for large tasks.

Today we tend to think of portability (an app for my phone) and network connectivity (coverage by mobile service provider).

The operating system, for most people, is a means to an end but it is not the end. We think of it as we think of electricity, or of sidewalks: there and ready for us to use, but nothing distinguishing about them. They are becoming part of "the infrastructure", that part of our world that we use without thinking about it.

To be sure, there are some folks who do care about operating systems. The system designers, to start. And I'm sure that Microsoft's product teams care about the features in Windows (as do Apple's product designers care about features in MacOS and iOS). Hobbyists and tinkerers enjoy exploring new versions of operating systems. Support teams for large organizations, security analysts, and the "black hat" hackers who look for vulnerabilities -- they all care about operating systems.

But walk down the street and ask individuals at random, and most will answer that they don't care. Some may not even know which operating systems their devices use!

We've moved on to other things.

Tuesday, November 13, 2012

Which slice of the market can you afford to ignore?

Things are not the same in computer-land. The nice, almost-uniform world of Windows on every desktop was a simple place in which to live. With one set of development tools, one could build an application that everyone could run.

Everyone except for those Apple folks, but they had less than five percent of the market, and one could afford to ignore them. In fact, the economics dictated that one did ignore them -- the cost of developing a Mac-specific version of the application was larger than the expected revenue.

Those were simple days.

Today is different. Instead of a single operating system we have several. And instead of a single dominant version of an operating system, we have multiple.

Windows still rules the desktop -- but in multiple versions. Windows 8 may be the "brand new thing", yet most people have either Windows 7 or Windows XP. And a few have Windows Vista!

Outside of Windows, Apple's OSX has a strong showing. (Linux has a minor presence, and can probably be safely ignored.)

The browser world is fragmented among Microsoft's Internet Explorer, Google's Chrome, Apple's Safari, and Mozilla's Firefox.

Apple has become powerful with the mobile phones and dominant with tablets. The iOS operating system has a major market share, and one cannot easily ignore it. But there are different versions of iOS. Which ones should be supported and which ones can be ignored?

Of course, Google's Android has no small market share either. And Android exists in multiple versions. (Although most Android users want free apps, so perhaps it is possible to ignore them.)

Don't forget the Kindle and Nook e-reader/tablets!

None of these markets are completely dominant. Yet none are small. You cannot build one app that runs on all of them. Yet building multiple apps is expensive. (Lots of tools to buy, lots of techniques to learn, and lots of tests to run!)

What to do?

My suggestions:

  • Include as many markets as possible
  • Keep the client part of your app small
  • Design your application with processing on the server, not the client

Multiple markets give you more exposure. They also force you to keep your application somewhat platform-agnostic, which means that you are not tied to a platform. (Being tied to a platform is okay until the platform sinks, in which case you sink with it.)

Keeping the client small forces your application toward a minimal user interface and a detached back end.

Pushing processing to the server insulates you from changes to the client (or the GUI, in 1990s-speak). It also reduces the development and testing effort for your apps by centralizing the processing.
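A minimal sketch of that split, with invented names, showing the logic centralized on the server side and a thin client that only gathers input and formats the reply:

```python
# A toy thin-client/server split: all business logic lives in one place
# (server_compute), and the client stays small. In a real system the
# "send" step would be a network call; here it is a plain function call.
def server_compute(order):
    """The 'back end': compute an order total. All logic centralized here."""
    return sum(price * qty for price, qty in order)

def thin_client(order, send=server_compute):
    """A minimal client: gather input, send it, format the reply."""
    total = send(order)
    return f"Total: {total:.2f}"
```

Porting this app to a new platform means rewriting only `thin_client`; the processing, and its tests, stay where they are.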

This technique has no surprises, perhaps. But then, it also requires no magic.

After all, which market segment can you afford to ignore?

Thursday, September 13, 2012

The ultimate desktop OS

The phrase "ultimate desktop OS" is inspiring and attention-getting. While we might think that the "ultimate" desktop operating system is an unreachable dream, it is possible that it can exist, that it does exist, and that we have seen it.

That ultimate desktop operating system may be, in fact, Windows 7.

It is quite possible that Windows 7 is the peak of desktop operating systems. Its successor, Windows 8, is geared for tablets, not desktops. (And now you see why I have been carefully using the phrase "desktop operating system".)

Some might argue that it is not Windows 7 that is the "bestest" operating system for desktops, that the award for "best desktop operating system" should go to Windows XP, or perhaps Ubuntu Linux 10.04. These are worthy contenders for the title.

I won't quibble about the selection.

Instead, I will observe that desktop PCs have peaked, that they have reached their potential, and the future belongs to another device. (In my mind, that device is the tablet.)

Should you dispute this idea, let me ask you this: If you were to build a new app, something from scratch (not a re-hash of e-mail or word processing), would you build it for the desktop or for the tablet? I would build it for the tablet, and I think a majority of developers would agree.

And that is why I say that desktop operating systems have peaked. The future belongs to the tablet. (And the cloud, for back-end processing.)

If tablets are the future -- and I believe that they are -- then it really doesn't matter that Microsoft releases a new version of Windows for desktops. (Who gets excited when IBM releases a new version of MVS?) Yes, some folks will welcome the new version of Windows, but they will be a minority.

Instead of new versions of Windows, we will be looking for new versions of iOS and Android.