Monday, October 30, 2023
Apple M3
Tuesday, May 2, 2023
The long life of the hard disk
In 1983, IBM introduced the IBM PC XT -- an IBM PC with a built-in hard disk. While hard disks had been around for years, the IBM PC XT made them visible to the vast PC market. Hard disks were expensive, so there was a lot of advertising, and also a lot of articles about the benefits of hard disks.
Those benefits were: faster operations (booting PC-DOS, loading programs, reading and writing files), more security (you can't lose a hard disk the way you can misplace a floppy), and more reliability than floppy disks.
The hard disk didn't kill floppy disks; they remained popular for some time. Apple dropped the floppy drive when it introduced the iMac G3 (in 1998), but despite Apple's move, floppies remained in common use for years.
Floppy disks did gradually lose market share, and in 2010 Sony stopped manufacturing floppy disks. But hard disks remained a staple of computing.
Today, in 2023, the articles are now about replacing hard disks with solid-state disks. The benefits? Faster boot times, faster program loading times, faster reading and writing. (Sound familiar?) Reliability isn't an issue, nor is the possibility of losing media.
Apple again leads the market in moving from older storage technology. Its entire product line (from iPhones and iPads to MacBooks and iMacs) uses solid-state storage. Microsoft is moving in that direction too, pressuring OEMs and individuals to configure PCs to boot Windows from an internal SSD rather than a hard disk. It won't be long before hard disks are dropped completely and not even manufactured.
But consider: the hard disk (with various interfaces) was the workhorse of storage for PCs from 1983 to 2023 -- forty years.
Floppy disks (in the PC world) were used from 1977 to 2010, somewhat less than forty years. But they were used prior to PCs, so maybe their span was also forty years.
Does that mean that SSDs will be used for forty years? We've had them since 1978 (if you count the very early versions), but they moved into the mainstream of computing in 2017 with Intel's Optane products. Forty years after 2017 puts us in 2057. But that would be the end of SSDs -- their replacement should arrive earlier than that, possibly fifteen years earlier.
Friday, May 27, 2022
The promise of Windows
Windows made a promise: to run on various hardware, and to allow software to run on different hardware platforms. This was a welcome promise, especially for those of us who liked computers other than the IBM PC.
At the time Windows was introduced, the IBM PC design was popular, but not universal. Some manufacturers had their own designs for PCs, different from the IBM PC. Those other PCs ran some software that was designed for IBM PCs, but not all software. The Victor 9000 and the Zenith Z-100 were PCs that saw modest popularity, running MS-DOS but with different specifications for keyboards, video, and input-output ports.
Some software was released in multiple versions, or included configuration programs, to match the different hardware. Lotus 1-2-3 had packages specific to the Z-100; WordStar came with a setup program to define screen and keyboard functions.
Buying hardware and software was a big effort. One had to ensure that the software ran on the hardware (or could be configured for it) and that the hardware supported the software.
Windows promised to simplify that effort. Windows would act as an intermediary, allowing any software (if it ran on Windows) to use any hardware (if Windows ran on it). Microsoft released Windows for different hardware platforms (including the Zenith Z-100). The implications were clear: Windows could "level the playing field" and make those other PCs (the ones not compatible with the IBM PC) useful and competitive.
That promise was not fulfilled. Windows ran on various computing hardware, but the buyers were trained to look for IBM PCs or compatibles, and they stayed with IBM PCs and compatibles. It didn't matter that Windows ran on different computers; people wanted IBM PCs, and they bought IBM PCs. The computers that were different were ignored and discontinued by their manufacturers.
And yet, Windows did keep its promise of separating software from hardware and allowing programs to run on different hardware. We can look at the history of Windows and see its growth over time, and the different hardware that it supported.
When USB was introduced, Windows supported it. (The implementation was rough at first, but Microsoft improved it.)
As displays and display adapters improved, Windows supported them. One can attach almost any display, and almost any display adapter, to a PC, and Windows can use them.
Printers and scanners have the same story. Windows supported lots of printers, from laserjets to inkjets to dot-matrix printers.
Much of this success is due to Microsoft and its clear specifications for adapters, displays, printers, and scanners. But those specifications allowed for growth and innovation.
Microsoft supported different processors, too. Windows ran on Intel's Itanium processors and DEC's Alpha processors. Even now, Microsoft supports ARM processors.
Windows did keep its promise, albeit in a way that we were not expecting.
Tuesday, March 22, 2022
Apple has a software problem
Apple has a problem. Specifically, Apple has a software problem. More specifically, Apple has a problem between its hardware and its software. The problem is that Apple's hardware is getting bigger and more powerful at a much faster rate than its software.
Why is better hardware a problem? By itself, better hardware isn't a problem. But hardware doesn't exist by itself -- we use hardware with software. Faster, more powerful hardware requires bigger, more capable software.
One might think that fast hardware is a good thing, regardless of software. Faster hardware runs software faster, right? And faster is always a good thing, right? Well, not always.
For software that operates on a large set of data, such that the user is waiting for the result, yes, faster hardware is better. But faster hardware is only better if the user is waiting. Software that operates in "batch mode" or with limited interaction with the user is improved with better hardware. But software that interacts with the user, software that must wait for the user to do something, isn't necessarily improved.
Consider the word processor, a venerable tool that has been with us since the introduction of the personal computer. Word processors spend most of their time waiting for the user to press a key. This was true even with computers from prior to the IBM PC. In the forty-odd years since, hardware has gotten much, much faster but word processors have not gotten much more complicated. (There was a significant increase in complexity when we shifted from DOS and its character-based display and printing to Windows and graphics-based display and printing, but very little otherwise.)
The user experience for word processors has changed little in that time. Faster hardware has not improved the experience. If we limited computers to word processors, there would be no need for a processor more powerful than the Intel 80386. By extension, if you needed a computer for word processing (and nothing else) today's bottom-of-the-line, cheap, minimal computer would be more than enough for you. There is no point in spending on a premium computer (or even a mediocre one) because the minimal computer can do the job adequately.
The same logic applies to spreadsheets. And e-mail. And web browsers. Computers have gotten faster more quickly than these programs (and their data) have gotten bigger.
The computers I use for my day-to-day and even unusual tasks are old PCs, ranging from five to twenty years in age. All of them are fast enough for what I need to do. An aged Dell Inspiron N5010 runs Windows 10 and lets me use Zoom and Teams to join virtual meetings. I could replace it with a modern laptop, but the experience would be the same! Why should I bother?
A premium computer is needed only for those tasks that perform complex operations on large sets of data. And this is where Apple fails to provide the tools to justify its powerful (and expensive) hardware.
Apple is focussed on hardware, and it does a terrific job of designing and manufacturing powerful computers. But software is another story. Apple develops applications and then seems to lose interest in them. It built Pages, Numbers, and Keynote, and has made precious few improvements -- other than recompiling for ARM processors, or adding support for things like AirPlay. It hasn't added features.
The same goes for applications such as GarageBand, iTunes, and FaceTime. Even Xcode.
Apple has even let utilities such as grep and sed in MacOS (excuse me, "mac os". Or is it "macos"?) age with no updates. The corresponding GNU utilities have been modified and improved in various ways, to the point that developers now recommend installing the GNU utilities on Apple computers.
Apple may be waiting for others to build the applications that will take advantage of the latest Mac computers. I'm not sure that many will want to do that.
Building applications to leverage the new Apple processors may seem a no-brainer. But there are a number of disincentives.
First, Apple may build their own version of the application, and compete with the vendor. Independent vendors may be reluctant to enter a market when Apple is a possible competitor.
Second, developing applications to take advantage of the M1 architecture requires a lot of time and effort. The application must be multithreaded -- single-threaded applications cannot fully leverage the multiple cores on the M1. Designing, coding, and testing such an application is a lot of work.
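That point about multithreading can be sketched in miniature, in Python rather than anything Apple-specific, with an invented workload. Only the version that spreads chunks of work across a pool of workers can occupy more than one core:

```python
# A CPU-bound task run serially versus spread across cores. In Python,
# threads contend for one interpreter lock, so the sketch uses a process
# pool; the principle (one core versus many) is the same. The workload
# is invented for illustration.
from concurrent.futures import ProcessPoolExecutor

def checksum(chunk):
    # Stand-in for real per-chunk work (filtering audio, encoding video, ...)
    total = 0
    for value in chunk:
        total = (total + value * value) % 1_000_003
    return total

def serial(chunks):
    # One core, no matter how many the machine has
    return [checksum(c) for c in chunks]

def parallel(chunks):
    # One worker process per core (by default), each taking chunks in turn
    with ProcessPoolExecutor() as pool:
        return list(pool.map(checksum, chunks))

if __name__ == "__main__":
    data = [list(range(i, i + 10_000)) for i in range(8)]
    assert serial(data) == parallel(data)  # same answers, different core usage
```

The answers are identical either way; the difference is only in how many cores do the work. Designing the split, keeping the chunks independent, and testing the result is exactly the "lot of work" described above.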
Third, the market is limited. Applications developed to take advantage of the M1 processor are, well, developed for the M1 processor. They won't run on Windows PCs. You can't cross-build to run on Windows PCs, because PC processors are much slower than the Apple processors. The set of potential customers is limited to those who have the upper-end Apple computers. That's a small market (compared to Windows) and the potential revenue may not cover the development costs.
That is an intimidating set of disincentives.
So if Apple isn't building applications to leverage the upper-end Macs, and third-party developers aren't building applications to leverage upper-end Macs, then ... no one is building them! The result is that there are no applications (or very few) that take advantage of the higher-end processors.
I expect sales of the M1-based Macs to be robust, for a short time. But as people realize that their experience has not improved (except, perhaps for "buttery smooth" graphics) they will hesitate for the next round of new Macs (and iPads, and iPhones). As customers weigh the benefits and costs of new hardware, some will decide that their current hardware is good enough. (Just as my 15-year-old PCs are good enough for my simple word processing and spreadsheet needs.) If Apple introduces newer systems with even faster processors, more people will look at the price tags and decide to wait before upgrading.
Apple is set up to learn an important lesson: High-performance hardware is not enough. One needs the software to offer solutions to customers. Apple must work on its software.
Thursday, October 28, 2021
System-on-chips for everyone!
Apple has demonstrated that the system-on-chip design (seen in their new MacBooks, iMacs, and Mac Minis) is popular.
What does system-on-chip design mean for other forms of computing? Will other manufacturers adopt that design?
An obvious market for system-on-chip design is Chromebooks. (If they are not using it already.) Many Chromebooks already use ARM processors (others use Intel) and moving the ARM-based Chromebooks to ARM-based system-on-chip design is fairly straightforward. Chromebooks also have a narrow design specification, controlled by Google, which makes a system-on-chip design feasible. Google limits the variation of Chromebooks, so it may be that the entire Chromebook market could be served with three (or possibly four) distinct designs.
Chromebooks would benefit from system-on-chip designs in two ways: lower cost and higher performance. One may think performance is unimportant to Chromebooks because Chromebooks are merely hosts for the Chrome browser, but that is not true. The Chrome browser (indeed, any modern browser) must do a lot, from rendering HTML to running JavaScript to playing audio and video. It must also handle keystrokes and focus, tasks normally associated with an operating system's window manager. In addition, browsers must now execute WebAssembly (WASM) for some applications. Browsers are complex critters.
Google also has their eyes on games, and improved performance will allow more Chromebooks to run advanced games.
I think we can safely assume that Chromebooks will move to system-on-chip designs.
What about Windows PCs? Will they change to system-on-chip designs? Here I think the answer is not so obvious.
Microsoft sets hardware specifications for Windows. If you want to build a PC that runs Windows, you have to conform to those specifications. It is quite possible that Microsoft will design their own system-on-chip for PCs and use them in Microsoft's own Surface tablets and laptops. It is possible that they will make the design available to other manufacturers (Dell, Lenovo, etc.). Such a move would make it easier to build PCs that conform to Microsoft's specifications.
A system-on-chip design would possibly split designs for PCs into two groups: system-on-chip in one group and traditional discrete components in the other. System-on-chip designs work poorly with expansion slots, so PCs that use such a design would probably have no expansion slots -- not even one for a GPU. But many folks want GPUs, so they will prefer traditional designs. We may see a split market for Windows PCs, with customizable PCs using discrete components and non-upgradable PCs (similar to Chromebooks and Macbooks) using system-on-chip designs.
Such a split has already occurred in the Windows PC market. Laptop PCs tend to have limited options for upgrades (if any). Small desktop PCs also have limited options. Large desktops are the computers that still have expansion slots; these are the computers that let the owner replace components such as RAM and storage.
I think system-on-chip designs are the way of the future for most of our computers (laptops, desktops, phones, etc.). I think we'll see better performance, lower cost, and improved reliability. It's a move in a good direction.
Sunday, October 3, 2021
Windows and Linux are not the same
We like to think that operating systems are commodities, that Windows performs just as well as mac OS, and they both perform as well as Linux. I'm not sure about mac OS, but I can think of one significant difference between Windows and Linux, and that difference may affect the lifespan of the hardware.
Specifically, the difference between Windows and Linux may affect the disk drive, when that drive is an SSD (solid-state disk). SSDs have a limited lifespan, measured in write operations. This matters because Windows and Linux show very different patterns of disk activity.
My experience is that Linux has minimal disk activity. Linux loads, creates a login session, does a few more things (I suspect that it runs 'apt' for an update) and then sits and waits. No disk activity.
Windows is quite different. It loads and creates a login session (just like Linux). But then it keeps doing things. Computers with disk activity lights show this activity. (Is Windows downloading updates from Microsoft servers? Checking for malware? I don't know. But it's doing something.) And it does this for at least 30 minutes.
That's before I log in to Windows, and before I launch any applications, or check my e-mail, or visit web sites.
After I log in, Windows does more. One can see the disk activity (on PCs that have status lights). When I check the CPU usage (as shown by Task Manager), I see lots of different tasks, many with vague names such as "Local Service".
Not all of this is caused by Microsoft. My work client has supplied a laptop that runs Splunk, McAfee, and a few other third-party applications (all installed by my client) and they wake up and do things every few minutes or so. All day long.
The immediate thought from this disk activity is: this cannot be good for SSDs. Each write operation (and, to a much lesser degree, each read) chips away at the lifespan of the SSD. (Old-style spinning hard disks are much less susceptible to this effect.)
The constant activity in Windows means that Windows will "consume" an SSD much quicker than Linux.
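The arithmetic is easy to sketch. The endurance rating (TBW, terabytes written) and the daily write volumes below are hypothetical, chosen only to show the calculation, not to measure Windows or Linux:

```python
# Back-of-the-envelope SSD wear. The endurance rating (TBW, terabytes
# written) and the daily write volumes are hypothetical, chosen only to
# show the arithmetic, not to measure Windows or Linux.
def years_until_worn(tbw_rating_tb, gb_written_per_day):
    total_gb = tbw_rating_tb * 1000
    days = total_gb / gb_written_per_day
    return days / 365

# A 500 GB drive rated for 300 TBW:
quiet_system = years_until_worn(300, 5)    # minimal background writes
busy_system = years_until_worn(300, 50)    # constant background writes
assert busy_system < quiet_system
```

Even the busy system lasts a long time with these made-up numbers, but the point stands: a tenfold difference in background writes is a tenfold difference in wear.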
I'm certain that Microsoft is aware of this issue. I'm guessing that there is not much that they can do about it. Windows was designed to run lots of tasks on start-up, and throughout the day. (Also, it's not really Microsoft's problem. The fact that Windows "burns out" SSDs means that people will replace the disks, or possibly replace the whole PC. People will view this problem as a problem of hardware, not a problem with Windows.)
I tend to keep computers for a long time. For computers that run Windows, I look for systems that use the older hard disks and not SSDs. That's my strategy. Let's see how it works!
Wednesday, February 19, 2020
A server that is not a PC
First, let's consider PCs. PCs have lots of parts, from processor to memory to storage, but the one thing that makes a PC a PC is the video. PCs use memory-mapped video. They dedicate a portion of memory to video display. (Today, the dedicated memory is a "window" into the much larger memory on the video card.)
Which is a waste, as servers do not display video. (Virtual machines on servers do display video, but it is all a game of re-assigned memory. If you attach a display to a server, it does not show the virtual desktop.)
Suppose we made a server that did not dedicate this memory to video. Suppose we created a new architecture for servers, an architecture that is exactly like the servers today, but with no memory reserved for video and no video card.
Such a change creates two challenges: installing an operating system and the requirements of the operating system.
First, we need a way to install an operating system (or a hypervisor that will run guest operating systems). Today, the process is simple: attach a keyboard and display to the server, plug in a bootable USB memory stick, and install the operating system. The boot ROM and the installer program both use the keyboard and display to communicate with the user.
In our new design, they cannot use a keyboard and display. (The keyboard would be possible, but the server has no video circuitry.)
My first thought was to use a terminal and attach it to a USB port. A terminal contains the circuitry for a keyboard and display; it has the video board. But no such devices exist nowadays (outside of museums and basements), and asking someone to manufacture them would be a big ask. I suppose one could use a tiny computer such as a Raspberry Pi, with a terminal emulator program. But that solution is merely a throwback to the pre-PC days.
A second idea is to change the server boot ROM. Instead of presenting messages on a video display, and accepting input from a keyboard, the server could run a small web server and accept requests from the network port. (A server is guaranteed to have a network port.)
The boot program could run a web server, just as network routers allow configuration with built-in web servers. When installing a new server, one could simply attach it to the network and then connect to it with a web browser.
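As a sketch of that scheme, with Python standing in for firmware and the settings invented for illustration, the boot program's configuration service might look like:

```python
# A sketch of a boot program's network configuration service, in the
# spirit of a home router's built-in web server. Python stands in for
# firmware; the settings and endpoint are invented for illustration.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

SETTINGS = {"hostname": "server-01", "install_target": "/dev/sda"}

class ConfigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with the current settings as JSON
        body = json.dumps(SETTINGS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

if __name__ == "__main__":
    # Port 0 asks the OS for any free port; a real boot ROM would use
    # a well-known port such as 80.
    server = HTTPServer(("127.0.0.1", 0), ConfigHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % server.server_address[1]
    print(urllib.request.urlopen(url).read().decode())
    server.shutdown()
```

A real implementation would also accept requests to change settings and then launch the installer; this sketch only reports the current settings, which is enough to show the router-style interaction.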
Which brings us to the next challenge: an operating system. (Or a hypervisor.)
Today, servers run PC operating systems. Hypervisors (such as Microsoft's Hyper-V) are nothing more than standard PC operating systems that have been tweaked to support guest operating systems. As such, they expect to find a video card.
Since our server does not have a video card, these hypervisors will not work properly (if at all). They will have to be tweaked again to run without a video card. (Which should be somewhat easy, as hypervisors do not use the video card for their normal operation.)
Guest operating systems may be standard, unmodified PC operating systems. They want to see a video card, but the hypervisor provides virtualized video cards, one for each instance of a guest operating system. The guest operating systems never see the real video card, and don't need to know that one is present -- or not.
What's the benefit? A simpler architecture. Servers don't need video cards, and in my opinion, shouldn't have them.
Which would, according to my "a PC must have memory-mapped video" rule, make servers different from PCs. Which I think we want.
The video-less server is an idea, and I suspect that it will remain an idea. Implementing it requires special hardware (a PC minus the video circuitry that it normally has), special software (an operating system that doesn't demand a video card), and a special boot ROM (one that serves its configuration over the network). As long as PCs are dominant, our servers will simply be PCs with no display attached.
But if the market changes, and PCs lose their dominance, then perhaps one day we will see servers without video cards.
Thursday, September 19, 2019
The PC Reverse Cambrian Explosion
Personal Computers have what I call a "PC Reverse Cambrian Explosion" or PC-RCE. It occurred in the mid-1980s, which some might consider to be half a billion years ago. In the PC-RCE, computers went from hundreds of different designs to one: the IBM PC compatible.
In the late 1970s and very early 1980s, there were lots of designs for small computers. These included the Apple II, the Radio Shack TRS-80, the Commodore PET and CBM machines, and others. There was a great diversity of hardware and software, including processors and operating systems. Some computers had floppy disks, although most did not. Many computers used cassette tape for storage, and some had neither cassette nor floppy disk. Some computers had built-in displays, and others required that you get your own terminal.
By the mid 1980s, that diversity was gone. The IBM PC was the winning design, and the market wanted that design and only that design. (Except for a few stubborn holdouts.)
One might think that the IBM PC caused the PC-RCE, but I think it was something else.
While the IBM PC was popular, other manufacturers could not simply start making compatible machines (or "clones" as they were later called). The hardware for the IBM PC was "open" in that the connectors and bus specification were documented, and this allowed manufacturers to make accessories for IBM PCs. But the software (the operating system and, importantly, the ROM BIOS) was not open. While the interfaces of both were documented, the code itself could not be copied without running afoul of copyright law.
Other computer manufacturers could not make IBM PC clones. Their choices were limited to 1) selling non-compatible PCs in a market that did not want them, or 2) going into another business.
Yet we now have many vendors of PCs. What happened?
The first part of the PC-RCE was the weakening of the non-IBM manufacturers. Most went out of business. (Apple survived, by offering compelling alternate designs and focussing on the education market.)
The second part was Microsoft's ability to sell MS-DOS to other manufacturers. It made custom versions for non-compatible hardware by Tandy, Victor, Zenith, and others. While "compatible with MS-DOS" wasn't the same as "compatible with the IBM PC", it allowed other manufacturers to use MS-DOS.
A near-empty market allowed upstart Compaq to introduce its Compaq Portable, the first system not made by IBM and yet compatible with the IBM PC. It showed that there was a way to build IBM PC "compatibles" legally and profitably. Compaq was successful because it offered a product not available from IBM (a portable computer) that was also compatible (it ran popular software) and used premium components and designs to justify a hefty price tag. (Several thousand dollars at the time.)
The final piece was the Phoenix BIOS. This was the technology that allowed other manufacturers to build compatible PCs at low prices. Compaq had built their own BIOS, making it compatible with the API specified in IBM's documents, but it was an expensive investment. The Phoenix BIOS was available to all manufacturers, which let Phoenix amortize the cost over a larger number of PCs, for a lower per-unit cost.
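The amortization argument is easy to see with a sketch. The development cost and unit volumes below are invented; real figures would differ:

```python
# A fixed development cost spread over more units shrinks quickly.
# The cost and volumes are invented; real figures would differ.
def per_unit_cost(development_cost, units_shipped):
    return development_cost / units_shipped

one_vendor = per_unit_cost(1_000_000, 50_000)     # a single maker's volume
many_vendors = per_unit_cost(1_000_000, 500_000)  # Phoenix selling to everyone
assert many_vendors < one_vendor
```

A vendor like Phoenix, shipping its BIOS to every manufacturer at once, could divide its fixed cost across far more machines than any single clone maker could.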
The market maintained demand for the IBM PC design, but it wasn't fussy about the manufacturer. Customers bought "IBM compatible PCs" with delight. (Especially if the price was lower than IBM's.)
Those events (weakened suppliers, an operating system, a legal path forward, and the technology to execute it) made the PC the one and only design, and killed off the remaining designs. (Again, except for Apple. And Apple came close to extinction on several occasions.)
Now, this is all nice history, but what does it have to do with us folks living today?
The PC-RCE gave us a single design for PCs. That design has evolved over the decades, and just about every piece of the original IBM PC has mutated into something else, but the marketed PCs have remained uniform. At first, IBM specified the design, with the IBM PC, the IBM PC XT, and the IBM PC AT. Later, Microsoft specified the design with its "platform specification" for Windows. Microsoft could do this, due to its dominance of the market for operating systems and office software.
Today, the PC design is governed by various committees and standards organizations. They specify the design for things like the BIOS (or its replacement the UEFI), the power supply, and connectors for accessories. Individual companies have sway; Intel designs processors and support circuitry used in all PCs. Together, these organizations provide a single design which allows for modest variation among manufacturers.
That uniformity is starting to fracture.
Apple's computers joined the PC design in the mid-2000s. The "white MacBook" with an Intel processor was a PC design -- so much so that Windows and Linux can run on it. Yet today, Apple is moving their Macs and MacBooks in a direction different from the mainstream market. Apple-designed chips control certain parts of their computers, and these chips are not provided to other manufacturers. (Apple's iPhones and iPads are unique designs, with no connection to the PC design.)
Google is designing its Chromebooks and slowly moving them away from the "standard" PC design.
Microsoft is building Surface tablets and laptops with its proprietary designs, close to PCs yet not quite identical.
We are approaching a time when we won't think of PCs as completely interchangeable. Instead, we will think of them in terms of manufacturers: Apple PCs, Microsoft PCs, Google PCs, etc. There will still be a mainstream design; Dell and Lenovo and HP want to sell PCs.
The "design your own PC" game is for serious players. It requires a significant investment, not only in hardware design but also in software. Apple has been playing that game all along. Microsoft and Google are big enough that they can join. Other companies may get involved, using Linux (or a BSD, as Apple did) as a base for their operating systems.
The market for PCs is fragmenting. In the future, I see a modest number of designs, not the thousands that we had in 1980. The designs will be similar but not identical, and more importantly, not compatible -- at least for hardware.
A future with multiple hardware platforms will be a very different place. We have enjoyed a single (evolving) platform for the past four decades. A world with multiple, incompatible platforms will be a new experience for many. It will affect not only hardware designers, but everyone involved with PCs, from programmers to network administrators to purchasing agents. Software may follow the fragmentation, and we could see applications that run on one platform and not others.
A fragmented market will hold challenges. Once committed to one platform, it is hard to move to a different platform. (Just as it is difficult today to move from one operating system to another.) Instead of just the operating system, one will have to change the hardware, operating system, and possibly applications.
It may also be a challenge for Linux and open source software. They have used the common platform as a means of expansion. Will we see specific versions of Linux for specific platforms? Will Linux avoid some platforms as "too difficult" to implement? (The Apple MacBooks, with their extra chips for security, may be a challenge for Linux.)
The fragmentation I describe is a possible future -- it's not here today. I wouldn't panic, but I wouldn't ignore it, either. Keep buying PCs, but keep your eyes on them.
Tuesday, March 6, 2018
My Technology is Old
For most of my work, I have a ten-year-old generic tower PC, a non-touch (and non-glare) 22-inch display, and a genuine IBM Model M keyboard.
The keyboard (a Model M13, to be precise) is the old-style "clicky" keyboard with a built-in TrackPoint nub that emulates a mouse. It is, by far, the most comfortable keyboard I have used. It's also durable -- at least thirty years old and still going, even after lots of pounding. I love the shape of the keys, the long key travel (almost 4 mm), and the loud clicky sound on each keypress. (Officemates are not so fond of the last.)
For other work, I use a relatively recent HP laptop. It also has a non-glare screen. The keyboard is better than most laptop keyboards these days, with some travel and a fairly standard layout.
I prefer non-glare displays to the high-gloss touch displays. The high-gloss displays are quite good as mirrors, and reflect everything, especially lamps and windows. The reflections are distracting; non-glare displays prevent such disturbances.
I use an old HP 5200C flatbed scanner. Windows no longer recognizes it as a device. Fortunately, Linux does recognize it and lets me scan documents without problems.
A third workstation is an Apple PowerBook G4. The PowerBook is the predecessor to the MacBook. It has a PowerPC processor, perhaps 1 GB of RAM (I haven't checked in a while), and a 40 GB disk. As a laptop, it is quite heavy, weighing more than 5 pounds. Some of the weight is in the battery, but a lot is in the case (aluminum), the display, and the circuits and components. The battery still works, and provides several hours of power. It holds up better than my old MacBook, which has a battery that lasts for less than two hours. The PowerBook also has a nicer keyboard, with individually shaped keys as opposed to the MacBook's flat keycaps.
Why do I use such old hardware? The answer is easy: the old hardware works, and in some ways is better than new hardware.
I prefer the sculpted keys of the IBM Model M keyboard and the PowerBook G4 keyboard. Modern systems have flat, non-sculpted keys. They look nice, but I buy keyboards for my fingers, not my eyes.
I prefer the non-glare screens. Modern systems provide touchscreens. I don't need to touch my displays; my work is with older, non-touch interfaces. A touchscreen is unnecessary, and it brings the distracting high-glare finish with it. I buy displays for my eyes, not my fingers.
Which is not to say that my old hardware is without problems. The PowerBook is so old that modern Linux distros can run only in text mode. This is not a problem, as I have several projects which live in the text world. (But at some point soon, Linux distros will drop support for the PowerPC architecture, and then I will be stuck.)
Could I replace all of this old hardware with shiny new hardware? Of course. Would the new hardware run more reliably? Probably (although the old hardware is fairly reliable.) But those are minor points. The main question is: Would the new hardware help me be more productive?
After careful consideration, I have to admit that, for me and my work, new hardware would *not* improve my productivity. It would not make me type faster, or write better software, or think more clearly.
So, for me, new hardware can wait. The old stuff is doing the job.
Sunday, September 13, 2015
We program to the interface, thanks to Microsoft
We don't worry about compatibility.
It wasn't always this way.
When IBM delivered the first PC, it provided three levels of interfaces: hardware, BIOS, and DOS. Each level was capable of some operations, but the hardware level was the fastest (and some might argue the most versatile).
Early third-party applications for the IBM PC were programmed against the hardware level. This was an acceptable practice, as the IBM PC was considered the standard for computing, against which all other computers were measured. Computers from other vendors used different hardware and different configurations and were thus not compatible. The result was that the third-party applications would run on IBM PCs and IBM PCs only, not on systems from other vendors.
Those early applications encountered difficulties as IBM introduced new models. The IBM PC XT was very close to the original PC, and just about everything ran -- except for a few programs that made assumptions about the amount of memory in the PC. The IBM PC AT used a different keyboard and a different floppy disk drive, and some software (especially those that used copy-protection schemes) would not run or sometimes even install. The EGA graphics adapter was different from the original CGA graphics adapter, and some programs failed to work with it.
The common factor in the failures of these programs was their design: they all communicated directly with the hardware and made assumptions about it. When the hardware changed, their assumptions were no longer valid.
We (the IT industry) eventually learned to write to the API, the high-level interface, and not address hardware directly. This effort was due to Microsoft, not IBM.
It was Microsoft that introduced Windows and won the hearts of programmers and business managers. IBM, with its PS/2 line of computers and OS/2 operating system, struggled to maintain control of the enterprise market, but failed. I tend to think that IBM's dual role in supplying hardware and software helped in that demise.
Microsoft supplied only software, and it sold almost no hardware. (It did provide things such as the Microsoft Mouse and the Microsoft Keyboard, but these saw modest popularity and never became standards.) With its focus on software, Microsoft made its operating system run on various hardware platforms (including processors such as DEC's Alpha and Intel's Itanium) and Microsoft focused on drivers to provide functionality. Indeed, one of the advantages of Windows was that application programmers could avoid the effort of supporting multiple printers and multiple graphics cards. Programs would communicate with Windows and Windows would handle the low-level work of communicating with hardware. Application programmers could focus on the business problem.
The initial concept of Windows was the first step in moving from hardware to an API.
The second step was building a robust API, one that could perform the work necessary. Many applications on PCs and DOS did not use the DOS interface because it was limited, compared to the BIOS and hardware interfaces. Microsoft provided capable interfaces in Windows.
The third step was the evolution of Windows. Windows 3 evolved into Windows 3.1 and Windows for Workgroups (which included networking), Windows 95 (which included a new visual interface), and Windows 98 (which included support for USB devices). Microsoft also developed Windows NT (which provided pre-emptive multitasking) and later Windows 2000, and Windows XP.
With each generation of Windows, less and less of the hardware (and DOS) was available to the application program. Programs had to move to the Windows API (or a Microsoft-supplied framework such as MFC or .NET) to keep functioning.
Through all of these changes, Microsoft provided specifications to hardware vendors who used those specifications to build driver programs for their devices. This ensured a large market of hardware, ranging from computers to disk drives to printers and more.
We in the programming world (well, the Microsoft programming world) think nothing of "writing to the interface". We don't look to the hardware. When faced with a new device, our first reaction is to search for device drivers. This behavior works well for us. The early marketing materials for Windows were correct: application programmers are better suited to solving business problems than working with low-level hardware details. (We *can* deal with low-level hardware, mind you. It's not about ability. It's about efficiency.)
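The idea of "writing to the interface" can be sketched in a few lines. This is an illustrative example, not any actual Windows API: the application function is written once against an abstract driver interface, and each device supplies its own driver.

```python
from abc import ABC, abstractmethod

class PrinterDriver(ABC):
    """The 'interface': application code calls these methods and
    never touches the hardware directly. Each vendor ships a driver."""
    @abstractmethod
    def print_text(self, text: str) -> str:
        ...

class LaserDriver(PrinterDriver):
    def print_text(self, text: str) -> str:
        # A real driver would emit device-specific commands here.
        return f"laser output: {text}"

class DotMatrixDriver(PrinterDriver):
    def print_text(self, text: str) -> str:
        return f"dot-matrix output: {text}"

def print_report(driver: PrinterDriver, text: str) -> str:
    """Application code: written once, against the interface only."""
    return driver.print_text(text)

# The same application code works with any driver.
print(print_report(LaserDriver(), "quarterly report"))
print(print_report(DotMatrixDriver(), "quarterly report"))
```

Supporting a new printer means writing one new driver class; the application itself does not change. That is the efficiency gain the Windows marketing materials promised.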
Years from now, historians of IT may recognize that it was Microsoft's efforts that led programmers away from hardware and toward interfaces.
Tuesday, July 2, 2013
Limits to growth in mobile apps
This increase in complexity was possible due to improving technology (powerful processors, more memory, higher screen resolutions, faster network connections) and necessary due to competition (in some markets, the product with the larger checklist of features wins).
The pattern of increasing the complexity of an application has been with us since the very first PC applications (which I consider Wordstar, Microsoft BASIC, and Visicalc). I suspect that the pattern existed in the minicomputer and mainframe markets too.
The early era of microcomputers (before the IBM PC in 1981) had hard limits on resources. Processors were only so fast. You had only so much memory. Hobbyist computers such as the COSMAC ELF had tiny processors and 256 bytes of memory. (Notice that the word 'bytes' has no 'kilo' or 'mega' in front.) The complexity of applications was limited by hardware and the cleverness of programmers.
PC technology changed the rules. Hardware became powerful enough to handle complex tasks. Moreover, the design of the IBM PC was expandable, and manufacturers provided bigger and more capable machines. The limit to application growth was not the hardware but the capacity of the development team. Programmers no longer needed to be clever; they needed to work in teams and write easily-understood code.
With hardware no longer the constraint, the bounding factor for software was the capacity of the development team. That bounding factor eventually changed to the funding for the development team.
With funding as the limiting factor, a company could decide the level of complexity for software. A company could build an application as complex as it wanted. Yes, a company could specify a project more complex than the hardware would allow, but in general companies lived within the "envelope" of technology. That envelope was moving upwards, thanks to the PC's expandable design.
Mobile technology changes the rules again, and requires a new view towards complexity. Mobile phones are getting more powerful, but slowly, and their screens are at a practical maximum. A smart phone screen can be at most about 5 inches, and that is not going to change. (A larger screen pushes the device into tablet territory.)
Tablets also are getting more powerful, but also slowly, and their screens are also at a practical maximum. A tablet screen can be as large as 10 inches, and that is not going to change. (A larger screen makes for an unwieldy tablet.)
These practical maximums place limits on the complexity of the user interface. Those limits enforce limits on the complexity of the app.
More significantly, the limits in mobile apps come from hardware, not funding. A company cannot assume that it can expand an app forever. The envelope of technology is not moving upwards, and cannot move upwards: the limits are caused by human physiology.
All of this means that the normal process of application development must change. The old pattern of a "1.0 release" with limited functionality and subsequent releases with additional functionality (on a slope that continues upwards) cannot work for mobile apps. Once an app has a certain level of complexity, the process must change. New features are possible only at the expense of old features.
We are back in the situation of the early microcomputers: hard limits on application complexity. Like that earlier era, we will need cleverness to work within the limits. Unlike that earlier era, the cleverness is not in memory allocation or algorithmic sophistication, but in the design of user interfaces. We need cleverness in the user experience, and that will be the limiting factor for mobile apps.
Saturday, March 2, 2013
Any keyboard you like
Being of a certain age, my first experience with keyboards was not with a computer but with a typewriter. It was my parents' portable, manual typewriter; I have forgotten the brand. It was hard to use and it smelled of ink and machine oil. Yet it was a fun introduction to the keyboard.
Typewriters were fun, and computers were more fun. The keyboards were more modern, and had more keys (some of which made little sense to me).
I have used several keyboards, and the most enjoyable were the DEC keyboards. DEC keyboards were sleek and sophisticated compared to the other keyboards (Teletype ASR-33, Lear-Siegler ADM-3A, and IBM 3270). I also enjoyed the Zenith Z-100 keyboard (sculpted like an IBM Selectric typewriter) and the IBM Model M keyboard.
Typing on a good keyboard is a joy. Typing on a mediocre keyboard is not.
Sadly, today's PCs cannot use these venerable keyboards. Desktop PCs want to talk to a keyboard through USB, and tablets want Bluetooth.
Yet all is not lost. Virtual keyboards may help.
Not the on-screen virtual keyboards of smart phones and tablets, but a different form of virtual keyboard. A keyboard that is drawn (usually with lasers) on a surface, with an accompanying scanner to detect "keypresses".
It strikes me that these keyboards can be used on a variety of surfaces. I'm hoping that some will be programmable (or at least configurable) so that I can create my own layout. (For example, I want the "Control" key on the ASDF home row.) I also have preferences for the arrow, HOME, and END keys. A virtual keyboard should allow for re-positioning of the keys.
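A programmable layout could be as simple as a remapping table between the key position the scanner detects and the key the user wants emitted. The sketch below is purely hypothetical; no actual virtual-keyboard product or API is being described.

```python
# Hypothetical remapping table for a programmable virtual keyboard:
# each detected key maps to the key the user actually wants.
CUSTOM_LAYOUT = {
    "CapsLock": "Control",   # put Control on the ASDF home row
    "Control": "CapsLock",   # swap the two keys
    "Insert": "Home",        # personal preference for cursor keys
}

def translate_keypress(detected_key: str) -> str:
    """Return the remapped key, or the key itself if it is not remapped."""
    return CUSTOM_LAYOUT.get(detected_key, detected_key)

print(translate_keypress("CapsLock"))  # emitted as Control
print(translate_keypress("A"))         # unmapped keys pass through
```

The point is that the layout lives in data, not in the hardware: editing the table is all it takes to "move" a key.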
Re-positioning the keys is nice, but it doesn't let me use the old keyboard. The surface is still a flat, unyielding surface with no feedback.
But do scanners really care about a flat surface? Can they be used on a lumpy surface? (I'm sure that some are quite fussy, and require a flat surface. But perhaps some are less fussy.)
If a virtual keyboard can be used on a lumpy surface, and I can re-program the key layouts... then perhaps I can configure the virtual keyboard to emulate an old-style keyboard (say, the DEC VT-52). And perhaps I can rest it on that lumpy surface... say, the surface of a real DEC VT-52 keyboard.
Such an arrangement would let me use any keyboard I wanted with my computer. The virtual keyboard would do the work, and wouldn't care that I happened to rest my fingers on an old, outdated keyboard.
I would like that arrangement. It would give me the layout and feel of the keyboard of my choice. I wouldn't have to compromise with the current set of keyboards. All I need is a programmable virtual keyboard and a real keyboard that I enjoy.
Now, where is that old Zenith Z-100?