
Thursday, April 2, 2015

Mobile operating systems break the illusion of control

The mobile operating systems iOS and Android are qualitatively different from previous operating systems like Windows, MacOS, and Linux. They break the illusion that an application has control; this illusion has been with us since the first operating systems. To understand the illusion, and how mobile operating systems are different, we must understand the origins of operating systems.

At the dawn of computing, hardware emerged from electro-mechanical relays, but there was no software. The first computers were programmed with wires connecting components to other components. A computer was a custom-purpose device, designed and built to perform one calculation. Shortly after, the "programming wires" were isolated to removable boards, which allowed a "program" to be removed from the computer and stored for later use.

The first programs (in the sense we know them) were sequences of numerical values that could be loaded into a computer's memory. They were a softer variant of the wired plug-board in the earlier computers. Building the sequence of numerical values was tedious; one had to understand not only the problem to be solved but also the processor instruction set. These sequences are now called "machine language".

Programmers, being what they are, developed programs to ease the chore of creating the sequences of machine language values. These programs were the first assemblers; they converted symbols into executable sequences. A programmer could work with the much easier-to-understand symbolic code and convert the symbols to a program when his changes were done.
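A toy sketch can make the idea concrete. This is not any real instruction set; the mnemonics and opcode numbers below are invented for illustration, but the translation step is the essence of what those first assemblers did.

```python
# A toy "assembler": convert symbolic mnemonics into numeric machine
# words, relieving the programmer of hand-encoding the sequences.
# The opcode table is invented for illustration.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0x00}

def assemble(source):
    """Translate lines like 'LOAD 10' into (opcode, operand) pairs."""
    words = []
    for line in source.strip().splitlines():
        parts = line.split()
        operand = int(parts[1]) if len(parts) > 1 else 0
        words.append((OPCODES[parts[0]], operand))
    return words

program = """
LOAD 10
ADD 11
STORE 12
HALT
"""
print(assemble(program))   # [(1, 10), (2, 11), (3, 12), (0, 0)]
```

The symbolic form is easy to read and change; the numeric output is what the machine actually consumes.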

Up to this point, the operation of the computer was a simple one. Create a program, insert it into the computer, and let it run. The program instructed the computer, and the computer performed the calculations.

There were no operating systems. It was the computer and the program, alone together in the small universe of computing hardware. The program was in charge and the computer obeyed its instructions. (Blindly and literally, which meant that the programmer had to be precise and complete in his description of the operations. That aspect of programming remains with us today.)

The first operating systems were little more than loaders for programs. Programmers found that the task of loading an executable program was a chore, and programmers, being what they are, created programs to ease that task. A loader could start with a collection of programs (usually stored in a deck of punch cards), load the first one, let it run, and then load and run the next program.

Of course, the loader was still in memory, and the loaded programs could not use that memory or overwrite the loader. If they did, the loader would be unable to continue. I imagine that the very earliest arrangement worked by an agreement: the loader would use a block of addresses and the loaded programs would use other memory but not the block dedicated to the loader. The running program was still "in charge" of the computer, but it had to honor the "loader agreement".
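That "loader agreement" can be sketched in modern terms. Real loaders were machine-language programs, and the addresses and job names below are invented, but the shape of the loop is the same: reserve a block for the loader, place each program past it, run it, repeat.

```python
# A sketch of the "loader agreement": the loader reserves a block of
# addresses for itself, and each loaded program must stay out of it.
# Addresses and program names are invented for illustration.
LOADER_RESERVED = range(0, 1024)   # the loader's own block of memory

def load_and_run(programs):
    """Load each program just past the loader's block, run it,
    then move on to the next -- the loop early loaders performed."""
    results = []
    for name, entry_point in programs:
        base = LOADER_RESERVED.stop          # honor the agreement
        results.append(f"{name} loaded at {base}, ran {entry_point.__name__}")
        entry_point()                        # the program is now "in charge"
    return results

def job1(): pass
def job2(): pass

print(load_and_run([("PAYROLL", job1), ("INVENTORY", job2)]))
```

Note that nothing here *enforces* the agreement; a misbehaving program could still overwrite the reserved block, which is exactly why memory protection became an operating system function.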

This notion of "being in charge" is important. It is a notion that has been maintained by operating systems -- up to mobile operating systems. More on that later.

Operating systems grew out of the concept of the loader. They became more powerful, allowing more sharing of the expensive computer. They assumed the following functions:

  • Allocation and protection of memory (you can use only what you are assigned)
  • Control of physical devices (you must request operations through the operating system)
  • Allocation of CPU (time slices)

These are the typical functions we associate with operating systems.
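The three functions above can be sketched with a toy "operating system" object that programs must ask for everything. Every name here is invented for illustration; the point is that the program requests, and the operating system grants or denies.

```python
# A toy operating system mediating the three functions listed above:
# memory allocation, device access, and CPU time slices.
class ToyOS:
    def __init__(self, total_memory=1000):
        self.free_memory = total_memory
        self.allocations = {}

    def allocate(self, program, amount):
        # Allocation and protection: you get only what you are assigned.
        if amount > self.free_memory:
            raise MemoryError(f"{program}: request denied")
        self.free_memory -= amount
        self.allocations[program] = self.allocations.get(program, 0) + amount
        return amount

    def write_device(self, program, device, data):
        # Device control: all I/O goes through the operating system.
        return f"{device} <- {data!r} (on behalf of {program})"

    def schedule(self, programs, slices):
        # CPU allocation: round-robin time slices among programs.
        order = []
        for _ in range(slices):
            order.extend(programs)
        return order

toy = ToyOS()
toy.allocate("EDITOR", 300)
print(toy.free_memory)                       # 700
print(toy.schedule(["A", "B"], slices=2))    # ['A', 'B', 'A', 'B']
```

A real operating system enforces these rules in hardware (protected mode, interrupts); the toy only models the bookkeeping.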

Over the years, we have extended operating systems and continued to use them. Yet in all of that time, from IBM's System/360 to DEC's VMS to Microsoft's Windows, the understanding is that our program (our application), once loaded, is "in control" until it exits. This is an illusion, as our application can do very little on its own. It must request all resources from the operating system, including memory. It must request all actions through the operating system, including operations on devices (display a window on a screen, send text to a printer, save a file to disk).

This illusion persists, I believe, due to the education of programmers (and system operators, and computer designers) -- not merely through formal training but also through informal channels. Our phrases indicate this: "Microsoft Word runs and prints a document" or "Visual Studio builds the executable" or "IIS serves the HTML document". Our conversations reinforce the belief that the running program is in control.

And here is where the mobile operating systems come in. Android and iOS have very clear roles for application programs, and those roles are subservient to the operating system. A program does not run in the usual sense, making requests of the operating system. Instead, the app (and I will use the term "app" to indicate that the program is different from an application) is activated by the operating system when needed. That activation sees the operating system instruct the app to perform a task and then return control to the operating system.

Mobile operating systems turn the paradigm of the "program in control" inside-out. Instead of the program making requests of the operating system, the operating system makes requests of the program. The operating system is in control, not the program.
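The inverted relationship can be sketched as a callback model. The callback names below echo real mobile lifecycles (Android activities have onCreate and onPause methods, for instance), but this toy is an illustration, not any platform's actual API: the app supplies entry points, and the operating system decides when to call them.

```python
# A sketch of the inverted control flow: the operating system activates
# the app, tells it what to do, and takes control back.
class App:
    """An app supplies callbacks; it never owns a main loop."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def on_create(self):  self.log.append("created")
    def on_resume(self):  self.log.append("resumed")
    def on_pause(self):   self.log.append("paused")

class MobileOS:
    def run(self, app):
        # The operating system decides when the app runs, and for how long.
        app.on_create()
        app.on_resume()
        # ...the user switches away; the OS reclaims control...
        app.on_pause()
        return app.log

print(MobileOS().run(App("mail")))   # ['created', 'resumed', 'paused']
```

There is no line in App where the app asks for anything; every transition is initiated from outside it.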

This view is very different from our traditional view, yet it is an accurate one. Apps are not in control. Applications are not in control -- and have not been for many years.

Thursday, December 20, 2012

The Cheapening of IT

The prices for computing equipment, over the years, have moved in one direction: down. I believe that the decrease in prices for hardware has an effect on our willingness to pay for software.

In the early 1960s, a memory expansion for the IBM 1401 provided 8K of what we today call RAM, at a price of $258,000. That was only the expansion pack of memory; the entire system cost several times that amount. With an investment of over a million dollars for hardware, an additional investment of several tens of thousands of dollars for software was quite the bargain.

In 1977, a Heathkit 8-bit microcomputer with an 8080 processor, 4K of RAM, and a cassette tape recorder/player (used for long-term storage prior to floppy disks), cost almost $1500. Software for such a computer ran from $20 (for a simple text editor) to $400 (for the Microsoft COBOL compiler).

Today, smartphone and tablet costs range from $200 to $1000. (Significantly less than the Heathkit 8-bit system, once you account for inflation.) Tablet apps can cost as much as $10. Some are more, and some are free.

What effect does this decrease in the hardware cost have on the cost of software?

Here's my theory: as the cost of hardware decreases, the amount that we are willing to pay for software also decreases. I can justify spending $400 for software when the hardware costs several times that amount. But I have a harder time spending $400 on software when the hardware costs less than that. My bias is for hardware, and I am assigning higher intrinsic value to the hardware than the software. (The reasons behind this are varied, from the physical nature of hardware to the relationship with the vendor. I'm pretty sure that one could find a Master's thesis in this line of study.)

But if a cheapening of the hardware leads to a cheapening of the software, how does that change the industry? Assuming that the theory is true, we should see downward pressure on the cost of applications. And I think that we have seen this. The typical phone and tablet app holds a retail price that is significantly less than the price for a typical desktop PC application. "Angry Birds" costs only a fraction of the price of Microsoft Office.

I expect that this cost bias will extend to PC apps that move to tablets. Microsoft Word on the Surface will be priced at under $40 (perhaps as an annual subscription) and possibly less. The initial release of the Surface includes a copy of Word, although it is restricted to non-commercial use.

I also expect that the price of desktop PC apps will fall, keeping close to the prices of tablet apps. Why spend $400 for Word on the PC when one can get it for $40 on the tablet? The reduced price of apps on one platform drives down the price of apps on all platforms.

The cheapening effect may go beyond off-the-shelf PC applications. As the prices of desktop applications fall, we may see pressure to reduce the price of server-based systems, or server components of multiplatform systems. Again, this will be driven not by technology but by psychology: I cannot justify a multi-thousand dollar cost for a server component when the corresponding desktop applications have low costs. The reduced prices of desktop applications drive down the prices of equivalent server applications. Not all server applications, mind you; only the server applications that have desktop equivalents, and only then when those desktop equivalents are reduced in price to match tablet apps.

The general reduction of prices for desktop and server applications may create difficulties for the big consulting shops. These shops charge high prices for the development of custom applications for businesses. Psychology may cause headaches for their sales teams: why should I spend hundreds of thousands of dollars on a custom app (which includes clients for desktop PCs, tablets, and smartphones, of course) when I can see that powerful, competent apps are marketed for less than $10 per user? While there is value in a custom application, and while a large company may need many "downloads" for their many users, the argument for such high prices becomes difficult. Is a custom app really adding that much value?

Look for the large consulting houses to move into new technologies such as cloud and "big data" as ways of keeping their rates high. By selling these new technologies, the consulting houses can offer something that is not readily apparent in the off-the-shelf apps. (At least until their customers figure out that the off-the-shelf apps are also using cloud and "big data" tech.)

All of this leads to downward pressure on the prices of apps, whether they are simple games or complex systems. That pressure, in turn, will put downward pressure on development costs and upward pressure for productivity. Where a project was run with a project manager, three tech leads, ten developers, three testers, two analysts, and a technical writer, future projects may be run with a significantly smaller team. Perhaps the team will consist of one project manager, one tech lead, three developers, and one analyst. I'm afraid the "do more with less" exhortation will be with us for a while.

Wednesday, March 14, 2012

Why apps are not web pages

Apps are not web pages. They are constructed differently, they perform differently (although some web pages made for smart phones are quite close to apps in behavior), and I believe that we perceive them differently.

Apps (especially when used on cell phones) are more intimate than web pages. Web pages live in a browser, which in turn lives in a PC or laptop. Apps live in our phones. Apps are closer to us.

Perhaps this is because we hold the cell phone in our hand (tablets too) but PCs we leave on the desk. Laptops are not as intimate as tablets, since we rarely hold laptops -- I put mine on a convenient desk or shelf, or maybe the floor.

The intimacy affects our expectations of apps. When I use an app, I expect two things: content tailored for me, and frequent changes to that content. I don't expect it of web pages. (Some pages, if I log in to a web site. But not all web sites.)

Facebook is a good example: it shows me "feeds" from my friends. This information is tailored for me (it's from the people I pick as friends) and the information changes daily (more than daily, actually).

But it's not just Facebook.

Yahoo Mail: Information that people have sent to me, or e-mail from lists to which I have subscribed.

Maps: Tailored for me, since it shows the local area. I can shift to a different area, if I choose.

Twitter: Close to Facebook in that it is tweets from people I choose to follow, with changes every hour.

The New York Times: Not tailored for me (the news is the same for everyone) but it changes daily and I pick the sections to display.

These web sites are "naturals" for conversion to apps. (And, in fact, they have been converted to apps.)

But some web sites will never "fly" as apps. These are the web sites that are generic (not personalized) and static (infrequent changes). These are sites that one visits rarely, and expects no personalized content. Sites such as "brochureware" for hotels and resorts, shopping sites, and even Wikipedia. (Wikipedia has an excellent web site for cell phones, but I don't see a need for an app.)

If I am right, then the great shift from the web to apps will leave a large number of web sites behind. Even if their owners convert a web site into an app, few people will download the app and fewer will use it.

If the personalizable web sites are "raptured" into apps, then what happens to the "left behind"? I see a need for the static, generic web sites -- smaller than the need for an app, but a need nonetheless -- and that need must be met. Will we keep the browser in our cell phones and tablets? Or will we build a new form to distribute static and non-personalized content?

Wednesday, October 26, 2011

Small is the new big thing

Applications are big, out of necessity. Apps are small, and should be.

Applications are programs that do everything you need. Microsoft Word and Microsoft Excel are applications: They let you compose documents (or spreadsheets), manipulate them, and store them. Visual Studio is an application: It lets you compose programs, compile them, and test them. Everything you need is baked into the application, except for the low-level functionality provided by the operating system.

Apps, in contrast, contain just enough logic to get the desired data and present it to the user.

A smartphone app is not a complete application; except for the most trivial of programs, it is the user interface to an application.

The Facebook app is a small program that talks to Facebook servers and presents data. Twitter apps talk to the Twitter servers. The New York Times app talks to its servers. Simple apps such as a calculator or rudimentary games can run without back-ends, but I suspect that popular games like "Angry Birds" store data on servers.

Applications contained everything: core logic, user interface, and data storage. Apps are components in a larger system.
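The split described above can be sketched with two small pieces: a stand-in for the back end that owns the data, and a thin front end that only fetches and presents it. The class and method names are invented for illustration; no real network service is involved.

```python
# A sketch of the app-as-front-end idea: the app holds just enough
# logic to get the desired data and present it; the real work lives
# behind a server interface.
class FakeServer:
    """Stands in for the back end that holds the data and core logic."""
    def feed(self, user):
        return [f"post 1 for {user}", f"post 2 for {user}"]

class ThinApp:
    """The thin front end: fetch the data, format it, show it."""
    def __init__(self, server):
        self.server = server

    def render(self, user):
        return "\n".join(self.server.feed(user))

print(ThinApp(FakeServer()).render("alice"))
```

Because ThinApp contains no core logic or storage, porting it to a new platform means rewriting only the presentation layer -- which is exactly why small front ends move to new platforms quickly.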

We've seen distributed systems before: client-server systems and web applications divide data storage and core logic from user interface and validation logic. These application designs allowed for a single front-end; current system design allows for multiple user interfaces: iPhone, iPad, Android, and web. Multiple front ends are necessary; there is no clear leader, no "IBM PC" standard.

To omit a popular platform is to walk away from business.

Small front ends are better than large front ends. A small, simple front end can be ported quickly to new platforms. It can be updated more rapidly, to stay competitive. Large, complex apps can be ported to new platforms, but as with everything else, a large program requires more effort to port.

Small apps allow a company to move quickly to new platforms.

With a dynamic market of user interface devices, an effective company must adopt new platforms or face reduced revenue. Small user interfaces (apps) allow a company to quickly adopt new platforms.

If you want to succeed, think small.