
Tuesday, March 22, 2022

Apple has a software problem

Apple has a problem. Specifically, Apple has a software problem. More specifically, Apple has a problem between its hardware and its software. The problem is that Apple's hardware is getting bigger and more powerful at a much faster rate than its software.

Why is better hardware a problem? By itself, better hardware isn't a problem. But hardware doesn't exist by itself -- we use hardware with software. Faster, more powerful hardware requires bigger, more capable software.

One might think that fast hardware is a good thing, regardless of software. Faster hardware runs software faster, right? And faster is always a good thing, right? Well, not always.

For software that operates on a large set of data, where the user is waiting for the result, yes, faster hardware is better. Software that operates in "batch mode," or with limited interaction with the user, is improved by better hardware. But software that interacts with the user -- software that must wait for the user to do something -- isn't necessarily improved.

Consider the word processor, a venerable tool that has been with us since the introduction of the personal computer. Word processors spend most of their time waiting for the user to press a key. This was true even on computers that predate the IBM PC. In the forty-odd years since, hardware has gotten much, much faster, but word processors have not gotten much more complicated. (There was a significant increase in complexity when we shifted from DOS, with its character-based display and printing, to Windows, with graphics-based display and printing, but very little otherwise.)
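The waiting can be sketched in code. This is a toy event loop, not any real editor's architecture: the loop blocks until a keystroke arrives, and the work per keystroke is trivial, so a faster processor only makes the idle time idler.

```python
import queue

def run_editor(events):
    """A toy word-processor loop: block until a key arrives, then do a
    tiny amount of work. A real editor spends almost all of its
    wall-clock time blocked on the equivalent of the get() call."""
    document = []
    while True:
        key = events.get()      # blocks here -- the CPU sits idle
        if key == "QUIT":
            break
        document.append(key)    # trivial work per keystroke
    return "".join(document)

# Simulate a user typing "hi" and then quitting.
q = queue.Queue()
for k in ["h", "i", "QUIT"]:
    q.put(k)
result = run_editor(q)
print(result)  # hi
```

No amount of hardware speed shortens the time spent inside `events.get()`; only the user's typing speed does.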

The user experience for word processors has changed little in that time. Faster hardware has not improved the experience. If we limited computers to word processing, there would be no need for a processor more powerful than the Intel 80386. By extension, if you needed a computer for word processing (and nothing else), today's bottom-of-the-line, cheap, minimal computer would be more than enough for you. There is no point in spending on a premium computer (or even a mediocre one) because the minimal computer can do the job adequately.

The same logic applies to spreadsheets. And e-mail. And web browsers. Computers have gotten better faster than these programs (and their data) have gotten bigger.

The computers I use for my day-to-day and even unusual tasks are old PCs, ranging from five to twenty years in age. All of them are fast enough for what I need to do. An aged Dell Inspiron N5010 runs Windows 10 and lets me use Zoom and Teams to join virtual meetings. I could replace it with a modern laptop, but the experience would be the same! Why should I bother?

A premium computer is needed only for those tasks that perform complex operations on large sets of data. And this is where Apple fails to provide the tools to justify its powerful (and expensive) hardware.

Apple is focused on hardware, and it does a terrific job of designing and manufacturing powerful computers. But software is another story. Apple develops applications and then seems to lose interest in them. It built Pages, Numbers, and Keynote, and has made precious few improvements since -- other than recompiling for ARM processors, or adding support for things like AirPlay. It hasn't added features.

The same goes for applications such as GarageBand, iTunes, and FaceTime. Even Xcode.

Apple has even let utilities such as grep and sed in MacOS (excuse me, "mac os". Or is it "macos"?) age with no updates. The corresponding GNU utilities have been modified and improved in various ways, to the point that developers now recommend installing the GNU versions on Apple computers.

Apple may be waiting for others to build the applications that will take advantage of the latest Mac computers. I'm not sure that many will want to do that.

Building applications to leverage the new Apple processors may seem a no-brainer. But there are a number of disincentives.

First, Apple may build its own version of the application and compete with the vendor. Independent vendors may be reluctant to enter a market when Apple is a possible competitor.

Second, developing applications to take advantage of the M1 architecture requires a lot of time and effort. The application must be multithreaded -- single-threaded applications cannot fully leverage the multiple cores on the M1. Designing, coding, and testing such an application is a lot of work.
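The shape of that multithreaded work can be sketched briefly. This is a minimal illustration in Python (a real Mac application would use native threads or Grand Central Dispatch, and Python's GIL limits true CPU parallelism): the mechanical part -- splitting work across a pool of workers -- is easy, but an application only benefits when its features decompose into independent chunks at all, and that design work is the expensive part.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Split the data into one chunk per worker and sum the chunks
    concurrently. The hard part in a real application is not this
    mechanical split, but designing features so that independent
    chunks of work exist in the first place."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each chunk is summed by a worker; the partial sums are combined.
        return sum(pool.map(sum, chunks))

total = parallel_sum(list(range(1000)))
print(total)  # 499500
```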

Third, the market is limited. Applications developed to take advantage of the M1 processor are, well, developed for the M1 processor. They won't run on Windows PCs. You can't cross-build to run on Windows PCs, because PC processors are much slower than the Apple processors. The set of potential customers is limited to those who have the upper-end Apple computers. That's a small market (compared to Windows) and the potential revenue may not cover the development costs.

That is an intimidating set of disincentives.

So if Apple isn't building applications to leverage the upper-end Macs, and third-party developers aren't building applications to leverage upper-end Macs, then ... no one is building them! The result is that there are no applications (or very few) that take advantage of the higher-end processors.

I expect sales of the M1-based Macs to be robust, for a short time. But as people realize that their experience has not improved (except, perhaps for "buttery smooth" graphics) they will hesitate for the next round of new Macs (and iPads, and iPhones). As customers weigh the benefits and costs of new hardware, some will decide that their current hardware is good enough. (Just as my 15-year-old PCs are good enough for my simple word processing and spreadsheet needs.) If Apple introduces newer systems with even faster processors, more people will look at the price tags and decide to wait before upgrading.

Apple is set up to learn an important lesson: High-performance hardware is not enough. One needs the software to offer solutions to customers. Apple must work on its software.


Sunday, December 14, 2014

Software doesn't rot - or does it?

Is it possible for software to rot? At first glance, it seems impossible. Software is one of the more intangible constructs of man. It is not exposed to the elements. It does not rust or decompose. Software, in its executable form, is nothing more than digitally-stored bits. Even its source form is digitally-stored bits.

The hardware on which those bits are stored may rot. Magnetic tapes and disks lose their field impressions, and their flexibility, over time. Flash memory used in USB thumb drives lasts a long time, but it too can succumb to physical forces. Even punch cards can burn, become soggy, and get eaten by insects. But software itself, being nothing more than ideas, lasts forever.

Sort of.

Software doesn't rot, rust, decompose, or change. What changes are the items that surround the software, that interact with the software. These change over time. The changes are usually small and gradual. When those items change enough to affect the software, we notice.

What are these items?

User expectations Another intangible changing relative to the intangible software. Users, along with product owners, project managers, support personnel, and others, all have expectations of software. Those expectations are formed not just from the software and its operating environment, but also from other software performing similar (or different) tasks.

The discovery of defects Software is complex, and most software has some number of defects. We attempt to build software free of defects, but the complexity of computer systems (and the business rules behind them) makes that difficult. Often, defects are built in to the system and remain unnoticed for weeks, months, or even years. The discovery of a defect is a bit of "rot".

The business environment New business opportunities, new markets, and new product lines can cause new expectations of software. Changes in law, from new regulations to a different rate for sales tax, can affect software.

Hardware Software tends to outlive hardware. Moving from one computer to a later model can expose defects. Games for the original IBM PC failed (or worked poorly) on the IBM PC AT, with its faster processor. Microsoft Xenix ran on the Intel 80286 but not the 80386, because it used reserved flags in the processor status word. (The 80286 allowed the use of reserved bits; the 80386 enforced the rules.) The install program for WordStar, having worked for years, would fail on PCs with more than 512K of RAM (this was in the DOS days) -- a lurking defect exposed by a change in hardware.

The operating system New versions of an operating system can fix defects in the old version. If application software took advantage of that defect (perhaps to improve performance) then the software fails on the later version. Or a new version implies new requirements, such as the requirements for branding your software "Windows-95 ready". (Microsoft imposed several requirements for Windows 95 and many applications had to be adjusted to meet these rules.)

The programming language Microsoft made numerous changes to Visual Basic. Many new versions broke the code from previous versions, causing developers to scramble and devote time to incorporating the changes. The C++ and C# languages have been stable, but even they have had changes.

Paradigm shifts We saw large changes when moving from DOS to Windows. We saw large changes when moving from desktop to web. We're seeing large changes with the move from web to mobile. We want software to operate in the new paradigm, but new paradigms are quite different from old ones and the software must change (often drastically).

The availability of talent Languages and programming technologies rise and fall in popularity. COBOL was once the standard language of business applications; today there are few who know it and fewer who teach it. C++ is following a similar path, and the rise of NoSQL databases means that SQL will see a decline in available talent. You can stay with the technology, but getting people to work on it will become harder -- and more expensive.

All of these factors surround software (with the exception of lurking defects). The software doesn't change, but these things do, and the software moves "out of alignment" with the needs and expectations of the business. Once out of alignment, we consider the software to be "defective" or "buggy" or "legacy".

Our perception is that software "gets out of alignment" or "rots". It's not quite true -- the software has not changed. The surrounding elements have changed, including our expectations, and we perceive the changes as "software rot".

So, yes, software can rot, in the sense that it does not keep up with our expectations.

Sunday, September 22, 2013

The microcomputers of today

The microcomputer revolution was started with the MITS Altair 8800, the IMSAI 8080, and smaller computers such as the COSMAC ELF. They were machines made for tinkerers, less polished than the Apple II or Radio Shack TRS-80. They included the bare elements needed for a computer, often omitting the case and power supply. (Tinkerers were expected to supply their own cases and power supplies.)

While less polished, they showed that there was a market for microcomputers, and they inspired Apple and Radio Shack (and lots of other vendors) to make and sell microcomputers.

Today sees a resurgence of small, "unpolished" computers designed for tinkerers. They include the Arduino, the Raspberry Pi, the Beaglebone, and Intel's Minnowboard. Like the early, pre-Apple microcomputers, these small systems are the bare essentials, often omitting the power supply and case.

And like the earlier microcomputer craze, they are popular.

What's interesting is that there are no major players in this space. There are no big software vendors supplying software for these new microcomputers.

There were no major software vendors in the early microcomputer space. These systems were offered for sale with minimal (or perhaps zero) software. The CP/M operating system was adopted by users and adapted to their systems. CP/M's appeal was that it could be (relatively, for tinkerers) easily modified for specific systems.

The second generation of microcomputers, the Apple II and TRS-80 and their contemporaries, had a number of differences from the first generation. They were polished: they were complete systems with cases, power supplies, and software.

The second generation of microcomputers had a significant market for software. There were lots of vendors, the largest being Digital Research and Microsoft. Microsoft made its fortune by supplying its BASIC interpreter to just about every hardware vendor.

That market did not include the major players from the mainframe or minicomputer markets. Perhaps they thought that the market dynamics were not profitable -- they had been selling software for thousands of dollars (or tens of thousands), and packages in the microcomputer market sold for hundreds (or sometimes tens).

It strikes me that Microsoft is not supplying software to these new microcomputers.

Perhaps they think that the market dynamics are not profitable.

But these are the first generation of new microcomputers, the "unpolished" systems, made for tinkerers. Perhaps Microsoft will make another fortune in the second generation, as they did with the first microcomputer revolution.

Or perhaps another vendor will.

Monday, May 27, 2013

Airships and software

Airships (that is, dirigibles, blimps, and balloons) and software have more in common than one might think. Yet we think of them as two very different things, and we even think about thinking about them in different ways.

Both airships and software must be engineered, and the designs must account for various trade-offs. For airships, one must consider the weight of the materials, the shape, and the size. A larger airship weighs more, yet has more buoyancy and can carry more cargo. Yet a larger airship is affected more by wind and is less maneuverable. Lighter materials tend to be less durable than heavy ones; the trade-off is long-term cost against short-term performance.

The design of software has trade-offs: some designs are cheaper to construct in the short term yet more expensive to maintain in the long term. Some programming languages allow for the better construction of a system -- and others even require it. Comparing C++ to Java, one can see that Java encourages better designs up front, while C++ merely allows for them.

I have observed a number of shops and a number of projects. Most (if not all) have given little thought to the programming language. The typical project picks a programming language based on the current knowledge of the people present on the team -- or possibly the latest "fad" language.

Selecting the programming language is important -- more important than current knowledge or the current fad. I admit that learning a new language has a cost. Yet picking a language based on nothing more than the team's current knowledge seems a poor way to run a project.

The effect does not stop at languages. We (as an industry) tend to use what we know for many aspects of projects: programming languages, databases, front-end design, and even project management. If we (as an organization) have been using the waterfall process, we tend to keep using it. If we (as an organization) have been using an SQL database, we tend to keep using it.

Using "what we know" makes some sense. It is a reasonable course of action -- in some situations. But there comes a time when "the way we have always done it" does not work. There comes a time when new technologies are more cost-effective. Yet sticking with "what we know" means we have no experience with "the new stuff".

If we have no experience with the new technologies, how do we know when they are cost-effective?

We have (as an industry) been using relative measures of technologies. We don't know the absolute cost of the technologies for software. (We do know the absolute cost of technologies for airships -- an airship needs so many yards of material and weighs so much. It carries so much. It has so much resistance and so much wind shear.)

Actually, the problem is worse than relative measures. We have no measures at all (for many projects) and we rely on our gut feelings. A project manager picks the language and database and project process based on his feelings. There are no metrics!

I'm not sure why we treat software projects so differently from other engineering projects. I strongly believe that it cannot continue. The cost of picking the wrong language, the wrong database, the wrong management process will increase over time.

We had better start measuring things. The sooner we do, the sooner we can learn the right things to measure.

Thursday, May 3, 2012

In software, bigger is not better

For most things, bigger is better. Hamburgers, cars, bank accounts... generally everything is better when it is bigger. But not for software. Software is not necessarily better when it is bigger.

By "big", I don't mean "popular". I mean "lines of code".

Software runs on a platform. (We used to call it an "operating system", and in the Elder Days, software ran on hardware.)

Platforms change. Over time, they evolve. Windows XP becomes Windows Vista, and then Windows 7. (And now, Windows 8.) Linux changes from kernel 2.2 to kernel 2.4, and then 2.6. The popular Linux window manager changes from KDE to Gnome. The Java JVM changes from version 1.5 to 1.6, and then to 1.7. Popular languages change from FORTRAN to BASIC, from BASIC to C, from C to C++, and from C++ to Java (or from Perl to Python to Ruby).

A software solution -- something that meets a need of the business -- must live on these platforms. And since it lives for a significant period of time, it must move from one platform to another as platforms change.

We tend to think that a software solution, once written, tested, and deployed, is permanent. That is, we think the "soft" solution is "firm" or even "hard". But we're wrong. It's still soft -- and often fragile.

For software to endure it must evolve with the platforms. For it to evolve, it must change. For it to change, someone must be capable of modifying the existing code and mutating it into something new. Some changes are small (Java 1.5 to Java 1.6), and some changes are large (Visual Basic 6 to VB.NET).

To change software, one must understand it. One must know the source code and understand the effects of changes to the code.

Which brings us back to the size of the source code.

The larger the source code, the harder it is to understand.

To be fair, there are many factors that affect our ability to understand source code. The language, our knowledge of the problem domain, and the style and readability of the code all affect our ability to make changes. (Also, the presence of automated tests makes for easier maintenance, since it assures us that changes affect only those features that we want to change.)
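The value of automated tests during maintenance can be shown with a minimal sketch (the function and the numbers here are hypothetical, purely for illustration): a test that pins down the existing behavior gives us something to check a port or refactoring against.

```python
def invoice_total(items, tax_rate=0.06):
    """A hypothetical legacy function whose behavior we want to
    preserve while porting or refactoring."""
    subtotal = sum(price * qty for price, qty in items)
    return round(subtotal * (1 + tax_rate), 2)

def test_invoice_total():
    # Pin down current behavior; re-run after every change.
    assert invoice_total([(10.00, 2), (5.00, 1)]) == 26.50
    assert invoice_total([], tax_rate=0.10) == 0.0

test_invoice_total()
print("ok")
```

With such tests in place, a change that accidentally alters any other feature fails loudly instead of silently.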

Yet size matters. Large systems are difficult to port to new platforms, especially platforms that are significantly different from the previous one. (Say, moving from Windows C++/SDK/MFC to Java/Swing/Struts.)

For an enduring system, we must not assume that the platform will endure. (Microsoft's Windows 8 is a good example.) For enduring systems, we must design applications that can move from one platform to another. A tricky proposition, since the "new" platform will probably not exist when we construct the application.

I expect that a large number of Windows applications will not move to the new platforms of Java, Android, iPhone, or even Windows 8. (The legacy "desktop" mode of Windows 8 does not count.) They will not move because they are too large, too complicated, and too opaque to migrate to newer platforms. Instead, organizations will re-write applications for the new platforms.

But re-writing takes time. It takes more time for larger applications than for smaller applications. And there is always the risk that some feature will be omitted or implemented improperly.

Bigger is not always better. Bigger, when it comes to software, entails risks. The wise product manager will be aware of the risks and plan for them.