Sunday, February 28, 2021

Apple and Google encourage ad-driven apps

We tend to give much thought to the app stores run by Apple and Google for their mobile platforms, but we overlook one aspect of their "tax" collections: the arrangement provides an incentive for ad-supported apps. I'm concerned about that incentive.

Let's consider the two scenarios:

In both scenarios, I build an app, and release it on the Apple app store and the Google app store. (Or perhaps only one store. It doesn't matter for this discussion.)

In the first scenario, there are no ads in the app. I expect to get revenue from sales of the app, and possibly from subscriptions.

Of course, I will have to pay the Apple and Google taxes for hosting my apps in their stores. If I sell an annual subscription for US$20, Apple and Google will take, say, 15% of that, or $3.

Now let's look at the second possibility: A free app, supported by advertisements.

I build my app and submit it to Apple's store and Google's store, and they host it. Since it is a free app, anyone can download it, and I pay no tax to either Apple or Google.

There are ads, and I get revenue from the advertisements. But none of that revenue is taxed by Apple or Google. I keep all of that revenue.
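To make the incentive concrete, here is a rough sketch in Python. The numbers are invented (the ad revenue per user is purely a guess), but they show how a free, ad-supported app can net more than a paid app even when its gross revenue per user is lower, because the store takes no cut of the ad revenue:

    # Illustrative only: compare net revenue under the two scenarios, using
    # assumed figures (a $20 subscription, a 15% store commission, and a
    # guessed $18 per user per year in ad revenue).

    SUBSCRIPTION_PRICE = 20.00    # annual subscription, in US$
    STORE_COMMISSION = 0.15       # the Apple/Google cut on paid transactions
    AD_REVENUE_PER_USER = 18.00   # hypothetical ad revenue per user per year

    def net_paid_revenue(users: int) -> float:
        """Net revenue for the paid/subscription app: the store takes its cut."""
        return users * SUBSCRIPTION_PRICE * (1 - STORE_COMMISSION)

    def net_ad_revenue(users: int) -> float:
        """Net revenue for the free, ad-supported app: no store commission."""
        return users * AD_REVENUE_PER_USER

    users = 10_000
    print(f"paid app:         ${net_paid_revenue(users):,.2f}")   # $170,000.00
    print(f"ad-supported app: ${net_ad_revenue(users):,.2f}")     # $180,000.00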

As I see it, that is a powerful incentive for me to create an ad-driven app. This is something that I have not seen discussed anywhere.

Is that a good thing? It may be. Or it may be a bad thing.

Ad revenue is not as predictable as subscription revenue, which means that my finances are less predictable.

Ad revenue changes the customer of an app. The customer of a paid-for app is the user, the person who downloads and installs the app. With an ad-supported app, the customer is not the user, but the entity paying for advertisements. That's a big difference.

Do advertisers have say in the design of apps? Do they influence the look and feel? Do they request specific features? Or the removal of certain features?

I don't know the answers to these questions, which disturbs me.

I don't see anyone talking about this issue either, and that disturbs me more.

Monday, February 22, 2021

What we want from a Silver Bullet

In 1975, Fred Brooks published "The Mythical Man-Month", an important and well-respected collection of essays on managing software projects. Project management, especially for IT projects, was troubled by delays, unforeseen complexities, unexpected costs, and budget overruns. He observed that adding people to a late project makes the project later. That is, contrary to standard project management thinking, increasing resources (people) reduces progress. (The reason is that an IT project is complex and requires a high level of understanding and a high level of communication. Adding people to a project adds people who are unfamiliar with the project and therefore ask a lot of questions, question a lot of ideas, and sap time and energy from the veterans of the project.)

Brooks later wrote the essay "No Silver Bullet" (published in 1986), which was added as a chapter to the Anniversary Edition of "The Mythical Man-Month". That essay introduced the ideas of essential complexity and accidental complexity. The former is part of the task at hand, something that must be dealt with no matter which tools or techniques we use. The latter is not part of the task, but is instead generated by our techniques, our tools, and the organization of our data. But most people miss (or forget) that part of the essay.

Instead, people latched on to the notion of "silver bullets". The essay used the metaphor of werewolves (the difficulties of managing projects) and silver bullets (the only thing that could slay a werewolf, and therefore a magical solution for project management).

While Brooks argued that there were no silver bullets, the term stuck, and so did the metaphor. We in the management of IT projects have been looking for silver bullets, tools or techniques that will tame those delays, complexities, and unexpected costs.

The metaphor is picturesque and easy to understand (two good qualities in a metaphor) but is, alas, inaccurate (not such a good quality in a metaphor).

I have a different view of projects and "silver bullets" (if I may keep the metaphor for a short time).

We don't want silver bullets, at least not in the general understanding. We think of silver bullets as better tools or better techniques. We think in terms of processors and operating systems and programming languages, of databases and web servers and cloud systems.

But better processors and better operating systems and better programming languages are not silver bullets. In the decades since Brooks wrote "No Silver Bullet", we have seen faster and more powerful processors, better languages, new approaches to data storage, the web, and cloud computing. Yet our projects still suffer from delays, unexpected complexity, and budget overruns.

But my point is not to say that Fred Brooks got it wrong (he didn't, in my opinion) or that his readers focus on the wrong point from his essay (they do, also in my opinion), or that we are wasting time in looking for a silver bullet to kill the problems of project management (we are, foolishly).

No, my point is something different. My point is that we don't really want better processors and programming languages -- at least not in the normal sense.

I think we want something else. I think we want something completely different.

What we want is not a thing, not a positive. We don't want a "better X" just to have a better X. Instead, we want to eliminate something. We want a negation.

We want to eliminate regrets.

We want tools that let us make decisions and also guarantee that we will not regret those decisions.

The challenges of IT projects are almost entirely regrets about choices that we made earlier. We regret the time it takes us to write a system in a certain programming language. Or we regret that the programming framework we selected does not allow for certain operations. Or, after building a minimal version of our system, we are disappointed with its performance.

We regret the defects that were written into the system. We regret the design of our database. We regret outsourcing part of the project to a team in a different country, in a different culture and a different time zone.

The silver bullet that we want is something that will eliminate those regrets.

We look to no-code programming tools, thinking that with no code we will have, well, no code to debug or modify. (Technically that is true, but the configuration files for the no-code platform have to be revised and adjusted, and one can consider that programming in an obscure, non-Turing-complete language.)
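Here is a small sketch of what I mean, in Python. The "workflow configuration" below is entirely hypothetical (it describes no particular no-code product), but it shows how configuration encodes logic that someone must still write, revise, and debug:

    # A toy "no-code" workflow: the configuration is data, but it encodes logic
    # that must be maintained, just like code. Everything here is hypothetical.

    workflow_config = [
        {"step": "collect", "field": "email", "required": True},
        {"step": "validate", "field": "email", "rule": "contains_at_sign"},
        {"step": "notify", "channel": "sales_team"},
    ]

    def run_workflow(config, form_data):
        """A toy interpreter for the configuration above."""
        for step in config:
            if step["step"] == "collect" and step["required"]:
                if step["field"] not in form_data:
                    return "missing field: " + step["field"]
            elif step["step"] == "validate" and step["rule"] == "contains_at_sign":
                if "@" not in form_data.get(step["field"], ""):
                    return "invalid field: " + step["field"]
            elif step["step"] == "notify":
                print("notifying " + step["channel"])
        return "ok"

    print(run_workflow(workflow_config, {"email": "user@example.com"}))
    # prints "notifying sales_team", then "ok"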

We look to NoSQL databases to avoid regrets of database design and complex SQL queries. I'm sure that many people, in 1999, regretted the decision that they (or earlier members of their teams) had made about storing year values in 2-digit form.
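(A minimal illustration of that particular regret, in Python: with years stored as two digits, ordinary comparisons quietly give the wrong answer once dates cross the century boundary.)

    # Two-digit years: 1999 stored as "99" appears to come after 2003 stored as "03".
    expiration = "03"   # meant as 2003
    issued = "99"       # meant as 1999
    print(expiration > issued)   # False -- the wrong conclusion

    # Storing the full year removes the ambiguity.
    expiration_full = 2003
    issued_full = 1999
    print(expiration_full > issued_full)   # True -- as intended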

But in system architecture, in database design, in programming, there is no avoiding decisions. Every activity involves decisions, and recording those decisions in a form that the computer can use.

Instead of a Silver Bullet, people are looking for the Holy Grail, an object that can erase the consequences of decisions.

I will point out that the Agile Development method has helped reduce regrets. Not by preventing them, nor by erasing decisions, but by letting us see the consequences of our decisions quickly, while a change to those decisions is still easy to make.

Tools such as programming languages and version control systems and data storage systems help us build systems. The tools that let us be most effective are the ones that let us see the results of our decisions quickly. 

Monday, February 15, 2021

Linked lists, dictionaries, and AI

When I was learning the craft of programming, I spent a lot of time learning about data structures (linked lists, trees, and other things). How to create them. How to add a node. How to remove a node. How to find a node. There was a whole class in college about data structures.

At the time, everyone learning computer science learned those data structures. Those data structures were the tools to use when designing and building programs.

Yet now in the 21st century, we don't use them. (At least not directly.)

We use lists and dictionaries. Different languages use different names. C++ calls them 'vectors' and 'maps'. Perl calls them 'arrays' and 'hashes'. Ruby calls them ... you get the idea. The names are not important.

What is important is that these data structures are the ones we use. Every modern language implements them. And I must admit that lists and dictionaries are much easier to use than linked lists and balanced trees.

Lists and dictionaries did not come for free, though. They cost more in terms of both execution time and memory. Yet we, as an industry, decided that the cost of lists and dictionaries was worth the benefit (which was less time and effort to write programs).
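A quick sketch (in Python, with made-up data) of that difference in effort:

    # The "old" way: a hand-rolled singly linked list, with our own traversal code.
    class Node:
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    def find(head, value):
        """Walk the list looking for a value."""
        node = head
        while node is not None:
            if node.value == value:
                return node
            node = node.next
        return None

    head = Node("apple", Node("banana", Node("cherry")))
    print(find(head, "banana") is not None)   # True

    # The "new" way: the language does the bookkeeping for us.
    fruits = ["apple", "banana", "cherry"]
    print("banana" in fruits)                  # True

    prices = {"apple": 0.50, "banana": 0.25, "cherry": 3.00}
    print(prices["banana"])                    # 0.25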

What does this have to do with AI?

It strikes me that AI is in a phase equivalent to the 'linked list' phase of programming.

Just as we were convinced, some years ago, that linked lists and trees were the key to programming, we are (today) convinced that our current techniques are the key to AI.

It would not surprise me to find that, in five or ten years, we are using completely different tools for AI.

I don't know what those new tools will be. (If I did, I would be making a small fortune implementing and selling them.)

But just as linked lists and trees morphed into lists and dictionaries with the aid of faster processors and more memory, I think AI tools of today will morph into the tools of tomorrow with better hardware. That better hardware might be faster processors and more memory, or it might be advanced network connections and coordination between processes on different computers, or it might even be better data structures. (The last, technically, is of course not hardware.)

Which doesn't mean we should stop work on AI. It doesn't mean that we should all just sit around and wait for better tools for AI to appear. (If no one is working on AI, then no one will have ideas for better tools.)

We should continue to work on AI. But just as we replaced the code that used older data structures with code that used newer data structures, we should expect to replace early AI techniques with later AI techniques. In other words, the things that we build in AI will be temporary. We can expect to replace them with better tools, better models -- and perhaps not that far off in the future!


Wednesday, February 10, 2021

The equipment you provide to developers

If you're a manager of a software development project, you may want to consider carefully the equipment you provide your developers.

You probably do think about the equipment you provide to your team. Certainly the cost and capabilities, and probably reliability and conformance to your organization's IT standards. Yet there is one aspect of the equipment that you may have overlooked.

That aspect is the minimum hardware required to use your product.

In today's world of cloud-based applications, one rarely thinks about minimum requirements for hardware. Such requirements were more noticeable in the age of Windows desktop applications, and even in PC-DOS applications. Back then, when software was purchased in a box and installed (from CD-ROM or perhaps even floppy disk), minimum requirements were on everyone's mind. It was a necessary factor to review before purchasing software.

Today, personal computers are mostly uniform: 64-bit Intel processor, 4 GB of RAM (possibly more), 500 GB of disk (probably more), and access to the internet. The only variable is screen size. Desktops probably have a 22-inch (or larger) display, and laptops may be anywhere from 11 inches to 17 inches.

The other variable is operating system: Windows (most likely), macOS (less likely), or Linux (a very small probability). But operating systems are easy to identify, and you probably know which ones your shop uses.

Display size is the troublemaker, the one variable that can cause software to work well or to be a source of irritation.

But what does this have to do with the equipment you provide to your developers?

Back in the 1980s, we recognized that developers designed and wrote systems that worked on their computers. That is, the software was built for the capabilities of their computers -- and not smaller or less capable systems. If developers were given workstations with lots of memory and large displays, the software would require lots of memory and a large display. Call this effect the "law of minimum requirements".

The law is still with us.

Last week, someone (I forget who) complained that the latest version of macOS worked well on large displays but seemed "off" on small screens such as an 11-inch MacBook Air. (Apple had apparently tweaked the spacing between graphical elements in its UI.)

My guess is that the law of minimum requirements is at work here. My guess is that Apple provides its developers with large, capable machines -- possibly Mac Pro workstations with multiple large displays, or perhaps 16-inch MacBook Pro systems with large external displays.

It is quite possible that the developers made changes to the UI and reviewed them on their workstations, and didn't check the experience on smaller equipment. Perhaps no one did. Or someone may have noticed the issue late in the development cycle, too late to make changes and still meet the delivery date. The actual process doesn't really matter. The result is a new version of macOS that works poorly for those with small screens.

Apple isn't alone in this.

A web app that I use has a similar fault. Once I log in, the page for data entry is large -- so large that I must expand my browser to almost the entire size of my display to see everything and to enter data. (The web page, as they say, is not "responsive" to screen size.)

It's not that I keep my browser set to a very small window. I visit many different web sites (newspapers, banks, government payment portals, social media sites, and others) and they all fit in my browser. It is just this one page that demands many more pixels.

Here again, I see the law of minimum requirements in action. The developers for this one particular web site used large displays, and for them the web site was acceptable. They did not consider that other users might have smaller displays, or smaller browser windows.

I do have to point out that building such a web site requires extra effort. HTML is designed to allow for different screen sizes. Browsers are designed to adjust the sizes of screen elements. A page that provides a "fixed format" experience requires extra code to override the behavior that one gets "for free" with HTML. But that's a different issue.

The law of minimum requirements is a thing. The equipment we provide to developers shapes the resulting system, often in ways that are not obvious (until we look back with hindsight). Be aware of the minimum equipment you expect your end-users to use, and test for those configurations early in the development cycle. Doing so can reduce irritation and improve the experience for the customer.

Wednesday, February 3, 2021

The return of timesharing

Timesharing (the style of computing from the 1970s) is making a comeback. Or rather, some of the concepts of timesharing are making a comeback. But first: what was timesharing, and how was it different from the computing that came before it?

Mainframe computers in the 1960s were not the sophisticated devices of today, but simpler computers with roughly the power of an original IBM PC (a processor running at about 5 MHz, and 128KB of RAM). They were, of course, much larger and much more expensive. Early mainframes ran one program at a time (much like PC-DOS). When one program finished, the next could be started. They had one "user": the system operator.

Timesharing was a big change in computing. Computers had become powerful enough to support multiple interactive users at the same time. It worked because interactive users spent most of their time thinking and not typing, so a user's "job" spent most of its time waiting for input. A timesharing system held multiple jobs in memory and cycled among them. Timesharing allowed remote access ("remote" in this case meaning "outside of the computer room") by terminals, which meant that individuals in different parts of an organization could "use the computer".

Timesharing raised the importance of time. Specifically, timesharing raised the importance of the time a program needed to run (the "CPU time") and the time a user was connected. The increase in computing power allowed the operating system to record these values for each session. Tracking those values was important, because it let the organization charge users and departments for the use of the computer. The computer was no longer a monolithic entity with a single (large) price tag, but a resource that could be expensed to different parts of the organization.

Now let's consider cloud computing.

It turns out that the cloud is not infinite. Nor is it free. Cloud computing platforms record charges for users (either individuals or organizations). Platforms charge for computing time, for data storage, and for many other services. Not every platform charges for the same things, with some offering a few services for free. 

The bottom line is the same: with cloud computing, an organization has the ability to "charge back" expenses to individual departments, something that was not so easy in the PC era or the web services era.
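A minimal sketch of what that charge-back might look like, in Python, with invented rates and invented usage records:

    # Total each department's cloud charges from raw usage data.
    from collections import defaultdict

    RATES = {
        "cpu_hours": 0.05,         # dollars per CPU-hour (hypothetical)
        "storage_gb_month": 0.02,  # dollars per GB-month (hypothetical)
    }

    # Hypothetical usage records: (department, resource, quantity)
    usage = [
        ("accounting", "cpu_hours", 1200),
        ("accounting", "storage_gb_month", 500),
        ("marketing", "cpu_hours", 300),
        ("marketing", "storage_gb_month", 2000),
    ]

    charges = defaultdict(float)
    for department, resource, quantity in usage:
        charges[department] += quantity * RATES[resource]

    for department, amount in sorted(charges.items()):
        print(f"{department}: ${amount:,.2f}")
    # accounting: $70.00
    # marketing: $55.00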

Or, to put it another way, we are undergoing a change in billing (and in information about expenses) that is not new, but has not been seen in half a century. How did the introduction of timesharing (and its expense information) affect organizations? Will we see the same effects again?

I think we will.

Timesharing made interactive computing possible, and it made the expense of that computing visible to users. It let users decide how much computing they wanted to use, and gave them the discretion to use more or fewer computing resources.

Cloud computing provides similar information to users. (Or at least to the organizations paying for the cloud services; I expect those organizations will "charge back" those expenses to users.) Users will be able to see those charges and decide how many computing resources they want to use.

As organizations move their systems from web to cloud (and from desktop to cloud), expect to see expense information allocated to the users of those systems. Internal users, and also (possibly) external users (partners and customers).

Timesharing made expense information available at a granular level. Cloud computing does the same.