Sunday, February 28, 2021

Apple and Google encourage ad-driven apps

We give much thought to the app stores run by Apple and Google for their mobile platforms, but we overlook one aspect of their "tax" collections: the arrangement provides an incentive for ad-supported apps. I'm concerned about that incentive.

Let's consider the two scenarios:

In both scenarios, I build an app, and release it on the Apple app store and the Google app store. (Or perhaps only one store. It doesn't matter for this discussion.)

In the first scenario, there are no ads in the app. I expect to get revenue from sales of the app, and possibly from subscriptions.

Of course, I will have to pay the Apple and Google taxes for hosting my apps in their stores. If I sell an annual subscription for US$20, Apple and Google will take, say, 15% of that, or $3.
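The arithmetic is simple enough to sketch in Python. The 15% commission rate here is an illustrative assumption; actual rates vary by store and by program, and the function name is my own.

```python
def net_revenue_cents(price_cents, commission_pct):
    """Developer's share of a sale, in cents, after the store's commission.

    Integer cents avoid floating-point rounding surprises.
    """
    fee = price_cents * commission_pct // 100
    return price_cents - fee

# A US$20.00 annual subscription at an assumed 15% commission:
# the store keeps $3.00 and the developer keeps $17.00.
share = net_revenue_cents(2000, 15)
```

At the historical standard rate of 30%, the same sale would net the developer only $14.00, which is the gap the ad-supported model sidesteps entirely.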

Now let's look at the second possibility: A free app, supported by advertisements.

I build my app and submit it to Apple's store and Google's store, and they host it. Since it is a free app, anyone can download it, and I pay no tax to either Apple or Google.

There are ads, and I get revenue from the advertisements. But none of that revenue is taxed by Apple or Google. I keep all of that revenue.

As I see it, that is a powerful incentive for me to create an ad-driven app. This is something that I have not seen discussed anywhere.

Is that a good thing? It may be. Or it may be a bad thing.

Ad revenue is not as predictable as subscription revenue, which means that my finances are less predictable.

Ad revenue changes the customer of an app. The customer of a paid-for app is the user, the person who downloads and installs the app. With an ad-supported app, the customer is not the user, but the entity paying for advertisements. That's a big difference.

Do advertisers have say in the design of apps? Do they influence the look and feel? Do they request specific features? Or the removal of certain features?

I don't know the answers to these questions, which disturbs me.

I don't see anyone talking about this issue either, and that disturbs me more.

Monday, February 22, 2021

What we want from a Silver Bullet

In 1975, Fred Brooks published "The Mythical Man-Month", an important and well-respected book of essays on project management. Project management, especially for IT projects, was troubled by delays, discovered complexities, unexpected costs, and budget overruns. He observed that adding people to a late project makes the project later. That is, counter to standard project management techniques, increasing resources (people) reduces progress. (The reason is that an IT project is complex and requires a high level of understanding and a high level of communication. Adding people to a project adds people who are unfamiliar with the project and therefore ask a lot of questions, question a lot of ideas, and sap time and energy from the veterans of the project.)

Brooks later revisited the subject in his 1986 essay "No Silver Bullet" (later reprinted as a chapter in the anniversary edition of the book). This essay introduced the ideas of essential complexity and accidental complexity. The former is part of the task at hand, something that must be dealt with no matter which tools or techniques we use. The latter is not part of the task, but is instead generated by our techniques, our tools, and the organization of our data. But most people miss (or forget) that part of the essay.

Instead, people latched on to the notion of "silver bullets". The essay used the metaphor of werewolves (the difficulties of managing projects) and silver bullets (the only thing that can slay a werewolf, and therefore a magical solution for project management).

While Brooks argued that there were no silver bullets, the term stuck, and so did the metaphor. We in the management of IT projects have been looking for silver bullets, tools or techniques that will tame those delays, complexities, and unexpected costs.

The metaphor is picturesque and easy to understand (two good qualities in a metaphor) but is, alas, inaccurate (not such a good quality in a metaphor).

I have a different view of projects and "silver bullets" (if I may keep the metaphor for a short time).

We don't want silver bullets, at least not in the general understanding. We think of silver bullets as better tools or better techniques. We think in terms of processors and operating systems and programming languages, of databases and web servers and cloud systems.

But better processors and better operating systems and better programming languages are not silver bullets. In the decades since Brooks wrote "No Silver Bullet", we have seen faster and more powerful processors, better languages, new approaches to data storage, the web, and cloud computing. Yet our projects still suffer from delays, unexpected complexity, and budget overruns.

But my point is not to say that Fred Brooks got it wrong (he didn't, in my opinion) or that his readers focus on the wrong point from his essay (they do, also in my opinion), or that we are wasting time in looking for a silver bullet to kill the problems of project management (we are, foolishly).

No, my point is something different. My point is that we don't really want better processors and programming languages -- at least not in the normal sense.

I think we want something else. I think we want something completely different.

What we want is not a thing, not a positive. We don't want a "better X" just to have a better X. Instead, we want to eliminate something. We want a negation.

We want to eliminate regrets.

We want tools that let us make decisions and also guarantee that we will not regret those decisions.

The challenges of IT projects are almost entirely regrets over choices that we previously made. We regret the time it takes us to write a system in a certain programming language. Or we regret that the programming framework we selected does not allow for certain operations. Or, after building a minimal version of our system, we are disappointed with its performance.

We regret the defects that were written into the system. We regret the design of our database. We regret outsourcing part of the project to a team in a different country, in a different culture and a different time zone.

The silver bullet that we want is something that will eliminate those regrets.

We look to no-code programming tools, thinking that with no code we will have, well, no code to debug or modify. (Technically that is true, but the configuration files for the no-code platform still have to be revised and adjusted, and one can consider that programming in an obscure, non-Turing-complete language.)

We look to NoSQL databases to avoid regrets of database design and complex SQL queries. I'm sure that many people, in 1999, regretted the decision that they (or earlier members of their teams) had made about storing year values in 2-digit form.

But in system architecture, in database design, in programming, there is no avoiding decisions. Every activity involves decisions, and recording those decisions in a form that the computer can use.

Instead of a Silver Bullet, people are looking for the Holy Grail, an object that can erase the consequences of decisions.

I will point out that Agile development methods have helped reduce regrets. Not by preventing them, nor by erasing decisions, but by letting us see the consequences of our decisions quickly, while a change to those decisions is still easy to implement.

Tools such as programming languages and version control systems and data storage systems help us build systems. The tools that let us be most effective are the ones that let us see the results of our decisions quickly. 

Monday, February 15, 2021

Linked lists, dictionaries, and AI

When I was learning the craft of programming, I spent a lot of time learning about data structures (linked lists, trees, and other things). How to create them. How to add a node. How to remove a node. How to find a node. There was a whole class in college about data structures.

At the time, everyone learning computer science learned those data structures. Those data structures were the tools to use when designing and building programs.

Yet now in the 21st century, we don't use them. (At least not directly.)

We use lists and dictionaries. Different languages use different names. C++ calls them 'vectors' and 'maps'. Perl calls them 'lists' and 'hashes'. Ruby calls them ... you get the idea. The names are not important.

What is important is that these data structures are the ones we use. Every modern language implements them. And I must admit that lists and dictionaries are much easier to use than linked lists and balanced trees.

Lists and dictionaries did not come for free, though. They cost more in terms of both execution time and memory. Yet we, as an industry, decided that the cost of lists and dictionaries was worth the benefit (which was less time and effort to write programs).
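To make the contrast concrete, here is a rough sketch in Python: a hand-rolled singly linked list of the kind we once wrote in a data-structures class, next to the built-in list and dictionary that replaced it. (The names `Node`, `prepend`, and `find` are my own, chosen for illustration.)

```python
# The old way: a linked list built by hand, node by node.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def prepend(head, value):
    """Add a value to the front of the list; return the new head."""
    node = Node(value)
    node.next = head
    return node

def find(head, value):
    """Walk the chain of nodes until we find the value (or run out)."""
    while head is not None:
        if head.value == value:
            return head
        head = head.next
    return None

head = None
for v in (3, 2, 1):
    head = prepend(head, v)   # list now reads 1 -> 2 -> 3

# The modern way: built-in lists and dictionaries.
items = [1, 2, 3]
items.insert(0, 0)            # insertion is a single call
lookup = {"one": 1, "two": 2}
value = lookup.get("two")     # lookup is a single call
```

Every line of the `Node` machinery is code we had to write, test, and debug ourselves; the list and dictionary come with the language, already correct.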

What does this have to do with AI?

It strikes me that AI is in a phase equivalent to the 'linked list' phase of programming.

Just as we were convinced, some years ago, that linked lists and trees were the key to programming, we are (today) convinced that our current techniques are the key to AI.

It would not surprise me to find that, in five or ten years, we are using completely different tools for AI.

I don't know what those new tools will be. (If I did, I would be making a small fortune implementing and selling them.)

But just as linked lists and trees morphed into lists and dictionaries with the aid of faster processors and more memory, I think AI tools of today will morph into the tools of tomorrow with better hardware. That better hardware might be faster processors and more memory, or it might be advanced network connections and coordination between processes on different computers, or it might even be better data structures. (The last, technically, is of course not hardware.)

Which doesn't mean we should stop work on AI. It doesn't mean that we should all just sit around and wait for better tools for AI to appear. (If no one is working on AI, then no one will have ideas for better tools.)

We should continue to work on AI. But just as we replaced the code that used older data structures with code that used newer data structures, we should expect to replace early AI techniques with later AI techniques. In other words, the things that we build in AI will be temporary. We can expect to replace them with better tools, better models -- and perhaps not that far off in the future!


Wednesday, February 10, 2021

The equipment you provide to developers

If you're a manager of a software development project, you may want to consider carefully the equipment you provide your developers.

You probably do think about the equipment you provide to your team. Certainly the cost and capabilities, and probably reliability and conformance to your organization's IT standards. Yet there is one aspect of the equipment that you may have overlooked.

That aspect is the minimum hardware required to use your product.

In today's world of cloud-based applications, one rarely thinks about minimum requirements for hardware. Such requirements were more noticeable in the age of Windows desktop applications, and even in PC-DOS applications. Back then, when software was purchased in a box and installed (from CD-ROM or perhaps even floppy disk), minimum requirements were on everyone's mind. It was a necessary factor to review before purchasing software.

Today, personal computers are mostly uniform: 64-bit Intel processor, 4 GB of RAM (possibly more), 500 GB of disk (probably more), and access to the internet. The only variable is screen size. Desktops probably have a 22-inch (or larger) display, and laptops may be anywhere from 11 inches to 17 inches.

The other variable is operating system: Windows (most likely), macOS (less likely), or Linux (a very small probability). But operating systems are easy to identify, and you probably know which ones your shop uses.

Display size is the troublemaker, the one variable that can cause software to work well or to be a source of irritation.

But what does this have to do with the equipment you provide to your developers?

Back in the 1980s, we recognized that developers designed and wrote systems that worked on their computers. That is, the software was built for the capabilities of their computers -- and not smaller or less capable systems. If developers were given workstations with lots of memory and large displays, the software would require lots of memory and a large display. Call this effect the "law of minimum requirements".

The law is still with us.

Last week, someone (I forget who) complained that the latest version of macOS worked well on large displays but seemed "off" on small screens such as an 11-inch MacBook Air. (Apple had apparently tweaked the spacing between graphical elements in its UI.)

My guess is that the law of minimum requirements is at work here. My guess is that Apple provides its developers with large, capable machines: possibly Mac Pro workstations with multiple large displays, perhaps 16-inch MacBook Pro systems with large external displays.

It is quite possible that the developers made changes to the UI and reviewed them on their workstations, and didn't check the experience on smaller equipment. Perhaps no one did. Or, someone may have noticed the issue late in the development cycle, too late to make changes and meet the delivery date. The actual process doesn't really matter. The result is a new version of macOS that works poorly for those with small screens.

Apple isn't alone in this.

A web app that I use has a similar fault. Once I log in, the page for data entry is large -- so large that I must expand my browser to almost the entire size of my display to see everything and to enter data. (In web terms, the page is not "responsive" to screen size.)

It's not that I keep my browser set to a very small window. I visit many different web sites (newspapers, banks, government payment portals, social media sites, and others) and they all fit in my browser. It is just this one page that demands many more pixels.

Here again, I see the law of minimum requirements in action. The developers for this one particular web site used large displays, and for them the web site was acceptable. They did not consider that other users might have smaller displays, or smaller browser windows.

I do have to point out that building such a web site requires effort. HTML is designed to allow for different screen sizes. Browsers are designed to adjust the sizes of screen elements. A screen that provides a "fixed format" experience requires extra code to override the behaviors that one gets "for free" with HTML. But that's a different issue.

The law of minimum requirements is a thing. The equipment we provide to developers shapes the resulting system, often in ways that are not obvious (until we look back with hindsight). Be aware of the minimum equipment you expect your end-users to use, and test for those configurations early in the development cycle. Doing so can reduce irritation and improve the experience for the customer.

Wednesday, February 3, 2021

The return of timesharing

Timesharing (the style of computing from the 1970s) is making a comeback. Or rather, some of the concepts of timesharing are making a comeback. But first: what was timesharing, and how was it different from the computing that preceded it?

Mainframe computers in the 1960s were not the sophisticated devices of today, but simpler computers with roughly the power of an original IBM PC (a processor running at a few megahertz, perhaps 128 KB of RAM). They were, of course, much larger and much more expensive. Early mainframes ran one program at a time (much like PC-DOS). When one program finished, the next could be started. They had one "user": the system operator.

Timesharing was a big change in computing. Computers had become powerful enough to support multiple interactive users at the same time. It worked because interactive users spent most of their time thinking and not typing, so a user's "job" was mostly waiting for input. A timesharing system held multiple jobs in memory and cycled among them. Timesharing allowed remote access ("remote" in this case meaning "outside of the computer room") by terminals, which meant that individuals in different parts of an organization could "use the computer".

Timesharing raised the importance of time. Specifically, timesharing raised the importance of the time a program needed to run (the "CPU time") and the time a user was connected. The increase in computing power allowed the operating system to record these values for each session. Tracking those values was important, because it let the organization charge users and departments for the use of the computer. The computer was no longer a monolithic entity with a single (large) price tag, but a resource that could be expensed to different parts of the organization.

Now let's consider cloud computing.

It turns out that the cloud is not infinite. Nor is it free. Cloud computing platforms record charges for users (either individuals or organizations). Platforms charge for computing time, for data storage, and for many other services. Not every platform charges for the same things, with some offering a few services for free. 
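A minimal sketch of the "charge back" idea, in Python. The rates, metric names, and usage records here are invented for illustration; real cloud billing meters far more services, at rates set by the platform.

```python
# Hypothetical metered rates, in dollars per unit (invented for illustration).
RATES = {"cpu_hours": 0.05, "gb_stored": 0.02}

# Hypothetical usage records, as a billing system might collect them.
usage = [
    {"dept": "sales",   "cpu_hours": 120, "gb_stored": 50},
    {"dept": "finance", "cpu_hours": 300, "gb_stored": 10},
    {"dept": "sales",   "cpu_hours": 40,  "gb_stored": 0},
]

def charge_back(usage, rates):
    """Total each department's bill from its metered usage."""
    bills = {}
    for record in usage:
        cost = sum(record[metric] * rate for metric, rate in rates.items())
        bills[record["dept"]] = bills.get(record["dept"], 0.0) + cost
    return bills

bills = charge_back(usage, RATES)
```

The point is not the code but the visibility: once usage is metered per department, the computing bill stops being one monolithic number and becomes an expense each department can see and manage, just as it was under timesharing.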

The bottom line is the same: with cloud computing, an organization has the ability to "charge back" expenses to individual departments, something that was not so easy in the PC era or the web services era.

Or, to put it another way, we are undergoing a change in billing (and in information about expenses) that is not new, but has not been seen in half a century. How did the introduction of timesharing (and its expense information) affect organizations? Will we see the same effects again?

I think we will.

Timesharing made interactive computing possible, and it made the expense of that computing visible to users. It let users decide how much computing they wanted to use, and gave them the discretion to use more or fewer computing resources.

Cloud computing provides similar information to users. (Or at least the organizations paying for the cloud services; I expect those organizations will "charge back" those expenses to users.) Users will be able to see those charges and decide how much computing resources they want to use.

As organizations move their systems from web to cloud (and from desktop to cloud), expect to see expense information allocated to the users of those systems. Internal users, and also (possibly) external users (partners and customers).

Timesharing made expense information available at a granular level. Cloud computing does the same.


Tuesday, January 26, 2021

Side projects for corporations

We're familiar with the notion of a "side project" -- a project that is not related to our day job, but one that we work on anyway. One can have a number of reasons for the side project, including learning a new technology, building a solution for the household, or just plain fun. (There's also the notion of a "side gig", which is a second job and might also be called "moonlighting", but that's a different idea.)

People have side projects. Do corporations? I think that they do.

Corporations, depending on the industry, can have lots of projects, a few projects, or almost none. IT consulting firms run on projects -- it's what they do for their customers. Doctors' offices tend to have very few IT projects. Regardless of industry, a company will probably use technology of some kind and will have some projects, at least from time to time.

For a corporation, a side project is one that does not directly affect the services to customers. For an IT consulting company, a side project may be an upgrade to the time-recording or invoicing system. For a doctors' office, it might be an upgrade to desktop PCs or a replacement for the appointment system. (Appointments are quite important to a doctors' office, and tracking them and the people involved is an important task.)

How does a company handle side projects? How do they fund the project? How do they staff it? How do they measure performance?

It seems to me that there are two approaches to a side project. One is to treat it like any other project. For an IT consulting company, that means defining the objectives, specifying requirements, allocating a budget, assigning people, and executing the project.

The other approach is to ignore the project as much as possible. Define objectives, and assign a manager to complete the project, but allocate a minimal budget and don't assign people. Let the manager scrounge equipment and people from other projects. In other words, treat the project as a necessary evil.

The latter approach is used all too often, most likely because it reduces costs. Or appears to reduce costs. When the project manager "borrows" people from other projects, those work hours are quite often charged back to the original projects. The project itself is completed as quickly as possible, often resulting in a solution that is poorly designed and poorly implemented. Normal project management procedures (such as testing) can be truncated or eliminated. For an application that will be used internally, that poor design and poor implementation will be inflicted upon people within the company, perhaps a large number of people.

The worst aspect of treating a project as a necessary evil, with short budgets and shortcuts in processes, is that it sends the message that such projects are allowed (and possibly encouraged). It sends the message that it is okay to cut corners and deliver a project with low quality. The risk is that the shortcuts will be applied to not only internal projects (side projects) but also the "real" projects, resulting in poor quality for customers. (Performance will vary, of course, from manager to manager.)

The former approach is, in my mind, a healthier one. It recognizes the costs of a project and justifies them. It provides an honest analysis of benefits for the investment.

A side project can be an opportunity for employees to gain experience in new technology or new roles in the organization (such as project manager). As an internal project, the risk of failure is lower than the risk of a failure on a project for a customer.

Also, side projects can be showpieces for prospective customers. One usually cannot show the details of a project for one customer to another customer, but internal projects are different. For an internal project, the documents and results can be freely shared with others.

A portfolio of successful projects, with objectives, initial estimates, budgets, schedules, earned-work metrics, test plans, test results, and implementations can be a powerful sales tool.

There is, of course, the possibility that an internal project will not be successful. It may overrun the schedule or the budget (or both). It may deliver the wrong solution. It may run into unforeseen technical difficulties such that it cannot be completed. Such a project is a learning experience, but perhaps not a project to put in a portfolio. (Unless it occurred early in the company history, and you have a lot of successful projects after the failed project. That could show improvement in your management.)

For companies that are not in the IT consulting industry, a portfolio of projects is not necessary. Those companies deliver products or services different from IT. Yet side projects are still opportunities for employees to develop new skills and work with different parts of the company.

We can consider side projects necessary evils or opportunities. I think the latter offers more benefits.

Monday, January 11, 2021

Predictions for 2021

It's January, the start of a new year. Let's have some predictions!

We can start with easy predictions:

Remote work: Telework and online meetings will continue to be popular. Covid-19 is still with us; work-from-home will remain a necessity for many people.

There will be some push-back against remote work; some from managers and some from employees. In my view, discomfort with remote work is a symptom that the right performance measurements are not in place.

If easy, simple, and clear performance measurements are available, then it is easy to see that an employee is performing -- or not performing -- whether they work in the office, at home, or at a different location. But in our imperfect world, there are many jobs that do not have easy, simple, and clear performance measurements.

Companies have an opportunity to review performance measurements and improve them to match telework. Will they? A minority, I think, will.

ARM processors: The new processors from Apple will be hailed as a significant advance in performance, and Microsoft will be pressured to increase support for ARM processors. Most of the interest in ARM processors is driven by raw performance.

Improving the performance of desktop computers may be a mistake -- except for Apple. Computing -- except for Apple -- long ago moved from the desktop to web servers, and is now moving to cloud servers. The desktop is not the center of computing; it is more akin to a terminal in a timesharing system. Improving the desktop's performance does not help the central processor.

Apple is an exception. It continues to use the model of desktop applications, where processing occurs on the local PC (or the tablet, or the watch). For Apple, improving the processing capabilities of the local PC makes sense.

Programming languages: The general trend of language popularity will continue; that is, most languages will decline in popularity. Expect declines especially in Perl and Java: Perl because Python is gaining at its expense, and Java because Oracle is doing little to help the development communities (both commercial and open source).

I expect little in the way of improvements to Objective-C compilers and libraries. Apple will encourage people to move to Swift, and neglecting Objective-C is one part of that strategy.

I expect to see a large number of new programming languages, made by individuals and research groups, and perhaps even companies, but none of them will become popular. We have our hands full with the current set of languages; new languages will be curiosities. An exception might be a programming language that is designed for cloud computing. Such a language would be designed for smaller programs, most likely not object-oriented, easy to learn, and capable of receiving and sending web requests.

Corporate migrations: We've already seen companies leave California for Texas. Will other companies follow? I tend to think that these migrations were made possible by the work-from-home forced upon us by Covid-19. Regardless, companies have decided that they can leave Silicon Valley.

Companies may leave the condensed area of Silicon Valley to form a new condensed area somewhere else -- or they may disperse across the country. (And some may even leave the United States.) The result will be a shift in employment, as some companies will ask employees to follow to new headquarters and others will let employees work from any location. Some companies may move only the executives, others may ask the entire roster of employees to move, and some may move select departments and keep others in their current locations.

I expect that there will be no new "Silicon Valley", no one place that attracts tech companies (much less all of the companies). Individual companies will move to locations that make sense for each company. That could be Austin, TX; Miami, FL; Phoenix, AZ; or any of a number of smaller cities.

Legislation will have major effects: An obvious area of legislation will be the DMCA (for copyright issues) and CDA section 230 (which shields platforms from liability of content posted by users). Other issues will be digital taxes (or more precisely, taxes on the providers of digital services), anti-trust, and privacy concerns. Any of those could become a hot issue in 2021.

Politics will remain significant: Already we have seen Twitter, Facebook, and Amazon take action against users who violate their terms of service. These actions have seen reactions and pushback from other users, loyal to the original offenders. The social and political environment will make it difficult for companies to remain neutral. Companies which move their offices will be judged on the move, both in the origin (where they move from) and the destination (where they move to).

Summary: Changes for 2021 will be driven more by non-tech issues than tech issues. New versions of Java or C# programming languages will have little effect on the market. New cloud services and new versions of operating systems will have little effect. Politics, legislation, and possibly high-profile lawsuits will drive the changes.