Thursday, June 25, 2020

A new old web

One idea I have been pondering is a retro version of the world-wide web. This "new old web" would be the web as it originally existed: a collection of static web pages with links to other web pages, either on the same site or on other sites.

The defining aspects of this new old web are what it doesn't have: HTTPS, certificates, cookies, and JavaScript. It would be a simpler version, and an insecure version, of today's web.

Why do this? Why re-create the old web, one that does not have HTTPS and therefore security?

In a word, permanence.

The current web all but requires HTTPS, which in turn requires security certificates, which in turn expire and must be replaced. All of that means that a web site needs maintenance every 12 months, or however often its certificates expire.

What I am considering is a web that lets one set up a web server and leave it running with no maintenance. Perhaps one could apply updates to the operating system and occasionally blow dust out of the box, but that's it. No annual dance for certificates. Maybe one does not even update the operating system.

Why do this? Mostly as a thought experiment. Let's start with these conditions and see where they lead.

This new old web could have web sites that exist for years, or even decades.

Of course, without certificates, one cannot support HTTPS.

Without HTTPS, one cannot transact business. No banking, no credit card statements, and no purchases.

Without HTTPS, one cannot securely log in to a web site, so no personalized web sites. No Facebook, no Twitter, no e-mail.

Such a web would need a new web browser. Current web browsers dislike plain HTTP connections, and warn that the page is insecure. (We may be a few years away from browsers requiring HTTPS for all links and URLs.) With mainstream browsers deprecating HTTP, perhaps we need a new HTTP-only browser.

A new HTTP-only browser would request and load pages over HTTP connections. It would never request an HTTPS connection; a link to an HTTPS URL would be considered ill-formed and invalid.
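
To make the rule concrete, here is a minimal sketch of the fetch logic (in Python, which I will use for all of these sketches; this is an illustration, not code from any actual browser):

    # Sketch of an HTTP-only fetcher. Any scheme other than plain
    # "http" is treated as an ill-formed link and rejected outright.
    import http.client
    from urllib.parse import urlparse

    def fetch(url: str) -> bytes:
        parts = urlparse(url)
        if parts.scheme != "http":
            raise ValueError("ill-formed link (not plain HTTP): " + url)
        connection = http.client.HTTPConnection(parts.netloc)
        connection.request("GET", parts.path or "/")
        return connection.getresponse().read()

A real browser does far more, of course, but the rejection rule itself is this small.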

If I'm building a new browser, I can make other changes.

I banish cookies. With no cookies at all, there can be no third-party cookies and no cookie-based tracking. Overall, this is an enhancement to privacy.

Scripts are also forbidden. No JavaScript, no scripts of any type. The HTML <script> tag must render as text. This eliminates the threat of cross-site scripting.
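
Both of those rules are easy to sketch for the hypothetical browser: discard cookie headers on arrival, and escape <script> elements so they display as literal text.

    import html
    import re

    def discard_cookies(headers: dict) -> dict:
        # The browser keeps no cookie jar; cookie headers are dropped.
        return {name: value for name, value in headers.items()
                if name.lower() not in ("set-cookie", "cookie")}

    def render_scripts_as_text(page: str) -> str:
        # Escape <script>...</script> blocks so they render as visible
        # text rather than executing.
        return re.sub(r"<script\b.*?</script>",
                      lambda match: html.escape(match.group(0)),
                      page, flags=re.IGNORECASE | re.DOTALL)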

Web pages may contain text, HTML, and CSS.

One could use PHP, JSP, ASP or ASPX on the server side to render web pages, although the possible uses may be limited. (Remember, no logins and no user IDs.)
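
As one illustration (a sketch in Python rather than PHP or ASP, to keep all of these examples in one language), a server-rendered page with no logins and no user IDs has nothing more personal to show than the current time:

    # Sketch: server-side rendering without logins or user IDs. Every
    # visitor receives the same page; the only dynamic part is the
    # timestamp.
    from datetime import datetime
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PageHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = ("<html><body><p>Rendered at "
                    + datetime.now().isoformat() + "</p></body></html>")
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode())

    HTTPServer(("", 8000), PageHandler).serve_forever()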

It seems that such a web would be mostly static web pages, serving documents and images. I suppose one could serve videos. One could, of course, link from one page to the next.
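
Serving such a site takes almost nothing. Python's standard library, for instance, will serve a directory of static pages over plain HTTP -- no certificates, nothing to renew:

    # Sketch: an entire "new old web" site server. It serves static
    # files from the current directory over plain HTTP.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()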

My idea is not to replace the existing web. The existing web, while it started as this kind of static web, has evolved into a different thing, one that is quite useful.

My idea is to create a second web, one that exists in parallel to the current web. I would like to try it, just to see what people would do with it. Instead of a web with private areas for private data (e-mail, Facebook, online banking, music that has been purchased, etc.) we would have a web where everything is available to everyone.

How would we act in such a space? What would we create?

That is what I have been pondering.

Thursday, June 18, 2020

Apple is stuck in its own revenue trap

The "Hey" e-mail app got a bit of attention this week. Made by BaseCamp, and published on iOS (and therefore subject to the terms and conditions of the iOS App Store), Apple rejected version 1.0.1, claiming that the app did not meet its guidelines. Two aspects made this rejection notable: version 1.0 was approved (and version 1.0.1 is minimally different), and Apple decided to "clarify" its terms and conditions after many people complained that the app was, in fact, in compliance with the published terms and conditions. (Apple's clarification was that certain rules apply to "business apps" and different rules apply to "consumer apps", and that the "Hey" e-mail app was out of compliance because it did not provide Apple with 30% of its revenue.)

Lost in the noise about the "Apple tax", the clarity of the terms and conditions, and the consistency of rulings on those terms is an aspect of Apple that we may want to ponder.

Apple justifies its 30% cut of in-app revenue by offering the platform and services.

iOS is a capable platform. It does a lot. One can argue that the 30% rate is too high (or too low). One can argue that Apple holds a monopoly on apps for iOS.

I want to think about something else: Apple's model of computing, which allows it to justify the "tax".

Those services assume a specific model of computing. Not cloud computing, not web services, not distributed computing. A model of computing that was dominant in the 1970s (when Apple was founded) and the early 1980s. The model of local computing, of personal computing.

In this model, apps run on phones and applications run on laptops and desktops (and the Mac Pro tower). Apps and applications communicate with the user through the user interface. Everything happens on the local device. For Apple iPhones and MacBooks, computing occurs on those devices.

Compare that model to the model used by Google's Chromebook. In that model, the Chromebook is a simple device that sends requests to servers (the cloud) and simply presents the results. (IT professionals of a certain age will recognize this model as a variant of the 1960s timesharing, or IBM's terminals to mainframes. Both used simple terminals to invoke actions on the remote system.)
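
The essence of that model fits in a few lines. A sketch (the endpoint URL is hypothetical): the device sends the request, the servers do the computing, and the device merely presents the result.

    import json
    import urllib.request

    def run_remote(task: str) -> str:
        # All computing happens on the server; the device only sends
        # the request and displays the reply. The URL is hypothetical.
        request = urllib.request.Request(
            "http://example.com/api/run",
            data=json.dumps({"task": task}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return response.read().decode()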

Back to Apple.

Apple must keep this model of local computing to justify its take of revenue. It cannot move to a Chromebook model; if it did, it would lose its reason for the 30% tax. Developers are angry enough at Apple now, and while some decline to write for the iOS platform, many others "pay the tax", albeit grudgingly.

But what happens when computing moves to the cloud? A cloud-based app does little on the phone. The computing is on the servers. The app, at best, presents a UI and sends requests to the servers. Are a UI and an HTTP stack enough to justify the 30% "tax"? In my opinion, they are not, and therefore Apple must keep apps in the older model of local computing, in which an app uses many services.

Apple has built a nice operating system and platform with its iOS, and it has built a trap with its App Store and 30% revenue cut. Apple is loath to give up that revenue. To keep that revenue, it needs to provide the services that it proudly hawks.

So, as I see it, Apple is stuck. Stuck with local computing, and stuck with a relatively complex platform. I expect Apple, for at least the short to medium term, to stay with this model. That means that apps on iPhone and iPad will stay in the local computing model, which means that they will be complex -- and difficult to support.

In the long run, I think Apple will move to a cloud-based model of computing, but only after everyone else, and only when Apple starts losing business. It will be a difficult transition, and one that may require new management at Apple. Look for a run of quarters with disappointing earnings, and a change in leadership, before Apple changes its App Store policies.

Thursday, June 11, 2020

The computer of Linus Torvalds

My experience as a developer ranges from solo projects to membership on large enterprise teams. That experience has given me various insights about hardware, operating systems, programming languages, teamwork, and management.

One observation combines two of those aspects, hardware and development teams: the minimum hardware requirements for a system are (most likely) the hardware that the developers are using. If you equip developers with top-of-the-line hardware, the delivered system will require top-of-the-line hardware to run acceptably. As a corollary, if you equip developers with mid-range hardware, the delivered system will run acceptably on that level of hardware.

Developers often complain about slow hardware, and point out that top-level hardware is not that expensive -- and may actually reduce expenses, once you factor in the cost of paying developers to wait for slow compiles and tests. That is a valid point, but it loses sight of the larger goal: a system that performs well for a user whose hardware is less than top-of-the-line.

With fast hardware, developers do not see the performance problems. With slower hardware, developers are aware of performance issues, and build a better system. (Or at least one that runs faster.)

Which brings us to Linus Torvalds, the chief developer of the Linux kernel. More specifically, his computer.

A recent article on Slashdot lists the specifications of his new computer. It sounds really nice. Fast. Powerful. And just the sort of machine that could lead Torvalds (and Linux) into the "performance trap". Such a computer will hide performance issues from him. That may send Linux in a direction that lets it run well on high-end hardware, and not so well on lower-end hardware or older systems.

With a high-end system to run and test on, Torvalds will miss the feedback when changes have negative effects on performance on slower hardware. Those changes may work "just fine" on his computer, but not so well on other computers.

I recognize that the development effort of the Linux kernel has a lot of contributors, not all of whom have top-level hardware. Those developers may see performance issues. They may even raise them. But do they have a voice? Will their concerns be heard, and addressed? Or will Torvalds reject the issues as complaints and arrogantly tell those developers to get "real computers" and stop whining? His reputation suggests the latter.

If Torvalds does fall into the "performance trap", it may have significant effects on the future success of Linux. Linux may become "tuned" to high-performance hardware: running acceptably on expensive new systems, but slow and laggy on cheaper or older ones.

That, in turn, may force users of older, slower hardware to re-think their decision to use Linux.