Tuesday, September 28, 2021

Chromebooks

Google has the Chromebook, a lightweight laptop that runs Chrome (and nothing else).

Why is Google the only supplier of Chromebooks? Or more specifically, why is it that only Chrome has this arrangement? Why is there no lightweight laptop for Windows that runs only Edge (and perhaps Remote Desktop and PowerShell and nothing else)? Why is there no lightweight laptop that runs Apple's Safari browser (and nothing else)? Why is there no lightweight laptop that runs Firefox (and nothing else)? I recognize that hardware manufacturers, in coordination with Google, provide the Chromebook. Therefore, technically, Lenovo and Dell and Samsung (and others) are suppliers of the Chromebook. But you know what I mean.

Competitors to the Chromebook need three things: the browser, the operating system, and the hardware. None are trivial. All must work together.

Google has succeeded in building the complete stack (hardware, operating system, and browser) and also provides web-based applications. Users of Chromebooks have the tools and they have destinations.

Can Microsoft build an equivalent stack? The apparent answer is "no". Microsoft tried first with the original Surface tablet (the "Surface RT"), second with "Windows S mode", and third with "Windows 10X". (None of these was quite equivalent to the Chromebook: they ran more than just the browser, but only a subset of Windows applications.) The first two were rejected by customers; the last was killed before it was released. Windows 11, with its requirements for powerful processors, will not be available for an inexpensive, lightweight, browser-centric experience. I doubt that Microsoft will introduce a new operating system (or maintain a slimmed-down version of Windows 10) for a low-margin market.

Can Apple build a lightweight laptop that runs only a browser? I think the strict answer is "yes". Apple has the technical talent to build such a stack. I'm not sure that Apple could convince users to switch from their current hardware to a lightweight laptop (a "SafariBook"?). I am confident that Apple makes more money by selling the current hardware (and apps to run on that hardware), so they have no incentive to switch customers to a Chromebook-like laptop. So while Apple could build and sell a "SafariBook", they won't. There is more profit in heavyweight laptops.

Mozilla is in a poor position to design and sell a browser-oriented laptop (a "MozillaBook"?). They have the browser, but not the operating system or the hardware. They need manufacturers such as Dell or Samsung to build the hardware, and those manufacturers may decline, fearing Google's wrath. They may be able to leverage Linux for an operating system, much as Google did, but it would be a significant (read that as "expensive") effort.

The makers of other browsers face a harder challenge than Mozilla faces. Not only do they need the operating system and the hardware; their browsers also have tiny market shares. Assuming that the expected customer base for a boutique-browser laptop would be their current user base, the development costs for a laptop would be difficult to amortize over the units sold. Laptops with the Opera browser, for example, would be more expensive than the typical Chromebook.

Amazon has done impressive things with its Kindle book readers and Fire tablets, but has not introduced a Chromebook-like laptop. That is probably because its book readers and tablets lock users into the Amazon system for purchases of books and music, and that lock-in is not possible with a browser.

So my conclusion is that we're stuck with Google Chromebooks, with no hope for a competing product. Our choice is a Chromebook or a laptop with a full-sized operating system. My depressing forecast is that we will never see a competitor to the Chromebook.

I would be happy to be proven wrong.


Tuesday, September 21, 2021

Functional programming requires different discipline

I am worried about the programming industry. Specifically, I fear that we may have trapped ourselves in the object-oriented programming paradigm. I can explain, but it will take some time. Please bear with me.

Paul Graham has written several articles, most of them (if not all of them) thoughtful and thought-provoking. Some of them discuss programming languages. One of them (I cannot find it now) discussed the idea of programming languages and how different languages hold different constructs. If I remember correctly, Graham posited that older programming languages were simpler and modern languages had constructs that were not available in older languages.

If you "stacked" the languages from oldest (on the bottom) to newest (on the top) then programmers could pick their favorite language and look up (at newer languages) and down (at older languages). Graham commented that from such a view, the older languages made sense, because one's favorite language (from which one looked down) contained the things in the older languages. Older languages had constructs that were identical or similar to your language. They didn't have all of the constructs, but the ones that they did have were recognizable.

Conversely (Graham continued), looking up the stack at newer languages was less clear. Those newer languages had constructs and ideas that weren't in your favorite language, and weren't known to you. They looked weird.

I'm not sure I agree completely with this idea. If we start with a simple stack of assembly language, FORTRAN, COBOL, BASIC, Pascal, C, C++, and Java, we can see that yes, later languages have features that are not available in earlier languages. Pascal lets you define records and sets, which are not available in BASIC or FORTRAN. C++ and Java have classes and allow overloading of functions. But the view from new to old is not always clear. BASIC had GOTO, which has all but been removed from modern languages. COBOL had level-88 values, which do not exist in newer languages. FORTRAN-66 had the arithmetic IF statement with its three-way branching, which looks weird from any other language, old or new. The simple rule "old is familiar and newer is strange" doesn't hold.

Instead of programming languages, perhaps it is better to think about programming paradigms. These are broader and not tied to a single language. I can see four major programming paradigms: simple procedural programming, structured programming, object-oriented programming, and functional programming. We developed these paradigms in that sequence, with procedural programming in the 1960s (FORTRAN, COBOL, BASIC), structured programming in the 1970s (Pascal, C), object-oriented programming in the 1990s (C++, Java, C#), and functional programming in the 2000s (Haskell, Erlang). [The lists of languages are not complete, and the years reflect the popularity of the languages, not the initial design.]

My personal history with programming spans all of these paradigms. Over time, I moved from one paradigm to another. I started with simple procedural programming (BASIC, FORTRAN) and later moved to structured programming (Pascal, C). That transition was fairly easy. I understood the concepts and had only to learn the discipline to apply them.

Later, I moved to object-oriented programming (C++, Java, C#). Changing from structured programming to object-oriented programming was more difficult. That may have been due to age (I was fifteen years older) or perhaps the learning method (I learned structured programming in a classroom; object-oriented programming I learned on my own). It may be that object-oriented programming is that much harder than structured programming. But learn it I did.

Recently I have looked into functional programming, but only out of curiosity. I'm not using functional programming for any serious projects. (At least not yet.)

Notice that my progress was moving up the stack of programming paradigms. I started with the simplest paradigm (procedural) and moved to structured programming and then to object-oriented programming.

Functional programming is different from the other programming paradigms. Procedural programming is the base, upon which is built structured programming. Object-oriented programming is built on top of structured programming. The three of them make a nice little tower.

But functional programming is not an extension of object-oriented programming. It doesn't include the concepts of OOP (data-hiding and polymorphism). You don't build functional programs out of classes. (I recognize that there are programming languages that combine functional programming and object-oriented programming, but they are hybrids. More on them later.)

Functional programming, in my view, is an alternative to structured programming. It sits by the side of our tower of paradigms, not on top. Just as structured programming imposed discipline on procedural programming, functional programming imposes discipline on procedural programming -- but a different set of rules.
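
To make that difference in discipline concrete, here is a small, hypothetical sketch in Python (chosen only because it is widely taught and supports both styles; the function names are invented for illustration). The first version reflects the loop-and-mutation habits of procedural and structured code; the second follows functional rules: small pure functions, no reassignment, no side effects.

    from functools import reduce

    # Structured/procedural habit: a loop that updates local state in place.
    def total_with_tax_loop(prices, tax_rate):
        total = 0.0
        for price in prices:
            total += price * (1 + tax_rate)   # accumulate by mutating 'total'
        return total

    # Functional discipline: small pure functions and no mutation; data flows
    # through map and reduce instead of being updated in place.
    def with_tax(price, tax_rate):
        return price * (1 + tax_rate)         # pure: result depends only on inputs

    def total_with_tax_functional(prices, tax_rate):
        taxed = map(lambda p: with_tax(p, tax_rate), prices)
        return reduce(lambda acc, p: acc + p, taxed, 0.0)

    prices = [10.0, 20.0, 5.0]
    print(total_with_tax_loop(prices, 0.05))        # 36.75
    print(total_with_tax_functional(prices, 0.05))  # 36.75

Both versions compute the same number. What changes is the set of rules the programmer agrees to follow, and that set of rules -- not a new syntax -- is the discipline that has to be learned.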

To transition from object-oriented programming to functional programming, one has to forget object-oriented programming, and then forget the discipline of structured programming, and then -- and only then -- can one learn the discipline of functional programming. (The hybrid of OOP and FP is another level on top of functional programming, much as OOP is another level on top of structured programming. We're building another tower of programming paradigms, with procedural at the bottom, then functional programming, and then object-oriented/functional programming on top.)
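
A hypothetical sketch of that hybrid level, again in Python: a class still groups the data, but its methods follow functional rules -- they never modify the object, they return new instances instead. (The Account class here is invented purely for illustration.)

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)            # frozen: instances cannot be modified after creation
    class Account:
        owner: str
        balance: float

        def deposit(self, amount: float) -> "Account":
            # no mutation: build and return a new Account with the updated balance
            return replace(self, balance=self.balance + amount)

    a = Account("Alice", 100.0)
    b = a.deposit(25.0)
    print(a.balance, b.balance)        # 100.0 125.0 -- the original is unchanged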

Let's get back to the transition from object-oriented programming to functional programming, by forgetting object-oriented programming and structured programming and building functional programming on top of procedural programming.

That transition is difficult, but not impossible. At least, not for me, because of the sequence in which I learned programming. I started with procedural programming, and then added structured programming and then object-oriented programming. In my brain, I can ignore the later additions and recall pure procedural programming. (I can still write programs in BASIC.)

But that is me. I suspect the same holds for other programmers who followed that same path.

What about other programmers?

For the past two decades, programmers have been taught object-oriented programming, and nothing else. They were not taught procedural programming, followed by structured programming, followed by object-oriented programming. No indeed. The primary programming language for learning was C++, later replaced by Java, later replaced by C#, and today Python (and possibly JavaScript).

We have been teaching people how to write object-oriented programs, and nothing else. That's going to make the transition to functional programming extremely difficult. Modern-day programmers, who learned only object-oriented programming, cannot simply forget "the new bits" and revert to the old style of procedural programming.

One cannot simply jump from an object-oriented programming language into a functional programming language. Programmers who know object-oriented programming and nothing else cannot "forget the new bits and fall back on the old style", because everything, to them, is connected. Switching to functional programming is a complete reset. They have to forget everything (well, almost everything) and start over.

That's a big task.

I'm not saying that functional programming is wrong, or that we shouldn't use it. Nor am I saying that only the really smart programmers should attempt to move from object-oriented programming to functional programming.

My point is that the transition requires one to unlearn a lot of techniques, techniques that have become habits. The transition is a large change, and will require time, effort, and patience.

Friday, September 17, 2021

An iPad is not a Chromebook

Apple announced some new products this week. With the announcements, Apple made some claims about performance.

Apple compared the iPad to the Chromebook. Specifically, Apple claimed that the performance of the iPad was superior to the Chromebook.

The comparison is a misdirection. It doesn't make sense to compare an iPad to a Chromebook. And with changes in the Windows world, it doesn't make sense to compare MacBooks to Windows laptops, either.

Here's why: Apple tablets and Chromebooks use two different models of computing. Apple designs its applications to run on the local device. (Apple raised this point in their presentation, to emphasize privacy.) Chromebooks are designed to run apps on the web. For iPhones and iPads, the capabilities of the local processor are important. Apple needs hefty processors in its phones and tablets. For Chromebooks, it's more important to look at the cloud servers and the network connection. Google needs hefty processors, but in the servers. The processor in the Chromebook need only be powerful enough to send and receive data, and to render the screen image.
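
A rough way to picture the two models is this hypothetical Python sketch (the server URL and endpoint are invented for illustration, and the third-party 'requests' package is assumed to be installed). In the first function the device does the computation itself; in the second it only sends data and displays whatever the server returns.

    import requests

    # Local-processing model (iPhone/iPad style): the device does the heavy work itself.
    def summarize_locally(numbers):
        return {"count": len(numbers), "mean": sum(numbers) / len(numbers)}

    # Thin-client model (Chromebook style): the device sends the data to a server,
    # which does the heavy work; the device needs only enough power to send,
    # receive, and display the reply.
    def summarize_on_server(numbers):
        response = requests.post("https://example.com/api/summarize",  # hypothetical endpoint
                                 json={"numbers": numbers})
        return response.json()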

Google isn't alone in moving processing to web servers (or cloud servers). Microsoft is doing the same thing with its Office products and applications such as Teams. The computing model for Windows started as local processing, back in the 1990s. Today, some processing occurs on the local system and some processing occurs in the cloud. 

More and more, comparing Apple laptops to Windows laptops and comparing Apple phones to Android phones is comparing apples to oranges. (If you can forgive the pun.)

The difference in computing models guides the design for hardware. Apple has to develop fast processors for its tablets and laptops -- all of the processing occurs there. Microsoft and Google don't have the same pressure, because they can shift heavy processing to cloud servers. Google shifts almost all processing to servers, and Microsoft is gradually redesigning its applications to take advantage of cloud servers. The result is that Microsoft and Google don't need superfast processors in laptops and tablets. (Some buyers of Windows PCs, especially gamers, may seek the fastest processor, but for the average tasks, the processor is unimportant.)

I'm a little confused by Apple's comparison of its new processors to Chromebooks. Apple made a big point about it, but it doesn't make sense. A better comparison would be Apple comparing the new phone and tablet to previous generations of Apple's iPhone and iPad.

Unless -- Perhaps the new processors in Apple's latest tablet and phone are not that much faster than the previous processors. Perhaps the new A15 processor is mildly faster than the A14. In that case, the comparison between A14 and A15 would be... unimpressive. It might be that comparing the new iPad to a Chromebook makes for better marketing. Apple can throw out impressive-sounding factoids such as "30% faster than the most popular Chromebook".

I'm sure that journalists and Apple enthusiasts will compare the new iPhones and iPads to their predecessors and report the results. We should see those results shortly after the new iPhones and iPads become available.

What if the third-party, apples-to-apples comparisons show that the new phones and tablets are only slightly faster than the previous generation? Should we abandon Apple equipment? Not at all. Apple equipment is still well-designed and well-engineered. But we should take a more skeptical view of the information that Apple provides in its hour-long "event" advertisements.

Looking ahead, we can expect a similar hour-long event (advertisement) for the next generation of MacBook laptops and Mac desktops. As with phones and tablets, comparing the performance of a MacBook to a Windows laptop is not always meaningful. (Especially if the Windows laptop uses a lot of server-based apps, or virtual workstations.) If Apple touts the performance of the new MacBook against a Windows laptop (or worse, a Chromebook) then I expect the performance improvement of the new Macs will be... unimpressive. On the other hand, if Apple compares apples to apples, and provides impressive performance comparisons of new MacBooks against older MacBooks, then we can be reasonably certain that the new Macs perform much better.

Let's see what happens at the next Apple advertising event.

Thursday, September 9, 2021

Remote work and employer credibility

Companies (many of them) want employees to return to the office. Employees (many of them) want to continue working from home. There is a danger here, because managers and workers are looking at different things, and that difference can lead to a loss of confidence and even a loss of credibility.

Managers see little reason to delay returning to the office. (At least, that was the case before the latest wave of Covid from the "delta" variant.) A common reason given is to "improve productivity". There may be some gains with people working in an office, but the argument is thin. I suspect senior managers, knowing that the accountants are looking at expenses, realize that they risk losing buildings and office space. (If the office is empty, why continue to pay for it?) But let's go with the "improve productivity" argument.

Employees feel differently. They think that they are more productive working from home. Not having commutes in the morning and evening helps this perception. Shifting back to the office means that employees will have to wake earlier, drive (or take the bus) to the office, possibly pay tolls or parking, and then make the same trip in the evening. They lose perhaps two hours each day to the commute. Their productivity drops, as they would be doing the same work but in ten hours, not eight.

So when managers say "everyone must come back to the office" and employees ask "why" and managers say "for productivity", there is a definite discrepancy. Employers may see a modest gain in productivity (and a possible reduction in expense as they cancel contracts for conferencing software), but employees see a large decrease in productivity (due to commute times) and an increase in expenses (gasoline, transit fare, tolls, lunches). Employees also have to wear nicer clothes -- no more t-shirts and torn jeans!

I suspect that few senior managers are considering the change from an employee's point of view. Most probably think that "going back to work-in-the-office" is easy, as employees were doing this prior to the Covid-19 pandemic.

I also suspect that few employees are thinking in terms of economics, and instead simply have the feeling that work is better and more productive with "work-from-home".

The discrepancy about productivity remains, whether we analyze it via numbers or via emotions. And that discrepancy is a problem. Employers claim that "work in the office" gives better productivity, and employees think "work from home" gives better productivity. When senior managers call workers back to the office and claim "higher productivity in the office", employees don't believe them.

The worst position a management team could take is probably the position of "we can't operate with employees in remote locations". That is patently false, as the company has been doing just that for more than a year.

But the softer "improved productivity" argument also has problems. Employees don't see it that way, and once employees don't believe one thing senior managers claim, they start to question everything that senior managers claim.

I think that managers can avoid this loss of credibility. And I think it is managers, not employees, who must take the steps to avoid the problem.

First, managers must recognize that they are asking employees to shoulder the costs of commuting to the office. They must also recognize that while employees were willing to bear these costs prior to the pandemic, they did not have to pay them while working from home, and asking them to come to the office means paying those costs again. The change from work-from-home to work-in-the-office is not cost-free to employees.

Second, managers should recognize that some employees are more productive while working from home, and working from home can be an effective way to contribute to the company. Some employees may welcome a return to the office, and may be more productive there. That does not necessarily hold for all employees. If managers care about productivity, they will work with employees and craft policies that maximize productivity.

Third, managers must recognize that business relationships, including employer-employee relationships, are built on trust. They must act in ways that establish and nurture that trust. They must maintain credibility. If they don't, they face a significant loss of productivity, either through attrition or through apathy or even hostility.

It's nice to think that we can simply go back to the way things were before Covid-19. The world has changed, and we cannot go back. We must move forward, with the knowledge that work-from-home has given us.

Monday, September 6, 2021

Employee productivity and remote work

The first question is: how does one know that employees, working remotely, are in fact working?

It's not a trivial question, and the answer is far from simple. Too many managers, I suspect, think that their remote employees are "goofing off" during the time they claim to be working. Some managers may believe that employees are working only part time (yet getting paid for a full-time job). A few managers may suspect their employees of working multiple jobs at the same time.

It is probable that all of the above scenarios are true.

The second and third questions are: Do managers care? And should managers care?

Managers probably do care. One can hear the complaints: "Employees are goofing off!" or "Employees are not working when they should be!". Or the shrill: "How dare my employees work for someone else!"

All of these complaints condense into the statement "That's not fair!"

Managers and companies have, naturally, installed mechanisms to prevent such abuse of remote work. They monitor employees via webcam, or -- in a particularly Orwellian turn -- use AI to monitor employees via webcam.

It strikes me that these measures assume that good results (employees working) can be obtained by eliminating bad behaviors (employees walking the dog or making a cup of coffee). They subtract the bad and assume that what remains is good. Theoretically, that adds up.

But do managers want merely an employee sitting at a computer for an eight-hour shift? (And doing nothing else?) If so, I am willing to work for such companies. I am quite capable of sitting at a computer for hours. I can even appear to be busy, typing and clicking.

Managers will answer that they want more than that. Managers will (rightly) say that they need results, actual work from the employee that provides value to the company.

Can managers quantify these results? Can they provide hard numbers of what they expect employees to do? For some jobs, yes. (Call centers rate employees by number of calls taken and resolved, for example.) Other jobs are less easy to measure. (Programming, for example. Does one measure lines of code? Number of stories? Number of stories weighted by size? Does quality of the code count? Number of defects found during testing?)

It's easy to take the approach of "remove the bad behaviors and hope for the best". One can purchase tools to monitor employees. (It's probably easy to justify such purchases to budget committees.)

But perhaps some of the effort and expense of monitoring bad behavior could be redirected to measuring good results. Business is, after all, about results. If Programmer A produces the same results as Programmer B, but in 75 percent of the time, why not let Programmer A have some time to make coffee?

Another thought:

Many employees in the tech world are paid a salary. Compensation by salary (as opposed to hourly wages) implies that the employee has some discretion over their work.

If employers push the monitoring tools, employees may decide that their compensation should be hourly, instead of salary. Companies won't like that arrangement, as they have been using salaried compensation to extract unpaid overtime from workers. But workers have some say in the issue. If employers focus on hours worked, then employees will focus on hours worked.