Thursday, September 9, 2021

Remote work and employer credibility

Companies (many of them) want employees to return to the office. Employees (many of them) want to continue working from home. There is a danger here, because managers and workers are looking at different things, and that difference can lead to a loss of confidence and even a loss of credibility.

Managers see little reason to delay returning to the office. (This was before the latest wave of Covid from the "delta" variant.) A common reason given is to "improve productivity". There may be some gains with people working in an office, but the argument is lacking. I suspect senior managers, aware that the accountants are looking at expenses, know that they risk losing buildings and office space. (If the office is empty, why continue to pay for it?) But let's go with the "improve productivity" argument.

Employees feel differently. They think that they are more productive working from home. Not having commutes in the morning and evening helps this perception. Shifting back to the office means that employees will have to wake earlier, drive (or take the bus) to the office, possibly pay tolls or parking, and then make the same trip in the evening. They lose perhaps two hours each day to the commute. Their productivity drops, as they would be doing the same work but in ten hours, not eight.
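The employee's arithmetic is simple enough to sketch. A quick back-of-the-envelope calculation (the numbers here -- eight work hours, two commute hours -- are illustrative, not survey data):

```python
# Back-of-the-envelope arithmetic for the employee's view of a commute.
# Eight hours of work now costs ten hours of the day.

work_hours = 8
commute_hours = 2

# Same output, but the day now costs work + commute hours.
effective_productivity = work_hours / (work_hours + commute_hours)
print(f"{effective_productivity:.0%}")  # 80%
```

By this (admittedly rough) measure, the employee sees a twenty percent drop, before counting gasoline, tolls, or parking.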

So when managers say "everyone must come back to the office" and employees ask "why" and managers say "for productivity" there is a definite discrepancy. Employers may see a modest gain in productivity (and a possible reduction in expense as they cancel contracts for conferencing software) but employees see a large decrease in productivity (due to commute times) and an increase in expenses (gasoline, transit fare, tolls, lunches). Employees also have to wear nicer clothes -- no more t-shirts and torn jeans!

I suspect that few senior managers are considering the change from an employee's point of view. Most probably think that "going back to work-in-the-office" is easy, as employees were doing this prior to the Covid-19 pandemic.

I also suspect that few employees are thinking in terms of economics, and instead simply have the feeling that work is better and more productive with "work-from-home".

The discrepancy about productivity remains, whether we analyze it via numbers or via emotions. And that discrepancy is a problem. Employers claim that "work in the office" gives better productivity, and employees think "work from home" gives the better productivity. When senior managers call workers back to the office and claim "higher productivity in the office", employees don't believe them.

The worst position a management team could take is probably the position of "we can't operate with employees in remote locations". That is patently false, as the company has been doing just that for more than a year.

But the softer "improved productivity" argument also has problems. Employees don't see it that way, and once employees don't believe one thing senior managers claim, they start to question everything that senior managers claim.

I think that managers can avoid this loss of credibility. I think it is managers that must take steps to avoid the problem, not employees.

First, managers must recognize that they are asking employees to shoulder the costs of commuting to the office. They must also recognize that while employees were willing to bear these costs prior to the pandemic, they did not have to pay them while working from home, and asking them to come to the office means paying those costs again. The change from work-from-home to work-in-the-office is not cost-free to employees.

Second, managers should recognize that some employees are more productive while working from home, and working from home can be an effective way to contribute to the company. Some employees may welcome a return to the office, and may be more productive there. That does not necessarily hold for all employees. If managers care about productivity, they will work with employees and craft policies that maximize productivity.

Third, managers must recognize that business relationships, including employer-employee relationships, are built on trust. They must act in ways that establish and nurture that trust. They must maintain credibility. If they don't, they face a significant loss of productivity, either through attrition or through apathy or even hostility.

It's nice to think that we can simply go back to the way things were before Covid-19. The world has changed, and we cannot go back. We must move forward, with the knowledge that work-from-home has given us.

Monday, September 6, 2021

Employee productivity and remote work

The first question is: how does one know that employees, working remotely, are in fact working?

It's not a trivial question, and the answer is far from simple. Too many managers, I suspect, think that their remote employees are "goofing off" during the time they claim to be working. Some managers may believe that employees are working only part time (yet getting paid for a full time job). A few managers may suspect their employees of working multiple jobs at the same time.

It is probable that all of the above scenarios are true.

The second and third questions are: Do managers care? And should managers care?

Managers probably do care. One can hear the complaints: "Employees are goofing off!" or "Employees are not working when they should be!". Or the shrill: "How dare my employees work for someone else!"

All of these complaints condense into the statement "That's not fair!"

Managers and companies have, naturally, installed mechanisms to prevent such abuse of remote work. They monitor employees via webcam, or -- in a particularly Orwellian turn -- use AI to monitor employees via webcam.

It strikes me that these measures assume that they can obtain good results (employees working) by eliminating bad behaviors (employees walking the dog or making a cup of coffee). They subtract the bad and assume that what remains is good. Theoretically, that adds up.

But do managers want merely an employee sitting at a computer for an eight-hour shift? (And doing nothing else?) If so, I am willing to work for such companies. I am quite capable of sitting at a computer for hours. I can even appear to be busy, typing and clicking.

Managers will answer that they want more than that. Managers will (rightly) say that they need results, actual work from the employee that provides value to the company.

Can managers quantify these results? Can they provide hard numbers of what they expect employees to do? For some jobs, yes. (Call centers rate employees by number of calls taken and resolved, for example.) Other jobs are less easy to measure. (Programming, for example. Does one measure lines of code? Number of stories? Number of stories weighted by size? Does quality of the code count? Number of defects found during testing?)
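To make the measurement problem concrete, here is a small sketch of one such metric: story points weighted by size, with a penalty for defects found in testing. Every name and weight in it is hypothetical -- a real team would have to calibrate (and argue over) its own:

```python
# A sketch of one possible metric for programming work: story points,
# adjusted by defects found during testing. The weights are hypothetical;
# the point is that any such formula embeds debatable judgment calls.

def productivity_score(stories, defects_found, defect_penalty=0.5):
    """Sum story sizes, then subtract a penalty per defect found."""
    points = sum(size for _, size in stories)
    return points - defect_penalty * defects_found

stories = [("login page", 3), ("report export", 5), ("small bug fix", 1)]
score = productivity_score(stories, defects_found=2)
print(score)  # 9 points minus a 1.0 penalty = 8.0
```

Even this toy version raises the hard questions: who sizes the stories, does a defect in testing count the same as one in production, and does the formula reward quantity over quality?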

It's easy to take the approach of "remove the bad behaviors and hope for the best". One can purchase tools to monitor employees. (It's probably easy to justify such purchases to budget committees.)

But perhaps some of the effort and expense of monitoring bad behavior could be redirected to measuring good results. Business is, after all, about results. If Programmer A produces the same results as Programmer B, but in 75 percent of the time, why not let Programmer A have some time to make coffee?

Another thought:

Lots of employees in the tech world are paid as salaried employees. Compensation by salary (as opposed to hourly wages) implies that the employee has some discretion over their work.

If employers push the monitoring tools, employees may decide that their compensation should be hourly, instead of salary. Companies won't like that arrangement, as they have been using salaried compensation to extract unpaid overtime from workers. But workers have some say in the issue. If employers focus on hours worked, then employees will focus on hours worked.

Monday, August 30, 2021

Apple employees and iCloud

Recent stories about Apple employees have exposed some of the conditions that Apple requires for employment. One of those conditions is to link one's personal iCloud account to the Apple-assigned employee iCloud account.

Why do this? It seems an odd request, to merge one's personal iCloud account with the corporate iCloud account. Apple has not explained the reason for this policy (I did not ask Apple for comment) and Apple employees have not stated a reason either.

I have an idea. It's not that Apple is being evil. Instead, Apple is trying to keep secrets, secret.

Apple, more than most companies, is concerned with corporate secrets. It carefully guards its designs for new products. It limits communication between employees to only those groups that are part of the work assignment.

Apple also requires employees to agree to the "Apple can search you or your equipment whenever on Apple property" clause in its hiring contracts.

Apple does not want secrets to leak out, and it takes steps to keep secrets, well, secret.

I expect that Apple requires employees to use the corporate e-mail system and not personal e-mail accounts such as Yahoo, GMail, or Microsoft Outlook. Apple can scan messages in its internal e-mail system and identify leaked information, but it cannot scan external services. It probably blocks those web sites on its internal network.

Apple probably also blocks social media services such as Facebook and Twitter. When you're at work for Apple, you're at work for Apple, and not allowed to use such sites. Information could be shared via those sites, something Apple wants to prevent.

File sharing sites such as Dropbox, Box, and Microsoft OneDrive are other services that could allow for the sharing of information. Apple probably blocks those sites, too.

Apple may go as far as preventing people from attaching USB drives to their work computers.

So Apple has built an electronic fence around employees, to keep corporate data inside and not allow designs for new products to escape. That all makes sense.

The one problem in this scheme is iCloud.

Apple cannot block iCloud. Its employees use it to share data and collaborate on projects. Blocking iCloud would isolate employees and limit the sharing of information to e-mail, which is far less effective for sharing data. iCloud can let multiple people work on the same file, at the same time, and always with the latest version of the file. Forcing people to share data via e-mail would eliminate real-time collaboration and allow for older versions of files -- and the confusion that would result.

So Apple must use iCloud. But iCloud allows people to switch from one iCloud account to another, and employees could switch from the corporate iCloud account to their personal account, send files out (just what Apple does not want), and then switch back to the corporate iCloud account. This is a "hole" in the security "fence".

The answer to this last problem is simple: require employees to link their personal iCloud to the corporate iCloud account. I imagine that their personal account, once linked, becomes part of the monitoring process for the corporate iCloud account, and closes the "hole" in the "fence".

This is why (I think) Apple requires employees to link their personal iCloud account to their corporate iCloud account. Linking accounts is a hack to close a hole in Apple's security.


Friday, August 27, 2021

Replacing a language

Do programming languages ever get replaced? Languages are introduced, and some rise and fall in popularity (and some rise after a fall), but very few ever disappear completely.

There are some languages that have disappeared. Two are FLOW-MATIC and NELIAC, languages from the 1950s.

Some languages have fallen to very low levels of usage. These include BASIC, PL/I, PL/M, and DIBOL.

Some languages never die (or at least haven't died yet). The go-to examples are COBOL and FORTRAN. Organizations continue to use them. Fortran is used for some parts of the R implementation, so anyone using R is in a sense using Fortran. We can include SQL in this list of "immortal" programming languages.

Some languages are replaced with new versions: C# version 6 replaced a very similar C# version 5, for example. VB.NET replaced VB6. And while a few people still use the old Visual Basic 6, they are a very small number.

The replacement of a language isn't so much about the language as it is about popularity. Several groups measure the popularity of languages: Tiobe, Stack Overflow, the Popularity of Programming Languages (PYPL)... They all rank the languages and show pretty tables and charts.

The mechanics of popularity are such that the "winners" (the most popular languages) are shown, and the "losers" (the less popular languages) are omitted from the table, or shown in a small note at the bottom. (Not unlike the social scene in US high schools.)

If we use this method, then any language that enters the "top 10" or "top 40" (or "top whatever") chart is popular, and any language that is not in the list is not popular. A language that never enters the "top" list is never really replaced, because it never "made it".

A language that does enter the "top" list and then falls out of it has been replaced. That language may still be used, perhaps by thousands, yet it is now considered a "loser".

For this method (referring to the Tiobe index), languages that have been replaced include: Objective-C, COBOL, Prolog, and Ada. They were popular, once. But now they are not as popular as other languages. (I can almost hear the languages protesting a la Norma Desmond: "We are big! It's the programs that got small!")

Sometimes we can state that one specific language replaced another. VB.NET replaced VB6, because Microsoft engineered and advertised VB.NET to replace VB6. The same holds for Apple and Swift replacing Objective-C. But can we identify a specific language that replaced COBOL? Or a single language that replaced FORTRAN? Did C++ replace C? Is Rust replacing C++?

We can certainly say that, for a specific project, a new language replaces an older one. Perhaps we can say the same for an organization, if they embark on a project to re-write their code in a new language. I'm not sure that we can make the assertion for the entire IT industry. IT is too large, with too many companies and too many individuals, to verify such a claim in absolute terms. All we can rely on is popularity.

But popularity is not the measure of a language. It doesn't measure the language's capabilities, or its reliability, or its ability to run on multiple platforms.

We don't care about popularity for the technical aspects of the language. We care about the popularity of a language for us, for ourselves.

Product managers care about the popularity of a language because of staffing. A popular language will have lots of people who know it, and therefore the manager will have a (relatively) easy time of finding candidates to hire. An obscure language will have few candidates, and they may demand a higher wage.

Individuals care about the popularity of a language because it means that there will be lots of companies hiring for that skill. Few companies will hire for an obscure programming language.

This essay has meandered from replacing a language to popularity of languages to the concerns of hiring managers and candidates. That's probably not a coincidence. Economic activity drives a lot of behavior; I see no reason that programming should be exempt. When thinking about a programming language, think about the economics, because that will contribute a lot to the language's life.

Tuesday, August 24, 2021

After the PC

Is the M1 Mac a PC? That is, is the new MacBook a personal computer?

To answer that question, let's take a brief trip through the history of computing devices.

Mainframes were the first commercially viable electronic computing devices. They were large and consisted of a processing unit and a few peripherals. The CPU was the most important feature.

Minicomputers followed mainframes, and had an important difference: They had terminals as peripherals. The most important feature was not the CPU but the number of terminals.

PCs were different from minicomputers in that they had integrated video and an integrated keyboard. They did not have terminals. In this sense, M1 Macs are PCs.

Yet in another sense, M1 Macs are something different: They cannot be modified.

IBM PCs were designed for modification. The IBM PC had slots for accessory cards. The information for those slots was available. Other manufacturers could design and sell accessory cards. Consumers could open the computer and add hardware. They could pick their operating system (IBM offered DOS, CP/M-86, and UCSD p-System).

The openness of the IBM PC was somewhat unusual, though not unique: Apple II (Apple ][) computers also had slots and a lift-off lid. The Macintosh, in contrast, was not openable (you needed a special tool), and other computers of the era were typically built in a way that discouraged modifications.

The IBM PC set the standard for personal computers, and that standard was "able to be opened and modified".

The M1 Macs are systems on a chip. The hardware cannot be modified. The CPU, memory, disk (well, let's call it "storage", to use the old mainframe-era term) are all fixed. Nothing can be replaced or upgraded.

In this sense, the M1 Mac is not a PC. (Of course, if it is not a PC, then what do we call it? We need a name. "System-on-a-chip" is too long, "SoC" sounds like "sock" and that won't do, either.)

I suspect that the folks at Apple will be happy to refer to their products with a term other than "PC". Apple fans, too. But I don't have a suitable, generic, term. Apple folks might suggest the term "Mac", as in "I have a Mac in my backpack", but the term "Mac" is not generic. (Unless Apple is willing to let the term be generic. If so, when I carry my SoC Chromebook, I can still say "I have a Mac in my backpack." I doubt Apple would be happy with that.)

Perhaps the closest thing to the new Apple M1 Macs is something so old that it predates the IBM PC: The electronic calculator.

Available in the mid-1970s, electronic calculators were quite popular. Small and yet capable of numeric computations, they were useful for any number of people. Like the Apple M1 Mac, they were designed to remain unopened by the user (except perhaps to replace batteries) and they were not modifiable.

So perhaps the Apple M1 MacBooks are descendants of those disco-era calculators.

* * * * *

I am somewhat saddened by the idea that personal computers have evolved into calculators, that PCs are not modifiable. I gained a lot of experience with computers by modifying them: adding memory, changing disk drives, or installing new operating systems.

A parallel change occurred in the automobile industry. In the 1950s, lots of people bought cars and tinkered with them. They replaced tires and shock absorbers, adjusted carburetors, installed superchargers and turbochargers, and replaced exhaust pipes. But over time, automobiles became more complex and more computerized, and now very few people get involved with their cars. (There are some enthusiastic car-hackers, but they are few in number.)

We lost something with that change. We lost the camaraderie of learning together, of car clubs and amateur competitions.

We lost the same thing with the change in PCs, from open, modifiable systems to closed, optimized boxes.

Wednesday, August 18, 2021

Apple's trouble with CSAM is from its approach to new features

Apple's method, its modus operandi, is to design a product or process and present it to its customers. There is no discussion (at least none outside of Apple), there is no gathering of opinion, there is no debate. Apple decides, implements, and sells. Apple dictates, customers follow.

We can see this behavior in Apple's hardware.

Apple removed the floppy disk from the Macintosh. Apple removed the CD drive from the iMac. Apple removed the audio port on the iPhone.

We can also see this behavior in Apple's software.

Apple designs the UIs for iOS and iPadOS and has very definite ideas about how things ought to be.

This method has served Apple well. Apple designs products, and customers buy them.

Yet this method failed with CSAM.

Apparently, some customers don't like being dictated to.

One aspect of the CSAM change is that it is a change to an existing service. If Apple changes the design of the iPhone (removing the audio port, for example), customers still have their old audio-port-equipped iPhone. They can choose to replace that iPhone with a later model -- or not.

If Apple had introduced a new file storage service, and made CSAM scanning part of that service, customers could choose to use that new service -- or not.

But Apple made a change to an existing service (scanning photos in iCloud), and to existing iPhones (if Apple scans the photos on the phone). That presents a different calculation to customers. Instead of choosing to use a new service or not, customers must now choose to stop using a service, or continue to use the modified service. Changing from one cloud storage service to another is a larger task. (And possibly not possible with iPhones and iPads.)

Customers don't have the option of "buying in" to the new service. The changes for CSAM are not something that one can put off for a few months. One cannot look at the experience of other customers and then decide to participate.

No, this change is Apple flexing its muscles, altering the deal that was already established. ("Pray that I do not alter it further," quipped Darth Vader in a movie from a long time ago.)

I think that the negative reaction to Apple's CSAM strategy is not so much the scanning of photos, but the altering of the existing service. I think that people now realize that Apple can, and will, alter services. Without the consent, or even the advice, of the customers. I think that is what is bothering people.

Sunday, August 15, 2021

COBOL and Elixir

Someone has created a project to transpile (their word) COBOL to Elixir. I have some thoughts on this idea. But first, let's look at the example they provide.

A sample COBOL program:

      >>SOURCE FORMAT FREE
IDENTIFICATION DIVISION.
PROGRAM-ID. Test1.
AUTHOR. Mike Binns.
DATE-WRITTEN. July 25th 2021
DATA DIVISION.
WORKING-STORAGE SECTION.
01 Name     PIC X(4) VALUE "Mike".
PROCEDURE DIVISION.

DISPLAY "Hello " Name

STOP RUN.

This is "hello, world" in COBOL. Note that it is quite longer than equivalent programs in most languages. Also note that while long, it is still readable. Even a person who does not know COBOL can make some sense of it.

Now let's look at the same program, transpiled to Elixir:

defmodule ElixirFromCobol.Test1 do
  @moduledoc """
  author: Mike Binns
  date written: July 25th 2021
  """

  def main do
    try do
      do_main()
    catch
      :stop_run -> :stop_run
    end
  end 

  def do_main do
    # pic: XXXX
    var_Name = "Mike"
    pics = %{"Name" => {:str, "XXXX", 4}}
    IO.puts "Hello " <> var_Name
    throw :stop_run
  end
end

That is ... a lot of code. More than the code for the COBOL version! Some of that is due to the exception handling for "stop run", which in this small example seems excessive. Why wrap the core function inside a main() that simply exists to trap the exception? (There is a reason. More on that later.)

I'm unsure of the reason for this project. If it is a side project made on a whim, and used for the entertainment (or education) of the author, then it makes sense.

But I cannot see this as a serious project, for a couple of reasons.

First, the produced Elixir code is longer, and in my opinion less readable, than the original COBOL code. I may be biased here, as I am somewhat familiar with COBOL and not at all familiar with Elixir, so I can look at COBOL code and say "of course it does that" but when I look at Elixir code I can only guess and think "well, maybe it does that". Elixir seems to follow the syntax for modern scripting languages such as Python and Ruby, with some unusual operators.

Second, the generated Elixir code provides some constructs which are not used. This is, perhaps, an artifact of generated code. Code generators are good, up to a point. They tend to be non-thinking; they read input, apply some rules, and produce output. They don't see the bigger picture. In the example, the transpiler has produced code that contains the variable "pics", which holds information about the COBOL program's PICTURE clauses, but this "pics" variable is never used.

The "pics" variable hints at a larger problem, which is that the transpiled code is not running the equivalent program but is instead interpreting data to achieve the same output. The Elixir program is, in fact, a tuned interpreter for a specific COBOL program. As an interpreter, its performance will be less than that of a compiled program. Where COBOL can compile code to handle the PICTURE clauses, the Elixir code must look up the PICTURE clause at runtime, decode it, and then take action.

My final concern is the most significant. The Elixir programming language is not a good match for the COBOL language. Theoretically, any program written in a Turing-complete language can be re-written in a different Turing-complete language. That's a nice theory, but in practice converting from one language to another can be easy or can be difficult. Modern languages like Elixir have object-oriented and structured programming constructs. COBOL predates those constructs and has procedural code and global variables.

We can see the impedance mismatch in the Elixir code to catch the "stop run" exception. A COBOL program may contain "STOP RUN" anywhere in the code. The Elixir transpiler project has to build extra code to duplicate this capability. I'm not sure how the transpiler will handle global variables, but it will probably be a method that is equally tortured. Converting code from a non-structured language to a structured programming language is difficult at best and results in odd-looking code.
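The wrap-and-catch shape is not unique to Elixir; the same trick would be needed in most structured languages. A minimal Python analogue of the transpiler's main/do_main pattern (the names are borrowed from the Elixir example, not from any real transpiler output) might look like:

```python
# A COBOL "STOP RUN" can appear anywhere, even deep inside nested
# paragraphs. One way to model that in a structured language is an
# exception caught at the top level -- the same shape as the Elixir
# transpiler's main/do_main pair and its throw :stop_run.

class StopRun(Exception):
    """Signals COBOL's STOP RUN from any depth of the call stack."""

def do_main():
    name = "Mike"              # 01 Name PIC X(4) VALUE "Mike".
    print("Hello " + name)     # DISPLAY "Hello " Name
    raise StopRun()            # STOP RUN.

def main():
    try:
        do_main()
    except StopRun:
        pass                   # normal termination, not an error

main()
```

Note the oddity: normal termination is modeled as an exception, which is exactly the kind of tortured construct that translating between mismatched languages produces.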

My point here is not to insult or to shout down the transpiler project. It will probably be an educational experience, teaching the author about Elixir and probably more about COBOL.

My first point is that programs are designed to match the programming language. Programs written in object-oriented languages have object-oriented designs. Programs written in functional languages have functional designs. Programs written in non-structured languages have... non-structured designs. The designs from one type of programming language do not translate readily to a programming language of a different type.

My second point is that we assume that modern languages are better than older languages. We assume that object-oriented languages like C++, C#, and Java are better than (non-OO) structured languages like Pascal and Fortran-77. Some of us assume that functional languages are better than object-oriented languages.

I'm not so sure about those assumptions. I think that object-oriented languages are better at some tasks than mere structured languages, and older structured-only languages are better at other tasks. Object-oriented languages are useful for large systems; they let us organize code into classes and functions, and even larger constructs through inheritance and templates. Dynamic languages like Python and Ruby are good for some tasks but not others.

And I must conclude that even older, non-functional, non-dynamic, non-object-oriented, non-structured programming languages are good for some tasks.

One analogy for programming languages is the carpenter's toolbox: full of various tools for different purposes. COBOL, one of the oldest languages, might be considered the hammer, one of the oldest tools. Hammers do not have the abilities of saws, drills, tape measures, or levels, but carpenters still use them, when the task is appropriate for a hammer.

Perhaps we can learn a thing or two from carpenters.