Sunday, November 21, 2021

CPU power and developer productivity

Some companies have issued M1-based MacBooks to developers. Their reason? To improve productivity.

Replacing "regular" PCs with M1-based MacBooks is nice, and it certainly gives developers more CPU power, but does it really increase their productivity?

In a very simple way, yes, it does. The more powerful MacBooks will let developers compile faster, build deployment packages faster, and run tests faster.

But the greatest leaps in developer productivity have not come from performing the same steps faster.

Increases in developer productivity come not from raw CPU power, or more memory, or faster network connections, or higher-resolution displays. Meaningful increases come from better tools, which in turn are built on CPU power, memory, network connections, and displays.

The tools that have helped developers become productive are many. They include programming languages, compilers and interpreters, editors, debuggers, version control systems, automated test systems, and communication systems (such as e-mail or chat or streamed messages like Slack).

We build better tools when we have the computing power to support them. In the early days of computing, prior to the invention of the IDE (integrated development environment), the steps of editing and compiling were distinct and handled by separate programs.

Early editors could not hold the entire text of a long program in memory at once, so they had special commands to "page" through the file. (One opened the editor and started with the first "page" of text, made changes and got things right, and then moved to the next "page". It sounds like the common "page down" operation in modern editors, except that a "page" in the old editors was longer than the screen, and -- note this -- there was no "page up" operation. Paging was a one-way street. If you wanted to go back, you had to page to the end of the file, close the file, and then run the editor again.)

Increased memory ended the need for "page" operations.

The first integrated environment may have been the UCSD p-System, which offered editing, compiling, and running of programs. The availability of 64K of memory made this system possible. Unfortunately, the CPUs of the time could not run its virtual processor (much like the later JVM for Java) at an acceptable speed, and the p-System never achieved popularity.

The IBM PC, with a faster processor and more memory, made the IDE practical. In addition to editing and compiling, there were add-ons for debugging.

The 80386 processor, along with network cards and cables, made Windows practical, which made network applications possible. That platform allowed for e-mail (at least within an organization) and that gave a boost to lots of employees, developers included. Networks and shared file servers also allowed for repositories of source code and version control systems. Version control systems were (and still are) an immense aid to programming.

Increases in computing power (CPU, memory, network, or whatever) let us build tools to be more effective as developers. An increase of raw power, by itself, is nice, but the payoff is in the better tools.

What will happen as a result of Apple's success with its M1 systems?

First, people will adopt the new, faster M1-based computers. This is already happening.

Second, competitors will adopt the system-on-a-chip design. I'm confident that Microsoft is already working on a design (probably multiple designs) for its Surface computers, and as reference designs for other manufacturers. Google is probably working on designs for Chromebooks.

Third, once the system-on-a-chip design has been accepted in the Windows environment, people will develop new tools to assist programmers. (It won't be Apple, and I think it won't be Google either.)

Whatever the source, what kinds of tools can we expect? That's an interesting question, and the answer is not obvious. But let us consider a few things:

First, the increase in power is in CPU and GPU capacity. We're not seeing an increase in memory or storage, or in network capacity. We can assume that innovations will be built on processing, not communication.

Second, innovations will probably help developers in their day-to-day jobs. Developers perform many tasks. Which are the tasks that need help? (The answer is probably not faster compiles.)

I have a few ideas for tools that might help programmers.

One idea is to analyze the run-time performance of programs and identify "hot spots" -- areas in the code that take a lot of time. We already have tools to do this, but they are difficult to use and they run the program in a slow mode, such that a simple execution can take minutes instead of seconds. (A more complex task can run for an hour under "analyze mode".) A faster CPU can help with this analysis.
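As a tiny illustration, here is a sketch in Python using the standard library's cProfile module; the function names are invented for the example. The profiler runs the code in an instrumented, slower mode, and then reports where the time went:

    import cProfile
    import pstats

    def slow_square(n):
        # Deliberately wasteful, to give the profiler something to find.
        return sum(n for _ in range(n))

    def work():
        return [slow_square(i) for i in range(2000)]

    profiler = cProfile.Profile()
    profiler.enable()
    work()
    profiler.disable()

    # Show the ten entries with the most cumulative time -- the hot spots.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)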

Another idea is for debugging. The typical debugger allows a programmer to step through the code one line at a time, and to set breakpoints and run quickly to those points in the code. What most debuggers don't allow is the ability to go backwards. Often, a programmer stepping through the code gets to a point that is known to be a problem, and needs to identify the steps that got the program to that point. The ability to "run in reverse" would let a programmer "back up" to an earlier point, where he (or she) could see the decisions and the data that led to the problem point. Computationally, this is a difficult task, but we're looking at a significant increase in computational power, so why not?
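As an illustration of the idea (and not of how any real debugger is built), here is a toy sketch in Python: record a snapshot of the local variables at every line, so that the history can be walked backwards after the fact. Real reverse debuggers, such as gdb's record-and-replay mode, work at the machine level, but the trade-off is the same: spend CPU and memory to buy a navigable history.

    import sys

    history = []  # list of (line number, snapshot of local variables)

    def tracer(frame, event, arg):
        # Record the state of the traced function at every executed line.
        if event == "line":
            history.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    def buggy(values):
        total = 0
        for v in values:
            total += v  # suppose a bad value sneaks in here
        return total

    sys.settrace(tracer)
    buggy([1, 2, -999, 4])
    sys.settrace(None)

    # "Run in reverse": walk the recorded history backwards to find the
    # step where the state first went wrong.
    for lineno, snapshot in reversed(history):
        print(lineno, snapshot)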

A third possibility is the analysis of source code. Modern IDEs perform some analysis, often marking code that is syntactically incorrect. With additional CPU power, could we identify code that is open to other kinds of errors? We already have tools to identify SQL injection vulnerabilities, memory errors, and other poor practices. These tools could be added to the IDE as a standard feature.
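Here is a minimal sketch of that kind of analysis, using Python's ast module to flag calls to a method named "execute" whose argument is assembled from string pieces -- a common source of SQL injection. (The sample source and the check itself are invented for the example; real checkers are far more thorough.) The point is that the analysis is ordinary tree-walking, which extra CPU power would let an IDE run continuously:

    import ast

    SAMPLE = '''
    def find_user(cursor, name):
        cursor.execute("SELECT * FROM users WHERE name = '" + name + "'")
    '''

    class SqlInjectionCheck(ast.NodeVisitor):
        def visit_Call(self, node):
            is_execute = (isinstance(node.func, ast.Attribute)
                          and node.func.attr == "execute")
            # A query built by concatenation or an f-string is suspect.
            if is_execute and node.args and isinstance(
                    node.args[0], (ast.BinOp, ast.JoinedStr)):
                print(f"line {node.lineno}: query built from string pieces "
                      "-- possible SQL injection")
            self.generic_visit(node)

    SqlInjectionCheck().visit(ast.parse(SAMPLE))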

In the longer term, we may see new programming languages emerge. Just as Java (and later, C#) took advantage of faster CPUs to execute byte-code (the UCSD p-System was a good idea, merely too early), new programming languages may do more with code. Perhaps we will see a shift to interpreted languages (Python and Ruby, or their successors). Or maybe we will see a combination of compiled and interpreted code, with some code compiled and other code interpreted at run-time.
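Python, in fact, already works this way: source code is compiled to byte-code once, and the byte-code is then interpreted at run-time, much as the UCSD p-System and the JVM do. The standard library's dis module makes the split visible:

    import dis

    def area(r):
        return 3.14159 * r * r

    # Show the compiled byte-code that the interpreter executes at run-time.
    dis.dis(area)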

More powerful computers let us do things faster, but more importantly, they let us do things differently. They expand the world of computation. Let's see how we use the new world given to us with system-on-a-chip designs.
