Monday, July 14, 2025

AI and programmer productivity

In the sixty years of IT, we have seen a number of productivity tools and techniques. Now we're looking at "artificial intelligence" as a way to improve the productivity of programmers. Google, Microsoft, and others are pushing their AI tools upon developers.

Will AI really work? Will it improve programmer productivity? This is not the industry's first attempt to make programmers more productive. Let's look at the history of programming and some of the ways we have improved (or attempted to improve) productivity.

Start with hardware: The first electronic computers were "programmed" by wiring. That is, the hardware was built to perform specific calculations. When you wanted a different calculation, you had to rewire the computer. This was the first form of programming. It's not really an improvement, but we have to start somewhere. Why not at the beginning?

Plug boards: The first productivity improvement was the "plug board". It was a physical board that plugged into the computer and held the wiring for a specific problem. To change a computer's program, one could easily remove one plug board and install a different one. The computer became a general calculation device and the plug boards held the specific calculations - one could call them "programs".

Programs in memory: The next advance was changing the program from wiring (or plug boards) into values stored in the computer. A program consisting of numeric codes could be loaded into memory and then executed. No more plug boards! Programs could be loaded via switches on the computer's front panel, or from prepared paper tapes or punch cards.

Assemblers: But creating the long lists of numbers was tedious. Each operation required its own numeric code, such as 1 to add a number and 5 to store a number to memory. Programmers had to first decide on the sequence of operations and then convert those operations to numeric values. To help programmers (to improve productivity) we invented the assembler. The assembler was a program that converted text op-codes into the numeric values that are executed by the computer. (The assembler was also a program that created another program!) Each computer model had its own set of numeric codes and its own assembler.

The first assemblers converted text operation codes to the proper numeric values. But programs are more than just operation codes. Many operations need additional information, such as a numeric constant (for the operation "add 1 to the accumulator") or a memory address (for the operation "store the value in the accumulator to memory location 1008"). It made sense to use names instead of the raw values, so we could write "store accumulator into location named 'total' " as STA TOTAL instead of STA 1008.

Symbols provided two benefits. First, referring to a memory location as "TOTAL" instead of its numeric address made the programs more readable. Second, as the program was revised, the location of TOTAL changed (it was 1008 in the first version, then 1010 in the second version because we needed some memory for other values, and 1015 in a third version). As the real address of TOTAL moved, the symbolic assembler kept up with the changes and the programmer didn't have to worry about them.
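
To make that translation concrete, here is a minimal sketch, in C, of what a symbolic assembler does with a three-line program. The machine, its mnemonics and numeric op-codes, and the addresses are all invented for illustration; a real two-pass assembler would build the symbol table itself during its first pass over the source.

    /* A minimal sketch of the translation a symbolic assembler performs for
       an imaginary one-accumulator machine. The mnemonics, numeric op-codes,
       and addresses are invented; a real two-pass assembler would build the
       symbol table itself during its first pass. */
    #include <stdio.h>
    #include <string.h>

    struct op  { const char *mnemonic; int code; };
    struct sym { const char *name; int address; };

    static const struct op ops[] = { { "LDA", 2 }, { "ADD", 1 }, { "STA", 5 } };
    static const struct sym syms[] = { { "COUNT", 1007 }, { "TOTAL", 1008 } };

    static int lookup_op(const char *m) {
        for (int i = 0; i < 3; i++)
            if (strcmp(ops[i].mnemonic, m) == 0) return ops[i].code;
        return -1;
    }

    static int lookup_sym(const char *s) {
        for (int i = 0; i < 2; i++)
            if (strcmp(syms[i].name, s) == 0) return syms[i].address;
        return -1;
    }

    int main(void) {
        /* The source program: add COUNT to TOTAL. */
        const char *source[3][2] = {
            { "LDA", "TOTAL" },   /* load TOTAL into the accumulator  */
            { "ADD", "COUNT" },   /* add COUNT to the accumulator     */
            { "STA", "TOTAL" },   /* store the accumulator into TOTAL */
        };
        for (int i = 0; i < 3; i++)
            printf("%s %-6s -> %d %d\n", source[i][0], source[i][1],
                   lookup_op(source[i][0]), lookup_sym(source[i][1]));
        return 0;
    }

Run, it prints the numeric form of each line: STA TOTAL becomes 5 1008, and if TOTAL later moves to 1010, only the table changes, not the program.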

Each of those techniques improved productivity and eased the jobs of programmers. But we didn't stop there.

Programming languages: After assemblers, we invented the notion of programming "languages". There were many languages in those early days; Fortran and Cobol are two that we still use today, albeit with enhancements and changes to syntax. We called these "high level" languages to distinguish them from the "low level" assemblers.

We created compilers to convert programs written in high level languages into either machine code or into assembly code (which could then be converted to machine code by the assemblers). We still use high level languages today. We have Rust and Go and C++, which all follow the same process as the early compilers.

But after the invention of programming languages, things changed. The ideas and techniques for improving productivity focussed on the programmers and how they used the compilers, how they stored "source code" (the text programs), and how they interacted with the computer.

Structured programming: Structured programming was not a new language, or a new compiler or assembler, or even a program. It was a set of techniques for writing programs that could be better understood by the original author and by others. We had decided that programs were hard to read because the sequence of execution was hard to follow, and it was hard to follow because of the GOTO operation, which transferred control to another part of the program. Avoiding the GOTO became a goal of "good programming". GOTO statements were replaced with IF/THEN/ELSE, WHILE, and SWITCH/CASE statements. These constructs were built into Pascal (which constrained the GOTO) and into PL/I, C, and C++ (though those languages still allowed an unconstrained GOTO).
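
A small, invented example shows the difference. Both fragments below compute the same thing -- the sum of the positive numbers in a list -- first in the GOTO style that structured programming argued against, then with the structured WHILE and IF constructs that replaced it (shown here in C for brevity).

    /* Two ways to sum the positive numbers in a list: first with GOTO,
       then with the structured constructs that replaced it. */
    #include <stdio.h>

    int main(void) {
        int values[] = { 4, -2, 7, 0 };   /* 0 marks the end of the list */
        int sum, i;

        /* Unstructured version: control jumps around via goto. */
        sum = 0; i = 0;
    top:
        if (values[i] == 0) goto done;
        if (values[i] < 0) goto skip;
        sum += values[i];
    skip:
        i++;
        goto top;
    done:
        printf("goto version:       %d\n", sum);

        /* Structured version: the same logic with while and if. */
        sum = 0; i = 0;
        while (values[i] != 0) {
            if (values[i] > 0)
                sum += values[i];
            i++;
        }
        printf("structured version: %d\n", sum);
        return 0;
    }

Both loops produce the same answer; the second can be read top to bottom without tracing the jumps.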

The IDE (integrated development environment): Prior to the IDE, work on programs was divided between a text editor and the compiler. You would run the editor, make changes, save the file, and exit the text editor. Then you would run the compiler, get a list of errors, and note them. Then run the editor again and fix the errors, then run the compiler again. The development process consisted of alternately running the editor and the compiler. The IDE combined the editor and compiler into a single program, so you could edit and compile quickly. IDEs were popularized by Turbo Pascal in the 1980s, but they existed earlier in the UCSD p-System. One could even say that BASIC was the first IDE, as it let one edit a program, run the program, and diagnose errors without leaving BASIC.

Fourth generation languages: Higher and more abstract than the third generation languages (Cobol, Fortran). SQL is a fourth-generation language, probably the only one we still use today. The others were discarded due to poor performance and an inability to handle low-level operations.
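
To show the difference in abstraction, the sketch below answers the same (invented) question twice: once as a SQL statement, and once as the loop a third generation language requires. The table and column names are made up for illustration.

    /* The same question answered two ways. The fourth generation way states
       what is wanted:
           SELECT SUM(amount) FROM orders WHERE region = 'EAST';
       The third generation way spells out how to get it. */
    #include <stdio.h>
    #include <string.h>

    struct order { const char *region; double amount; };

    int main(void) {
        struct order orders[] = {
            { "EAST", 120.0 }, { "WEST", 75.0 }, { "EAST", 40.0 },
        };
        double total = 0.0;
        for (int i = 0; i < 3; i++)                      /* loop...       */
            if (strcmp(orders[i].region, "EAST") == 0)   /* ...test...    */
                total += orders[i].amount;               /* ...accumulate */
        printf("Total for EAST: %.2f\n", total);         /* prints 160.00 */
        return 0;
    }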

Program generators: If compilers take source code and convert it to assembly language (and some of them did), then we can apply the same trick: create a configuration file and feed it into a program generator, which generates a program in a third generation language (which can then be compiled as usual). Program generators failed not because they couldn't do the job, but because their very high level languages were very limited in their capabilities. As one colleague said, "program generators do what they do very well, but nothing beyond that".
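
A toy sketch of the idea: the generator below reads a tiny "configuration" (a report layout, invented here) and emits the source of an ordinary program, which would then be compiled as usual. Real generators of the era were far more elaborate, but the principle is the same.

    /* A toy program generator: a small "configuration" (a report layout,
       invented for illustration) is turned into the source of an ordinary
       C program, which would then be compiled as usual. */
    #include <stdio.h>

    struct field { const char *name; int width; };

    int main(void) {
        struct field layout[] = { { "customer", 20 }, { "balance", 10 } };

        /* Emit the generated program; redirect stdout to a file to compile it. */
        printf("#include <stdio.h>\n");
        printf("int main(void) {\n");
        for (int i = 0; i < 2; i++)
            printf("    printf(\"%%-%ds\", \"%s\");\n",
                   layout[i].width, layout[i].name);
        printf("    printf(\"\\n\");\n");
        printf("    return 0;\n");
        printf("}\n");
        return 0;
    }

The generated program does exactly what the configuration describes, and nothing beyond that.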

Source control systems: From the first source control system (SCCS) to today's modern tool (git), source control kept previous versions of code and allowed programmers to compare the current version to earlier versions of the code. It allowed programmers to commit changes into a central repository and easily share their updates with other members of the team.

UML (Unified Modelling Language): Similar to program generators, UML was a notation for specifying computation. (The 'U' for 'unified' was the result of combining multiple competing modelling notations.) UML wasn't a programming language -- it wasn't fed into a generator which created the program; instead, it was used by human programmers to create the programs in traditional programming languages. UML was more generic than the configuration files for program generators. But it was not adopted by the industry, for reasons of money, time, and politics.

Object-oriented programming: A way to organize source code for large systems. One might say that it was "structured programming but bigger". Object-oriented programming is one of the biggest successes in programming.

Function points: A way to measure, from the requirements, the effort to develop a program. Function points were a tool not for programmers but for project managers. They calculated estimates of effort based on easily identified aspects such as inputs, processing steps, and outputs. This was advertised as an improvement over the previous method of intuition or just plain guessing. Function points were unbiased and not overly optimistic, and the approach should have been welcomed by managers. Yet managers eschewed function points. There were challenges (tools were not available for all languages, or were expensive), but I believe that the real reason was that the estimates provided by the tools were often higher than managers wanted. Managers did not like the numbers from the function point reports, and reverted to the older technique of guessing (which could give a number that managers did like).
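
As a rough sketch of the arithmetic, the fragment below turns invented counts of inputs, outputs, inquiries, and files into an unadjusted function point total, using commonly quoted average weights. It is illustrative only, not a faithful reproduction of any particular counting standard.

    /* An illustrative unadjusted function point count: invented counts of
       identifiable items, multiplied by commonly quoted average weights. */
    #include <stdio.h>

    int main(void) {
        int inputs     = 12;   /* screens and files coming in          */
        int outputs    =  8;   /* reports and files going out          */
        int inquiries  =  5;   /* simple query/response pairs          */
        int files      =  4;   /* logical files the program maintains  */
        int interfaces =  2;   /* files shared with other applications */

        int ufp = inputs * 4 + outputs * 5 + inquiries * 4
                + files * 10 + interfaces * 7;

        printf("Unadjusted function points: %d\n", ufp);   /* prints 162 */
        return 0;
    }

The point is that the number comes from counting visible things, not from anyone's optimism.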

Looking back, we can see that we have tried various ideas for improving productivity. Many succeeded, some did not.

But from what I've seen, AI seems closest to the fourth generation languages and program generators of the past. It creates programs from a specification (an input prompt). Compared to the program generators of the 1970s and 1980s, today's AI tools are much more sophisticated and can generate many more types of programs. Yet they are still limited by the data used to train them, and AI can go only so far in creating programs. I expect that we will quickly find those limits and become disappointed with AI.

I suspect that AI has a place in programming, probably with junior developers, as an aid for developing simple programs. I have yet to be convinced that AI will handle the creation of large-scale, complex systems -- at least not today's version of AI. Future versions of AI may be able to generate large, complex applications; I will wait and see.

Tuesday, July 1, 2025

A lesson from the 1960s

The recent push for AI (artificial intelligence) is forcing us to learn a lesson -- or rather, re-learn a lesson that we learned back in the early days of computing.

In the 1960s, when computers were the shiny new thing, we adored them. They were superior at computation, able to calculate much faster and more accurately than any human. Computers became the subject of books, movies, magazine articles, and even television programs. They were depicted as large, efficient, and always correct (or so we thought).

We trusted computers. That was the first phase of our relationship.

Yet after some time, we learned that computers were not infallible. They could "make mistakes". Many problems within organizations were blamed on "computer error". It was a convenient excuse, and one that was not easily challenged. They became scapegoats. That was the second phase of our relationship.

Given more time, we realized that computers were tools, and like any tools they could be used or misused; they were good at some tasks and not at others. We also learned that the quality of a computer's output depended on two things: the quality of the program and the quality of the data. Both had to be correct for the results to be correct. Relatively few people worked on the programs; more people worked on the data being fed into computers.

This was the third phase of our relationship with computers. We recognized that their output was based on the input. We began to check our input data. We began to select sources for our data based on the quality of the data. We even invented a saying: "Garbage in yields garbage out".

That was the 1960s.

Fast-forward to the 2020s. Look carefully at our relationship with AI and see how it matches that first phase of the 1960s relationship with computers. AI is the shiny new thing. We adore it. We trust it.

We don't recognize that it is a tool, and like any tool it is good at some things and not others. We don't recognize that the quality of its output depends on the quality of its input.

We build large language models and train them on any data that we can find. We don't curate the data. We don't ensure that it is correct.

The rule from the 1960s still holds. Garbage in yields garbage out. We have to re-learn that rule.


Tuesday, April 8, 2025

Memory-safe programming needs a simple language

Rust is the pre-eminent language for memory-safe programming. It also struggles to gain acceptance.

But it's not Rust alone that struggles; it is memory-safe techniques in general that struggle to gain acceptance. If we work from the assumption that memory-safe programming is a good thing (it is), then we face the question of how to ease the acceptance of memory-safe techniques (and memory-safe programming languages).
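
To make "memory-safe" concrete: the deliberately minimal C program below compiles without complaint, yet reads memory after it has been freed -- exactly the class of defect that memory-safe languages rule out.

    /* A use-after-free: C accepts this and the program may even appear to
       work, but the read is undefined behavior. Illustrative sketch only. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);    /* allocate one int on the heap          */
        if (p == NULL) return 1;
        *p = 42;
        free(p);                       /* the memory goes back to the allocator */
        printf("%d\n", *p);            /* use-after-free: undefined behavior    */
        return 0;
    }

In a memory-safe language such as Rust, the equivalent code is rejected at compile time rather than being allowed to run.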

Perhaps we could learn from the history of earlier paradigm shifts in programming languages. The adoption of memory-safe programming techniques strikes me as similar to the adoption of structured programming techniques, and the adoption of object-oriented programming. Structured programming was first implemented in PL/I by IBM, and saw only some acceptance in the IBM community. Object-oriented programming was implemented in Simula (in the 1960s, about the same time as PL/I) but became popular with the Java programming language in the 1990s.

There were several reasons not to use PL/I. It was a large and complex language, and difficult to learn. (This was before the days of the internet and web pages, so there were no online tutorials.) It was expensive: you had to buy the compiler and pay for annual support. (And the executables it produced tended to be large and slow.) The reluctance to use PL/I meant a delay for structured programming.

Structured programming eventually did gain acceptance, though not because of PL/I. It was a different language that people used to learn the techniques of structured programming. That language was Pascal.

Pascal was (and in its original form, still is) a simple language. It was (is) a teaching language, specifically for structured programming. It was developed in an academic environment, not a commercial one. It was readily available for the personal computers of the day, in different implementations. Books were readily available. It was adopted by schools (colleges mostly) as an elective and later a required class.

Pascal in its original form was not suitable for commercial applications. Variants of Pascal were developed that did have features to support large applications; Turbo Pascal and Delphi (an object-oriented variant) were the most well-known. Other languages were developed to handle large, commercial applications and eventually surpassed Pascal in popularity. Yet Pascal was a success because it made people aware and comfortable with structured programming.

Structured programming succeeded because Pascal, a small and limited language, was readily available and easy to learn.

Object-oriented programming followed a similar path. The early object-oriented languages were difficult to learn, and not always suitable for large-scale applications. Simula, C++, and Objective-C showed that object-oriented programming was possible, given a large effort over a (relatively) long time.

Object-oriented programming did succeed, and like structured programming, its success was due to a smaller and simpler programming language: Java.

Now let's come back to memory-safe programming.

We can think of Rust as the PL/I of memory-safe programming. Rust demonstrates that it can be done, but it is a large language and learning Rust requires a lot of time and effort.

To follow the paths of structured programming and object-oriented programming, memory-safe programming needs a small, easy-to-learn language. I don't see such a language. Right now, it is either Rust or nothing, and that delays the acceptance of memory-safe programming.

I'm not suggesting that we abandon Rust in favor of a small (and impractical for business) programming language. I'm suggesting that we use a small and simple programming language to teach the concepts of memory-safe programming, and then let people move on to Rust. It is the path that structured programming followed. It is the path that object-oriented programming followed. It seems to work.


Wednesday, January 8, 2025

The missing conversation about AI

For Artificial Intelligence (AI) -- or at least the latest fad that we call "AI" -- I've seen lots of announcements, lots of articles, lots of discussions, and lots of advertisements. All of them -- and I do mean "all" -- fall into the category of "hype". I have yet to see a serious discussion or article on AI.

Here's why:

In business -- and in almost every organization -- there are four dimensions for serious discussions. Those dimensions are: money, time, risk, and politics. (Politics internal to the organization, or possibly with external suppliers or customers; not national-level politics.)

Businesses don't care if an application is written in Java or C# or Rust. They *do* care that the application is delivered on time, that the development cost was reasonably close to the estimated cost, and that the application runs as expected with no ill effects. Conversations about C++ and Rust are not about the languages but about the risks of applications written in those languages. Converting from C++ to Rust is about the cost of conversion, the time it takes, opportunities lost during the conversion, and reduction of risk due to memory leaks, invalid access, and other exploits. The serious discussion ignores the issues of syntax and IDE support (unless one can tie them to money, time, or risk).

With AI, I have not seen a serious discussion about money, for either the cost to implement AI or the reduction in expenditures, other than speculation. I have not seen anyone list the time it took to implement AI with any degree of success. I have yet to see any articles or discussions about the risks of AI and how AI can provide incorrect information that seems, at first glance, quite reasonable.

These are the conversations about AI that we need to have. Without them, AI is merely a shiny new thing that has no clearly understood benefits and no place in our strategies or tactics. Without them, we do not understand the true costs to implement AI and how to decide when and where to implement it. Without them, we do not understand the risks and how to mitigate them.

The first rule of investment is: If you don't understand an investment instrument, then don't invest in it.

The first rule of business management is: If you don't understand a technology (how it can help you, what it costs, and its risks), then don't implement it. (Other than small, controlled research projects to learn about it.)

It seems to me that we don't understand AI, at least not well enough to use it for serious tasks.

Sunday, December 15, 2024

All the Windows 10 PCs

Microsoft's Windows 11 is not compatible with many of the existing PCs in the world. Microsoft gave no reasons for the incompatibility, but we can deduce that by requiring a certain level of hardware (processor and memory, mostly) Microsoft was able to implement certain features for security.

Regardless of the reason, a lot of PCs could not move to Windows 11, and therefore stayed on Windows 10. Soon, support for Windows 10 will stop, and those PCs will not get updates -- not even security updates. (Microsoft does offer extended support for a small fee.)

What's going to happen to all of those Windows 10 PCs? Microsoft recommends that you upgrade to Windows 11, and if that is not possible, replace (or recycle) your PC. Here's what I think will happen.

A large number of Windows 10 PCs (perhaps the majority) will stay on Windows 10. People will continue to use the PC, with Windows 10, to do their normal tasks. They won't get security updates, but they will be okay with that.

Some number of Windows 10 PCs will be replaced. I suspect that this number (as a percentage of Windows PCs) is small. The people who want Windows 11 already have it. A few people may be waiting for the "right time" to upgrade to Windows 11, so they will replace their PCs.

Some number of Windows 10 PCs will be converted to Linux. This may be a smaller percentage than either of the "stay on Windows 10" or "replace" crowds.

I should point out that many PCs that are replaced are then sold to people who resell them. Some resellers will physically destroy the PC, but others simply reformat the disk (or replace it) and resell the PC, either with Windows or with Linux. Thus, a PC that is "replaced" can continue its life as a Linux PC.

All in all, the decision by Microsoft to make some PCs obsolete (sort of) will lead to an increase in the number of PCs running Linux.

For me, this decision is personal. I have an old-ish HP laptop which runs Windows 10. It won't run Windows 11 -- even with Microsoft loosening the requirements for Windows 11. I have a decision: keep Windows 10, or switch to Linux. (I like the laptop and want to keep using it.)

Keeping Windows 10 is easy, but offers little benefit. I use Windows for few tasks (most of my work is in Linux) and there are only two items that require Windows: remote access to the office, and administering some old Apple Time Capsules and Airports.

My other option is to convert it to Linux. Conversion is also easy -- I've installed Linux on a number of other PCs. Once converted, I may need to use WINE to run the Apple Airport administration program. (Or I may simply replace the Apple Time Capsules and Airports with modern file servers and routers.) Access to the office isn't that important. The office supplies me with an official PC for access; my personal Windows PC is a back-up method when the official PC fails. (Which it has not done for as long as I can remember.)

So I think I will take up Microsoft's suggestion to get off of Windows 10. But it won't be to go to Windows 11. I have another PC running Windows 11; I don't need two.

Wednesday, September 25, 2024

Back to the Office

Amazon.com is the latest in a series of companies to insist that employees Return-To-Office (RTO).

Some claim that Amazon's motive (and by extension, that of any company requiring employees to work in the office) is really to reduce its workforce. The idea is that employees would rather leave the company than work in the office, and enforcing office-based work is a convenient way to get employees to leave. (Here, "convenient" means "without layoffs".)

I suspect that Amazon (and other companies) are not using RTO as a means to reduce their workforce. It may avoid severance payments and the publicity of layoffs, but it holds other risks. One risk is that the "wrong" number of employees may terminate their employment, either too many or too few. Another risk is that the "wrong" employees may leave; high performers may pursue other opportunities and poor performers may stay. It also selects employees based on compliance (those who stay are the ones who will follow orders) while the independent and confident individuals leave. That last effect is subtle, but I suspect that Amazon's management is savvy enough to understand it.

But while employers are smart enough not to use RTO as a workforce reduction technique, they are still insisting upon it. I'm not sure that they have fully thought through the reasons they use to justify RTO. Companies have pretty much uniformly claimed that an office-based workforce is more productive, despite studies which show the opposite. Even without studies, employees can often get a feel for productivity, and they can tell that RTO does not improve it. Therefore, by claiming RTO increases productivity, management loses credibility.

That loss of credibility may be minimal now, but it will hang over management for some time. And in some time, there may be another crisis, similar to the COVID-19 pandemic, that forces companies to close offices. (That crisis may be another wave of COVID, or it may be a different virus such as Avian flu or M-pox, or it may be some other form of crisis. It may be worldwide, nationwide, or regional. But I fear that it is coming.)

Should another crisis occur, one that forces companies to close offices and ask employees to work from home, how will employees react? My guess is that some employees will reduce their productivity. The thinking is: If working in the office improves productivity (and our managers insist that it does), then working at home must reduce productivity (and therefore I will deliver what the company insists must happen).

Corporate managers may get their wish (high productivity by working in the office) although not the way that they want. By explaining the need for RTO in terms of productivity, they have set themselves up for a future loss of productivity when they need employees to work from home (or other locations).

Tuesday, September 17, 2024

Apple hardware is outpacing Apple software

Something interesting about the new iPhone 16: the software isn't ready. Specifically, the AI ("Apple Intelligence") enhancements promised by Apple are still only that: promises.

Apple can develop hardware faster than it can develop software. That's a problem.

Apple has had this problem for a while. The M1 Mac computers first showed this problem. Apple delivered the computers, with their integrated system-on-chip design and more efficient processing, but delivered no software to take advantage of that processing.

It may be that Apple cares little for software. They sell computers -- hardware -- and not software. And it appears that Apple has "outsourced" the development of applications: it relies on third parties to build applications for Macs and iPhones. Oh, Apple delivers some core applications, such as utilities to configure the device, the App Store to install apps, and low-powered applications such as Pages and Numbers. But there is little beyond that.

Apple's software development focusses on the operating system and features for the device: MacOS and Pages for the Mac, iPadOS and Stage Manager for the iPad, iOS and Facetime and Maps for the iPhone. Apple builds no database systems, provides lukewarm support and enhancements for the Xcode IDE, and ships few apps for the iPhone.

I suspect that Apple's ability to develop software has atrophied. Apple has concentrated its efforts on hardware (and done rather well) but has lost its way with software.

That explains the delay for Apple Intelligence on the iPhone. Apple spent a lot of time and effort on the project, and (I suspect) most of that was for the hardware. Updates to iOS for the new iPhone were (probably) fairly easy and routine. But the new stuff, the thing that needed a lot of work, was Apple Intelligence.

And it's late.

Thinking about the history of Apple's software, I cannot remember a similar big feature added by Apple. There is Facetime, which seems impressive, but I think the iPhone app itself is rather simple; most of the work is in the back end and its scalability. Stage Manager was (is) also rather simple. Even features of the Apple Watch, such as fall detection and SOS calls, are not that complex. Operating systems were not that difficult: the original iOS was new, but iPadOS is a fork of that, and WatchOS is a fork of it too (I think).

Apple Intelligence is a large effort, a greenfield effort (no existing code), and one that is very different from past efforts. Perhaps it is not surprising that Apple is having difficulties.

I expect that Apple Intelligence will be delivered later than expected, and will have more bugs and problems than most Apple software.

I also expect to see more defects and exploits in Apple's operating systems. Operating systems are not particularly complex (they are as complex as one makes them), but development and maintenance require discipline. One gets that discipline through constant development and constant monitoring of that development. It requires an appreciation of the importance of the software, and I'm not sure that Apple has that mindset.

If I'm right, we will see more and more problems with Apple software. (Slowly at first, and then all at once.) Recovery will require a change in Apple's management philosophy and probably the senior management team.