
Wednesday, June 16, 2021

Swift Playgrounds is just beginning

Apple made a lot of announcements recently. One that got little attention was Swift Playgrounds for iPad. Swift Playgrounds is an environment in which children can build simple programs, a kindergarten version of an IDE.

I find this announcement interesting. I think that Swift Playgrounds is a first step in a revolution for coding on the Apple platform. I think that Apple is planning to make Swift Playgrounds (or something derived from it) the standard tool for developing apps.

Some of you may be thinking "Tosh! Swift Playgrounds is for children who are learning to code, not for serious developers" and you would be correct. That's what it is.

Today.

Swift Playgrounds is, essentially, an IDE. It lets one create, run, and debug an app. It automates these activities even more than the traditional IDE, which already integrated them. (Hence the "I" in IDE, for "integrated development environment".)

I think that Apple will expand Swift Playgrounds and make it more capable. They may create multiple layers: Swift Elementary, Swift High School, Swift University. (I'm using names from a school motif; Apple may choose different names.) Each level will be more sophisticated and more powerful than the previous, yet each level will be able to develop working apps and deploy them to Apple's App Store.

Such a plan has several ramifications.

First, Swift Playgrounds (et alia) may replace Xcode for most app developers. If one can do everything necessary to design and build an app in Swift Playgrounds, do we need Xcode?

Second, it may allow Apple to revamp its entire product line.

Once we can develop apps on an iPad, do we need the Macintosh line? Apple could easily drop the MacBook line in favor of iPads. The iPad has the same processor and can (theoretically) do anything a MacBook can do.

Apple may also replace the iMac line with a new, very large iPad-on-a-stand. Apple could produce a 24-inch iPad, place it on a desk stand, and let users attach keyboards and mice, just like an iMac. Would it be much different from today's iPad?

Another question is: How does Apple develop new versions of its operating systems? I assume today Apple uses Mac Pros for that development work. Could Apple shift that work to super-sized iPads? The development of operating systems is possibly the most complex task for Macintosh computers. If Apple can do that work on an iPad, do they need any of the Macintosh computers?

These changes take time. Shifting development work from Xcode to a new series of Swift schools will take at least five years (assuming one year to develop each additional Swift school) and possibly longer. I'm guessing that Apple will develop all of the schools before dropping the Macintosh line, so Mac computers will be with us for at least the next decade.

But it may be that Apple is planning that far ahead. It may be that Apple is planning to kill off the Macintosh computers. It may be that Apple wants its customers to use iPhones and iPads (and the Apple App Store on those devices).

Friday, April 3, 2020

Information in code

We programmers think of code as providing information, to the programmer as well as the computer. Code is converted into an executable that performs tasks for us, which is information for the computer. Code is also a description of those tasks, suitable for a programmer to read.

But information in code is not uniformly distributed. Some constructs in code provide more information than others.

Let's look at three different constructs that each provide information: data types, variable names, and comments. They all provide information to the reader of the code, and are all useful.

The first construct is the variable type. It conveys information. It is most likely correct, and it is certainly consistent with the operations performed by the code. The variable type exists in multiple places in the code, although it may not be obvious. Anywhere the variable is used, the type is present.

Variable types prevent errors, by restricting the contents of the variable and operations on the variable. (At least in statically-typed languages.)

The second construct is the variable name. It also conveys information, but information that is different from the variable type. The variable name provides information about the intent of the variable. A good variable name describes the contents in such a way as to be useful to the programmer.

The name exists in multiple places in the code. It is (syntactically) linked to the variable -- where the variable is used, the name is used.

The third construct is the comment. A comment in code conveys information, or at least it is capable of conveying information. A well-written comment is useful to the programmer: It can inform the reader of reasons for the code. Comments can explain why the code was built in a particular way. No other construct (in code) can provide this information.

A comment exists in only one place in the code. It does not appear in multiple places, like variable types or variable names. Thus, the placement of a comment is important.

Comments may be incorrect. (They can be wrong from the start, or they can be correct and then left unchanged as the associated code is changed.)

Notice that all three of these elements provide value to the programmer. Together they provide a comprehensive set of information: reasons, purpose, and constraints.

All three of these elements are important when writing a program. I won't say "necessary", as some programs can be written with expressive data types and descriptive variable names, and no comments. But any program beyond the trivial will benefit from comments.

Also notice that these three elements are different. Selecting the type of a variable is mostly a technical decision, with clear right-and-wrong answers. Choosing a name for a variable is often difficult -- although it can be easy when a variable's contents correspond to a real-world concept. Composing a comment to explain the reason for a decision requires an understanding of that decision and the skill to convey it in clear (and concise) language.
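A few lines of Ruby can show all three constructs at work. (Ruby is dynamically typed, so here the "type" information is carried by the literals rather than by declarations; the figures are invented for illustration.)

```ruby
# The rate in the data feed is annual, but interest is applied monthly,
# so we convert here, at the point of definition.  (Only a comment can
# record this *reason*; the type and the name cannot.)
annual_rate  = 0.05        # a Float: the literal conveys the type
monthly_rate = annual_rate / 12.0
balance      = 1000.00     # the names convey intent: a rate, a balance
balance *= (1.0 + monthly_rate)
puts balance.round(2)      # 1004.17
```

The name tells us what the value is for, the literal tells us what kind of value it is, and the comment tells us why the conversion happens where it does.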

Effective programmers will use all of these constructs (variable types, variable names, and comments). They will develop skills for selecting the right data types and assigning descriptive variable names, and they will write comments that are helpful to themselves and to other programmers.

Wednesday, March 27, 2019

Something new for programming

One of my recent side projects involved R and R Studio. R is a programming language, an interpreted language with powerful data manipulation capabilities.

I am not impressed with R and I am quite disappointed with R Studio. I have ranted about them in a previous post. But in my ... excitement ... over R and R Studio, I missed the fact that we have something new in programming.

That something new is a new form of IDE, one that has several features:
  • on-line (cloud-based)
  • mixes code and documentation
  • immediate display of output
  • can share the code, document, and results
I call this new model the 'document, code, results, share' model. I suppose we could abbreviate it to 'DCRS', but even that short form seems a mouthful. It may be better to stick to "online IDE".

R Studio has a desktop version, which you install and run locally. It also has a cloud-based version -- all you need is a browser, an internet connection, and an account. The online version looks exactly like the desktop version -- something that I think will change as the folks at R Studio add features.

R Studio puts code and documentation into the same file, using a variant of Markdown (named 'Rmd').

The concept of comments in code is not new. Comments are usually short text items that are denoted with special markers ('//' in C++ and '#' in many languages). The model has always been: code contains comments and the comments are denoted by specific characters or sequences.

Rmd inverts that model: You write a document and denote the code with special markers ('$$' for TeX and '```' for code). Instead of comments (small documents) in your code, you have code in your document.

R Studio runs your code -- all of it or a section that you specify -- and displays the results as part of your document. It is smart enough to pick through the document and identify the code.

The concept of code and documentation in one file is not exclusive to R Studio. There are other tools that do the same thing: Jupyter notebooks, Mozilla's Iodide, and Mathematica (possibly the oldest of the lot). Each allows for text and code, with output. Each also allows for sharing.

At a high level, these online IDEs do the same thing: Create a document, add code, see the results, and share.

Over the years, we've shared code through various means: physical media (punch cards, paper tape, magnetic tape, floppy disk), shared storage locations (network disks), and version-control repositories (CVS, Subversion, Github). All of these methods required some effort. The new online-IDEs reduce that effort; no need to attach files to e-mail, just send a link.

There are a few major inflection points in software development, and I believe that this is one of them. I expect the concept of mixing text and code and results will become popular. I expect the notion of sharing projects (the text, the code, the results) will become popular.

I don't expect all programs (or all programmers) to move to this model. Large systems, especially those with hard performance requirements, will stay in the traditional compile-deploy-run model with separate documentation.

I see this new model of document-code-results as a new form of programming, one that will open new areas. The document-code-results combination is a good match for sharing work and results, and is close in concept to academic and scientific journals (which contain text, analysis, and results of that analysis).

Programming languages have become powerful, and that supports this new model. A Fortran program for simulating water in a pipe required eight to ten pages; the Matlab language can perform the same work in roughly half a page. Modern languages are more concise and can present their ideas without the overhead of earlier computer languages. A small snippet of code is enough to convey a complex study. This makes them suitable for analysis and especially suitable for sharing code.

It won't be traditional programmers who flock to the document-code-results-share model. Instead it will be non-programmers who can use the analysis in their regular jobs.

The online IDE supports a project with these characteristics:
  • small code
  • multiple people
  • output is easily visualizable
  • sharing and enlightenment, not production
We've seen this kind of change before. It happened with the early microcomputers and first PCs, when "civilians" (that is, people other than professional programmers) bought computers and taught themselves programming in BASIC. It happened slightly later with spreadsheets, when other "civilians" bought computers and taught themselves Visicalc and Lotus 1-2-3 (and later, Excel). The "spreadsheet revolution" made computing available to many non-programmers, with impressive results. The "online IDE" could do the same.

Tuesday, July 18, 2017

A sharp edge in Ruby

I like the Ruby programming language. I've been using it for several projects, including an interpreter for the BASIC language. (The interpreter for BASIC was an excuse to do something in Ruby and learn about the language.)

My experience with Ruby has been a good one. I find that the language lets me do what I need, and often very quickly. The included classes are well-designed and include the functions I need. From time to time I do have to add some obscure capability, but those are rare.

Yet Ruby has a sharp edge to it, an aspect that can cause trouble if you fail to pay attention.

That aspect is one of its chief features: flexibility.

Let me explain.

Ruby is an object-oriented language, which means the programmer can define classes. Each class has a name, some functions, and usually some private data. You can do quite a bit with class definitions, including class variables, class instance variables, and mix-ins to implement features in multiple classes.

You can even modify existing classes, simply by declaring the same class name and defining new functions. Ruby accepts a second definition of a class and merges it into the first definition, quietly and efficiently. And that's the sharp edge.

This "sharp edge" cut me when I wasn't expecting it. I was working on my BASIC interpreter, and had just finished a class called "Matrix", which implemented matrix operations within the language. My next enhancement was for array operations (a matrix being a two-dimensional structure and an array being a one-dimensional structure).

I defined a class called "Array" and defined some functions, including a "to_s" function. (The name "to_s" is the Ruby equivalent of "ToString()" or "to_string()" in other languages.)

And my code behaved oddly. Existing functions, having nothing to do with arrays or my Array class, broke.

Experienced Ruby programmers are probably chuckling at this description, knowing the problem.

Ruby has its own Array class, and my Array class was not a new class but a modification of the existing, built-in class named "Array". My program, in actuality, was quite different from what I had imagined. When I defined the function "to_s" in "my" Array class, I was actually overwriting the existing "to_s" function in the Ruby-supplied Array class. And that happened quietly and efficiently -- no warning, no error, no informational message.
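The hazard is easy to reproduce. This is a minimal reconstruction of the mistake, not the actual interpreter code:

```ruby
# This looks like a brand-new class, but Ruby's built-in Array already
# exists, so this quietly RE-OPENS it and overwrites its to_s method.
class Array
  def to_s
    "my BASIC array"
  end
end

# Every Array in the program -- including Ruby's own -- is now affected:
puts [1, 2, 3].to_s    # prints "my BASIC array", not "[1, 2, 3]"
```

Any code elsewhere in the program that relied on Array's normal `to_s` now misbehaves, which is exactly the "weird" breakage I saw.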

Part of this problem is my fault. I was not on my guard against such a problem. But part of the problem, I believe, is Ruby's -- specifically the design of Ruby. Letting one easily modify an existing class, with no warning, is dangerous. And I say this not simply due to my background with languages that use static checking.

My error aside, I can think of two situations in which this can be a problem. The first is when a new version of the Ruby language (and its libraries) is released. Are there new classes defined in the libraries? Could the names of those classes duplicate any names I have used in my project? For example, will Ruby one day come with a class named "Matrix"? If it does, it will collide with my class named "Matrix". How will I know that there is a duplicate name?

The second situation is on a project with multiple developers. What happens if two developers create classes with the same name? Will they know? Or will they have to wait for something "weird" to happen?

Ruby has some mechanisms to prevent this problem. One can use namespaces within the Ruby language to prevent such name conflicts. A simple grep of the code for "class [A-Z]\w+", followed by a sort, will identify duplicate names. But these solutions require discipline and will -- they don't come "for free".
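The namespace fix looks like this (BasicLang is an invented module name, standing in for whatever namespace a project chooses):

```ruby
# Wrapping the class in a module gives it a distinct name
# (BasicLang::Array), so Ruby's built-in Array is left alone.
module BasicLang
  class Array
    def to_s
      "my BASIC array"
    end
  end
end

puts BasicLang::Array.new.to_s   # prints "my BASIC array"
puts [1, 2, 3].to_s              # built-in Array untouched: "[1, 2, 3]"
```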

As I said earlier, this is a sharp edge to Ruby. Is it a defect? No, I think this is the expected behavior for the language. It's not a defect. But it is an aspect of the language, and one that may limit the practicality of Ruby on large applications.

I started this post with the statement that I like Ruby. I still like Ruby. It has a sharp edge (like all useful tools), and I think that we should be aware of it.

Sunday, June 14, 2015

Data services are more flexible than files

Data services provide data. So do files. But the two are very different.

In the classic PC world ("classic" meaning desktop applications), the primary storage mechanism is the file. A file is, at its core, a bunch of bytes. Not just a random collection of bytes, but a meaningful collection. That collection could be a text file, a document, a spreadsheet, or any one of a number of possibilities.

In the cloud world, the primary storage mechanism is the data service. That could be an SQL database, a NoSQL database, or a web service (a data service). A data service provides a collection of values, not a collection of bytes.

Data services are active things. They can perform operations. A data service is much like a query in an SQL database. (One may think of SQL as a data service, if one likes.) You can specify a subset of the data (either columns or rows, or both), the sequence in which the data appears (again, either columns or rows, or both), and the format of the data. For sophisticated services, you can collect data from multiple sources.

Data services are much more flexible and powerful than files.

But that's not what is interesting about data services.

What is interesting about data services is the mindset of the programmer.

When a programmer is working with data files, he must think about what he needs, what is in the file, and how to extract what he needs from the file. The file may have extra data (unwanted data rows, or perhaps undesired headings and footings). The file may have extra columns of data. The data may be in a sequence different from the desired sequence. The data may be in a format that is different from what is needed.

The programmer must compensate for all of these things, and write code to handle the unwanted data or the improper formats. Working with files means writing code to match the file.

In contrast, data services -- well-designed data services -- can format the data, filter the data, and clean the data for the programmer. Data services have capabilities that files do not; they are active and can perform operations.

A programmer using files must think "what does the file provide, and how can I convert it to what I need?"; a programmer using data services thinks "what do I need?".
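The difference in mindset can be sketched in a few lines of Ruby. The data here is invented, and SalesService is a hypothetical stand-in for a real data service:

```ruby
require "csv"

raw = <<~FILE
  date,region,amount
  2015-06-01,east,100.50
  2015-06-02,west,200.25
  2015-06-03,east,50.00
FILE

# File mindset: read everything, then filter, project, and convert by hand.
east_total = CSV.parse(raw, headers: true)
                .select { |row| row["region"] == "east" }
                .sum { |row| row["amount"].to_f }
puts east_total    # 150.5

# Service mindset: ask for exactly what is needed.  SalesService is a
# hypothetical stand-in for a data service, for illustration only.
class SalesService
  def initialize(csv)
    @rows = CSV.parse(csv, headers: true)
  end

  def total(column, where:)
    @rows.select { |r| where.all? { |key, value| r[key.to_s] == value } }
         .sum { |r| r[column.to_s].to_f }
  end
end

puts SalesService.new(raw).total(:amount, where: { region: "east" })   # 150.5
```

The answers are the same; what differs is where the filtering and converting logic lives, and therefore what the calling programmer has to think about.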

With data services, the programmer can think less about what is available and think more about what has to be done with the data. If you're a programmer or a manager, you understand how this change makes programmers more efficient.

If you're writing code or managing projects, think about data services. Even outside of the cloud, data services can reduce the programming effort.

Tuesday, March 24, 2015

Why Alan Turing is a hero to programmers

We in the programming world have few heroes. That is, we have few heroes who are programmers. There are people such as Bill Gates and Steve Jobs, but they were businessmen and visionaries, not programmers.

Yet we do have a few heroes, a few programmers who have risen above the crowd. Here is a short list:

Grace Hopper: Serving in the Navy, she created and advocated the idea of a compiler. At the time, computers were programmed either by wire (physical wires attached to plug-boards) or in assembly language. A compiler, converting English-like statements into machine instructions, was a bold step.

Donald Knuth: His multi-volume work The Art of Computer Programming is a comprehensive treatment of machine design, assemblers, compilers, searching, sorting, and the limits of digital computation. He also created the TeX typesetting system and the notion of "Literate Programming".

Brian Kernighan and Dennis Ritchie: Creators of the C language.

Larry Wall: Creator of the Perl language, and also of the 'patch' program used to apply changes across systems.

Fred Brooks: The chief designer of IBM's operating system for the System/360 computers; his book The Mythical Man-Month has several interesting observations on the teams that construct software.

Gerald Weinberg: Known for his books on systems analysis and design, although I find his work The Psychology of Computer Programming more useful to programmers and to the managers of programming teams.

All of these folks are (were) smart, creative, and contributors to the programming art. Yet one has a special place in this list: Alan Turing.

Alan Turing, the subject of the recent movie "The Imitation Game", made significant contributions to the programming craft. Among them:

  • Code-breaking in World War II with the electromechanical Bombe
  • The Turing Test
  • Turing Machines
  • The Halting Problem

All of these are impressive. Turing was many things: mathematician, biologist, philosopher, logician.

Of all of his accomplishments, I consider his proof of the halting problem to be the one act that raises him above our other heroes. His work with code-breaking clearly makes him a programmer. His idea of the Turing Test set clear (if perhaps unreachable) goals for artificial intelligence.

The notion of Turing machines, with the corresponding notion that one Turing machine can emulate another Turing machine, is a brilliant insight, and enough to make him a hero above others.

Yet it is the halting problem, or more specifically Turing's proof of the halting problem (he proved that no general procedure can determine, in advance, whether an arbitrary program will eventually stop), that pushes him across the line. The proof of the halting problem connects programming to mathematics.

Mathematics, of course, has been with us for centuries. It is as old as counting, and has rigorous and durable proofs. Euclid's work is two millennia old, yet still used today. It is these proofs that make mathematics special - math is the "Queen of the Sciences" and used by all other branches of knowledge.

Mathematics is not without problems. There is a result called the Incompleteness Theorem (due to Kurt Gödel), which states that in any consistent system of axioms (rules) rich enough to include the integers, there exist statements that are true yet cannot be proved within that system. (The Incompleteness Theorem also means that should you add an axiom to the system to allow such a proof, you will find that there are other true statements that are not provable in the new system.)

That sounds a lot like the halting problem.

But the Incompleteness Theorem is the result of thousands of years of study, and computing is young; we have had digital computing for less than a century. Perhaps we can find another corresponding mathematical surprise, one that occurred earlier in the history of mathematics.

Perhaps Turing's proof of the halting problem is closer to the discovery of irrational numbers. The Greek philosophers were enchanted with mathematics, both geometry and arithmetic. The reduction of physical phenomena to numbers was a specialty for them, and they loved integers and ratios. They were quite surprised (and by some accounts, horrified) to learn that numbers such as the square root of two could not be represented by an integer or a ratio of integers.

That kind of problem sounds close to our halting problem. For the Greeks, irrational numbers broke their nice, neat world of integers. For us, the halting problem breaks the nice, neat world of predictable programs. (To be clear, most of our programs do run "to completion" and halt. The theory states that we cannot, in general, know in advance that they will halt. We simply run them and find out.)
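The standard diagonal argument behind the proof can be sketched in a few lines of Ruby-flavored pseudocode. Suppose a method halts?(prog, input) existed and always answered correctly; then this short program defeats it:

```ruby
# Pseudocode: halts? is hypothetical -- the proof shows it cannot exist.
def confound(prog)
  if halts?(prog, prog)   # would this program halt when fed itself?
    loop { }              # ...then run forever instead
  end                     # ...otherwise halt at once
end

# Now ask halts? about confound run on itself.  If halts? answers
# "halts", confound loops forever; if it answers "loops", confound
# halts.  Either answer is wrong, so no correct halts? can be written.
```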

Turing gave us the proof of the halting problem. In doing so, he connected programming to mathematics, and (so I argue) us early programmers to the early mathematicians.

And that is why I consider Alan Turing a hero above others.

Wednesday, June 11, 2014

Learning to program, without objects

Programming is hard.

Object-oriented programming is really hard.

Plain (non-object-oriented) programming has the concepts of statements, sequences, loops, comparisons, boolean logic, variables, variable types (text and numeric), input, output, syntax, editing, and execution. That's a lot to comprehend.

Object-oriented programming has all of that, plus classes, encapsulation, access, inheritance, and polymorphism.

Somewhere in between the two are modules and multi-module programs, structured programming, subroutines, user-defined types (structs), and debugging.

For novices, the first steps of programming (plain, non-object-oriented programming) are daunting. Learning to program in BASIC was hard. (The challenge was in organizing data into small, discrete chunks and processes into small, discrete steps.)

I think that the days of an object-oriented programming language as the "first language to learn" are over. We will not be teaching C# or Java as the introduction to programming. (And certainly not C++.)

The introduction to programming will be with languages that are not necessarily object-oriented: Python or Ruby. Both are, technically, object-oriented programming languages, supporting classes, inheritance, and polymorphism. But you don't have to use those features.

C# and Java, in contrast, force one to learn about classes from the start. One cannot write a program without classes. Even the simple "Hello, world!" program in C# or Java requires a class to hold main().

Python and Ruby can get by with a simple

print("Hello, world")

(or, in Ruby, puts "Hello, world") and be done with it.

Real object-oriented programs (ones that include a class hierarchy, inheritance, and polymorphism) require a number of types (at least two, probably three) and operations complex enough to justify that many types. The canonical examples of drawing shapes or simulating an ATM are complex enough to warrant object-oriented code.

A true object-oriented program has a minimum level of complexity.

When learning the art of programming, do we want to start with that level of complexity?

Let us divide programming into two semesters. The first semester can be devoted to plain programming. The second semester can introduce object-oriented programming. I think that the "basics" of plain programming are enough for a single semester. I also think that one must be comfortable with those basics before one starts with object-oriented programming.

Monday, May 19, 2014

The shift to cloud is bigger than we think

We've been using operating systems for decades. While they have changed over the years, they have offered a consistent set of features: time-slicing of the processor, memory allocation and management, device control, file systems, and interrupt handling.

Our programs ran "under" (or "on top of") an operating system. Our programs were also fussy -- they would run on one operating system and only that operating system. (I'm ignoring the various emulators that have come and gone over time.)

The operating system was the "target", it was the "core", it was the sun around which our programs orbited.

So it is rather interesting that the shift to cloud computing is also a shift away from operating systems.

Not that cloud computing is doing away with operating systems. Cloud computing coordinates the activities of multiple, usually virtualized, systems, and those systems run operating systems. What changes in cloud computing is the programming target.

Instead of a single computer, a cloud system is composed of multiple systems: web servers, database servers, and message queues, typically. While those servers and queues must run on computers (with operating systems), we don't care about them. We don't insist that they run any specific operating system (or even use a specific processor). We care only that they provide the necessary services.

In cloud computing, the notion of "operating system" fades into the infrastructure.

As cloud programmers, we don't care if our web server is running Windows. Nor do we care if it is running Linux. (The system administrators do care, but I am taking a programmer-centric view.) We don't care which operating system manages our message queues.

The level of abstraction for programmers has moved from operating system to web services.

That is a significant change. It means that programmers can focus on a higher level of work.

Hardware-tuned programming languages like C and C++ will become less important. Not completely forgotten, but used only by the specialists. Languages such as Python, Ruby, and Java will be popular.

Operating systems will be less important. They will be ignored by the application level programmers. The system architects and sysadmins, who design and maintain the cloud infrastructure, will care a lot about operating systems. But they will be a minority.

The change to services is perhaps not surprising. We long ago shifted away from processor-specific code, burying that work in our compilers. COBOL and FORTRAN, the earliest high-level languages, were designed to run on different processors. Microsoft insulated us from the Windows API with MFC and later the .NET framework. Java separated us from the processor with its virtual machine. Now we take the next step and bury the operating system inside of web services.

Operating systems won't go away. But they will become less visible, less important in conversations and strategic plans. They will be more of a commodity and less of a strategic advantage.

Tuesday, March 4, 2014

After Big Data comes Big Computing

The history of computing is a history of tinkering and revision. We excel at developing techniques to handle new challenges.

Consider the history of programming:

Tabulating machines

  • plug-boards with wires

Von Neumann architecture (mainframes)

  • machine language
  • assembly language
  • compilers (FORTRAN and COBOL)
  • interpreters (BASIC) and timeshare systems

The PC revolution (the IBM PC)

  • assembly language
  • Microsoft BASIC

The Windows age

  • Object-oriented programming
  • Event-driven programming
  • Visual Basic

Virtual machines

  • UCSD p-System
  • Java and the JVM

Dynamic languages

  • Perl
  • Python
  • Ruby
  • Javascript

This (severely abridged) list of hardware and programming styles shows how we change our technology. Our progress is not a smooth advance from one level to the present, but a series of jumps, some of them quite large. It was a large jump from plug-boards to memory-resident programs. It was another large jump to an assembler. One can argue that later jumps were larger or smaller, but those arguments are not important to the basic idea.

Notice that we do not know where things are going. We do not see the entire chain up front. In the 1950s, we did not know that we would end up here (in 2014) with dynamic languages and cloud computing. Often we cannot see the next step until it is upon us and only the best of visionaries can see past it.

Big Data is such a jump, enabled by cheap storage and cloud computing. That change in technology is upon us.

Big Data is the acquisition and storage (and use) of large quantities of data. Not just "lots of data" but mind-boggling quantities of data. Data that makes our current "very large" databases look small and puny. Data that contains not only financial transactions but server logs, e-mails, security videos, medical records, and sensor readings from just about any kind of device. (The sensor readings may be from building sensors for temperature, from vehicles for position and speed and engine performance, from packages in transit, from assembly lines, from gardens and parks for temperature and humidity, ... the list is endless.)

But what happens once we acquire and store these mind-boggling heaps of data?

The obvious solution is to do something with it. And we are doing something with it; we use tools like Hadoop to process and analyze and visualize it.

I think Hadoop (and its brethren) are a good start. We're at the dawn of the "Big Data Age", and we don't really know what we want -- in terms of analyses and tools. We have some tools, and they seem okay.

But this is just the dawn of the "Big Data Age". I think we will develop new techniques and tools to analyze our data. And, I suspect those tools and techniques will require lots of computation. So much computation that someone will coin the term "Big Computing" to represent the use of mind-boggling amounts of computing power.

Big Computing seems a natural follow-on to Big Data. And just as we have developed languages to handle new programming challenges, we will develop new languages for Big Computing.

We have two hints for programming in the era of Big Computing. One hint is cloud computing, with its ability to scale up as we need more power. We've already seen that programs for the cloud have a different organization than "classic" programs. Cloud programs use small modules connected by message queues. The modules hold no state, which allows the system to route transactions to any available module.

The other hint is at the small end of the computing world, at the chip level. Here we see advances in processor design: more cores, more caching, more processing. The GreenArrays GA144 is a chip that contains 144 computers -- not cores, but computers. This is another contender for Big Computing.

I'm not sure what "Big Computing" and its programming will look like, but I am confident that they will be interesting!

Thursday, May 2, 2013

Our fickleness on the important aspects of programs

Over time, we have changed which attributes we value in programs. If we divide the IT age into four eras, we can see this change. Let's consider the four eras to be mainframe, PC, web, and mobile/cloud. These four eras used different technology and different languages, and praised different accomplishments.

In the mainframe era, we focussed on raw efficiency. We measured CPU usage, memory usage, and disk usage. We strove to have enough CPU, memory, and disk, with some to spare but not too much. Hardware was expensive, and too much spare capacity meant that you were paying for more than you needed.

In the PC era we focussed not on efficiency but on user-friendliness. We built applications with help screens and menus. We didn't care too much about efficiency -- many people left PCs powered on overnight, with no "jobs" running.

With web applications, we focussed on globalization, with efficiency as a sub-goal. The big effort was in the delivery of an application to a large number of users. This meant translation into multiple languages, the "internationalization" of an application, support for multiple browsers, and support for multiple time zones. But we didn't want to overload our servers, either, so early Perl CGI applications were quickly converted to C or other languages for performance.

With applications for mobile/cloud, we desire two aspects: For mobile apps (that is, the 'UI' portion), we want something easier than "user-friendly". The operation of an app must not merely be simple, it must be obvious. For cloud apps (that is, the server portion), we want scalability. An app must not be monolithic, but assembled from collaborative components.

The objectives for systems vary from era to era. Performance was a highly measured aspect in the mainframe era, and almost ignored in the PC era.

The shift from one era to another may be difficult for practitioners. Programmers in one era may be trained to "optimize" their code for the dominant aspect. (In the mainframe era, they would optimize for performance.) A succeeding era would demand other aspects in their systems, and programmers may not be aware of the change. Thus, a highly-praised mainframe programmer with excellent skills at algorithm design, when transferred to a PC project, may find that his skills are not desired or recognized. His code may receive a poor review, since the expectation for PC systems is "user friendly" and his skills from mainframe programming do not provide that aspect.

Similarly, a skilled PC programmer may have difficulties when moving to web or mobile/cloud systems. The expectations for user interface, architecture, and efficiency are quite different.

Practitioners who start with a later era (for example, the 'young turks' starting with mobile/cloud) may find it difficult to comprehend the reasoning of programmers from an earlier era. Why do mainframe programmers care about the order of mathematical operations? Why do PC programmers care so much about in-memory data structures, to the point of writing their own?

The answers are that, at the time, these were important aspects of programs. They were pounded into the programmers of earlier eras, to the degree that those programmers apply these optimizations without thinking about them.

Experienced programmers must look at the new system designs and the context of those designs. Mobile/cloud needs scalability, and therefore needs collaborative components. The monolithic designs that optimized memory usage are unsuitable to the new environment. Experienced programmers must recognize their learned biases and discard those that are not useful in the new era. (Perhaps we can consider this a problem of cache invalidation.)

Younger programmers would benefit from a deeper understanding of the earlier eras. Art students study the conditions (and politics) of the old masters. Architects study the buildings of the Greeks, Romans, and medieval kingdoms. Programmers familiar with the latest era, and only the latest era, will have a difficult time communicating with programmers of earlier eras.

Each era has objectives and constraints. Learn about those objectives and constraints, and you will find a deeper appreciation of programs and a greater ability to communicate with other programmers.

Tuesday, August 28, 2012

The deception of C++'s 'continue' and 'break'

Pick up any C++ reference book, visit any C++ web site, and you will see that the 'continue' and 'break' keywords are grouped with the loop constructs. In many ways it makes sense, since you can use these keywords with only those constructs.

But the more I think about 'continue' and 'break', the more I realize that they are not loop constructs. Yes, they are closely associated with 'while' and 'for' loops (and, for 'break', with 'switch' statements), but they are not really loop constructs.

Instead, 'continue' and 'break' are variations on a different construct: the 'goto' keyword.

The 'continue' and 'break' statements in loops bypass blocks of code. 'continue' transfers control to the end of the loop block and allows the next iteration to continue. 'break' transfers control to the end of the loop block and forces the loop to end (allowing code after the loop to execute). These are not loop operations but 'transfer of control' operations, or 'goto' operations.

Now, modern programmers have declared that 'goto' operations are evil and must never, ever be used. Therefore, 'continue' and 'break', as 'goto' in disguise, are evil and must never, ever be used.

(The 'break' keyword can be used in 'switch/case' statements, however. In that context, a 'goto' is exactly the construct that we want.)

Back to 'continue' and 'break'.

If 'continue' and 'break' are merely cloaked forms of 'goto', then we should strive to avoid their use. We should seek out the use of 'continue' and 'break' in loops and refactor the code to remove them.

I will be looking at code in this light, and searching for the 'continue' and 'break' keywords. When working on systems, I will make their removal one of my metrics for the improvement of the code.

Thursday, April 26, 2012

All the stuff!

A recent visit to the dentist got me observing and thinking. My thought was "Wow, look at all the stuff!"

The difference between the practice of dentistry and the practice of programming can be measured by the difference in stuff.

Compared to a developer (programmer, geek, whatever title you want), a dentist is burdened with a large quantity of stuff, much of it expensive.

First, the dentist (and this was a single practitioner, not a collection of dentists) has an office. That is, a physical space in a building. Not a randomly-assigned space as in a co-working office, but a permanently assigned space. A dentist needs a permanently assigned space, because the practice of dentistry requires a bunch of things.

Those things include an operating room (one can argue that this is an office within the office) with the specialized dentist chair, the specialized lamp, X-ray emitter, tools, tray for tools, mobile stand to hold the tray for the tools, sink and counter-top, hose with suction, water squirter, and other items I cannot readily identify.

The larger office has a receptionist area with receptionists (two of them!), patient files, folders (to hold the files), cabinets (to hold the folders), computers, printers, phones, chairs and desks, and general office supplies. It also has a waiting room with chairs, tables, lamps, potted plants, a television, magazines, wastebaskets, and Muzak.

Programming, on the other hand, needs the following items: a laptop computer with a certain amount of processing power and storage, an internet connection, and the rest of the internet. And a place to sit with a connection for power. Maybe a cell phone.

Now, the "rest of the internet" can hold a lot of stuff. Probably more than the few items in the dentist office. And even if we limit that set to the things that are needed by a programmer (editor, compiler, a few other tools, Twitter and Skype, and a browser), the number of items for each practice may be about the same.

But the programmer, in my mind (and I will admit that I am biased), has the more convenient set of stuff. It all fits in the laptop, and can be packed up and moved at a moment's notice. And programmers do not need permanent offices.

I suspect that we will achieve the "officeless office" before we achieve the "paperless office", and the move to the officeless office will occur in professions. Certain professions (probably the newer ones) will move to the officeless office. Brand-new professions may start that way. Some professions may lag, and some may never move out of their permanent offices.