Thursday, March 28, 2019

Spring cleaning

Spring is in the air! Time for a general cleaning.

An IT shop of any significant size will have old technologies. Some folks will call them "legacy applications". Other folks try not to think about them. But a responsible manager will take inventory of the technologies in his (or her) shop and winnow out those that are not serving their purpose (or are posing threats).

Here are some ideas for tech to get rid of:

Perl: I have used Perl. When the alternatives were C++ and Java, Perl was great. We could write programs quickly, and they tended to be small and easy to read. (Well, sort of easy to read.)

Actually, Perl programs were often difficult to read. And they still are difficult to read.

With languages such as Python and Ruby, I'm not sure that we need Perl. (Yes, there may be modules or libraries that work only with Perl. But they are few.)

Recommendation: If you have no compelling reason to stay with Perl, move to Python.
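
As a sketch of how a typical Perl chore translates: this small Python script tallies error codes in a log file, the sort of one-page text processing that Perl made famous. (The file name and the message format are invented for the example.)

# Count occurrences of each error code in a log file.
# (Hypothetical file name and message format.)
import re
from collections import Counter

counts = Counter()
with open("server.log") as log:
    for line in log:
        match = re.search(r"ERROR (\d+)", line)
        if match:
            counts[match.group(1)] += 1

for code, count in counts.most_common():
    print(code, count)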

Visual Basic and VB.NET: Visual Basic (the non-.NET version) is old and difficult to support. It will only become older and more difficult to support. It does not fit in with web development -- much less cloud development. VB.NET has always been a second chair to C#.

Recommendation: Migrate from VB.NET to C#. Migrate from Visual Basic to anything (except Perl).

Any version of Windows other than Windows 10: Windows 10 has been with us for years. There is no reason to hold on to Windows 8 or Windows 7 (or Windows Vista).

If you have applications that can run only on Windows 7 or Windows 8, you have applications that will eventually die.

You don't have to move everything to Windows 10. You can move some applications to Linux, for example. If people are using only web-based applications, you can issue them Chromebooks or low-end Windows laptops.

Recommendation: Replace older versions of Windows with Windows 10, Linux, or Chrome OS.

CVS and Subversion: Centralized version control systems require administration, which translates into expense. Distributed version control systems often cost less to administer, once you teach people how to use them. (The transition is not always easy, and the conversion costs are not zero, but in the long run the distributed systems will cost you less.)

Recommendation: Move to git.
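
The mechanics of the move are mostly automated. Here is a sketch for a Subversion repository with the conventional trunk/branches/tags layout; the URLs are invented, and converting branch and tag history takes additional care:

git svn clone --stdlayout https://svn.example.com/project project
cd project
git remote add origin https://git.example.com/project.git
git push origin master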

Everyone has old technology. The wise manager knows about it and decides when to replace it. The foolish manager ignores the old technology and replaces it only when forced to by external events, not at a time of his choosing.

Be a wise manager. Take inventory of your technology, assess risk, and build a plan for replacements and upgrades.

Wednesday, March 27, 2019

Something new for programming

One of my recent side projects involved R and R Studio. R is a programming language, an interpreted language with powerful data manipulation capabilities.

I am not impressed with R and I am quite disappointed with R Studio. I have ranted about them in a previous post. But in my ... excitement ... over R and R Studio, I missed the fact that we have something new in programming.

That something new is a new form of IDE, one that has several features:
  • online (cloud-based)
  • mixes code and documentation
  • immediate display of output
  • can share the code, document, and results
I call this new model the 'document, code, results, share' model. I suppose we could abbreviate it to 'DCRS', but even that short form seems a mouthful. It may be better to stick to "online IDE".

R Studio has a desktop version, which you install and run locally. It also has a cloud-based version -- all you need is a browser, an internet connection, and an account. The online version looks exactly like the desktop version -- something that I think will change as the folks at R Studio add features.

R Studio puts code and documentation into the same file. It uses a variant of Markdown (named 'Rmd').

The concept of comments in code is not new. Comments are usually short text items that are denoted with special markers ('//' in C++ and '#' in many languages). The model has always been: code contains comments and the comments are denoted by specific characters or sequences.

Rmd inverts that model: You write a document and denote the code with special markers ('$$' for TeX and '```' for code). Instead of comments (small documents) in your code, you have code in your document.

R Studio runs your code -- all of it or a section that you specify -- and displays the results as part of your document. It is smart enough to pick through the document and identify the code.
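
Here is a minimal Rmd sketch of my own (not taken from R Studio's documentation). The prose is the document; the marked block is live R code, here run against R's built-in 'cars' dataset:

We measured the speed and stopping distance of fifty cars.
The summary statistics are computed below.

```{r}
summary(cars)
```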

The concept of code and documentation in one file is not exclusive to R Studio. There are other tools that do the same thing: Jupyter notebooks, Mozilla's Iodide, and Mathematica (possibly the oldest of the lot). Each allows for text and code, with output. Each also allows for sharing.

At a high level, these online IDEs do the same thing: Create a document, add code, see the results, and share.

Over the years, we've shared code through various means: physical media (punch cards, paper tape, magnetic tape, floppy disks), shared storage locations (network disks), and version-control repositories (CVS, Subversion, GitHub). All of these methods required some effort. The new online IDEs reduce that effort: no need to attach files to e-mail, just send a link.

There are a few major inflection points in software development, and I believe that this is one of them. I expect the concept of mixing text and code and results will become popular. I expect the notion of sharing projects (the text, the code, the results) will become popular.

I don't expect all programs (or all programmers) to move to this model. Large systems, especially those with hard performance requirements, will stay in the traditional compile-deploy-run model with separate documentation.

I see this new model of document-code-results as a new form of programming, one that will open new areas. The document-code-results combination is a good match for sharing work and results, and is close in concept to academic and scientific journals (which contain text, analysis, and results of that analysis).

Programming languages have become powerful, and that supports this new model. A Fortran program for simulating water in a pipe required eight to ten pages; the Matlab language can perform the same work in roughly half a page. Modern languages are more concise and can present their ideas without the overhead of earlier computer languages. A small snippet of code is enough to convey a complex study. This makes them suitable for analysis and especially suitable for sharing code.
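
As a rough illustration (in Python, with invented numbers): a complete, runnable analysis can be a handful of lines -- short enough to paste into a shared document.

# A tiny but complete analysis: summary statistics for a sample.
import statistics

measurements = [4.1, 4.7, 5.0, 5.2, 6.3]  # invented sample data
print("mean: ", statistics.mean(measurements))
print("stdev:", statistics.stdev(measurements))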

It won't be traditional programmers who flock to the document-code-results-share model. Instead it will be non-programmers, who can use such analyses in their regular jobs.

The online IDE supports a project with these characteristics:
  • small code
  • multiple people
  • output is easily visualizable
  • sharing and enlightenment, not production
We've seen this kind of change before. It happened with the early microcomputers and first PCs, when "civilians" (that is, people other than professional programmers) bought computers and taught themselves programming in BASIC. It happened slightly later with spreadsheets, when other "civilians" bought computers and taught themselves VisiCalc and Lotus 1-2-3 (and later, Excel). The "spreadsheet revolution" made computing available to many non-programmers, with impressive results. The "online IDE" could do the same.

Tuesday, March 19, 2019

C++ gets serious

I'm worried that C++ is getting too ... complicated.

I am not worried that C++ is a dead language. It is not. The C++ standards committee has adopted several changes over the years, releasing new standards: C++11, C++14, and C++17 (the most recent); C++20 is in progress. Compiler vendors are implementing the new standards. (Microsoft has done an admirable job in the latest versions of its C++ compiler.)

But the changes are impressive -- and intimidating. Even the names of the changes are daunting:
  • contracts, with preconditions and postconditions
  • concepts
  • transactional memory
  • ranges
  • networking
  • modules
  • concurrency
  • coroutines
  • reflection
  • spaceship operator
Most of these do not mean what you think they mean (unless you have been reading the proposed standards). The spaceship operator will be familiar to anyone who has worked in Perl or Ruby. The rest may sound familiar, but each is quite specific in its design and use, and probably very different from your first guess.
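
Take 'concepts' as an example: a concept is a named, compile-time predicate that constrains the types a template will accept -- nothing to do with conceptual modeling. A minimal sketch of my own, based on the feature as adopted for the C++20 draft:

#include <concepts>

// A concept: the constraint that two values of T can be added,
// yielding something convertible back to T.
template <typename T>
concept Addable = requires (T a, T b) {
    { a + b } -> std::convertible_to<T>;
};

// 'sum' participates in overload resolution only for Addable types.
template <Addable T>
T sum(T a, T b) {
    return a + b;
}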

Here is an example of the range-based 'for' loop, which simplifies the common "iterate over a collection" loop:

int array[5] = { 1, 2, 3, 4, 5 };
for (int& x : array)
    x *= 2;

This is a nice improvement. Notice that it uses no explicit STL iterators; this is plain C++ code.

Somewhat more complex is an implementation of the spaceship operator:

template <typename T, typename U>
struct pair {
  T t;
  U u;

  auto operator<=> (pair const& rhs) const
    -> std::common_comparison_category_t<
         decltype(std::compare_3way(t, rhs.t)),
         decltype(std::compare_3way(u, rhs.u))>
  {
    if (auto cmp = std::compare_3way(t, rhs.t); cmp != 0)
        return cmp;

    return std::compare_3way(u, rhs.u);
  }
};

That code seems... not so obvious.

The non-obviousness of code doesn't end there.

Look at two versions of the same wrapper -- a lambda that forwards its arguments to some function 'foo': one for simple value types, and one for all types (value and reference types).

For simple value types, we can write the following code:

std::for_each(vi.begin(), vi.end(), [](auto x) { return foo(x); });

The most generic form:

#define LIFT(foo) \
  [](auto&&... x) \
    noexcept(noexcept(foo(std::forward<decltype(x)>(x)...))) \
    -> decltype(foo(std::forward<decltype(x)>(x)...)) \
  { return foo(std::forward<decltype(x)>(x)...); }

I will let you ponder that bit of "trivial" code.

Notice that this last example uses a preprocessor macro (#define) to do its work, with '\' characters to continue the macro across multiple lines.
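
For what it's worth, the macro earns its keep at the call site, where it lets you hand an entire overload set (here, some function 'foo') to an algorithm:

std::for_each(vi.begin(), vi.end(), LIFT(foo));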

* * *

I have been pondering that code (and more) for some time.

- C++ is becoming more capable, but also more complex. It is now far from the "C with Classes" that was the start of C++.

- C++ is not obsolete, but it is now a language for applications with specific needs. C++ offers fine control over memory management and can provide predictable run-time performance, which are advantages for embedded applications. But if you don't need those specific advantages, I see little reason to invest the extra effort to learn and maintain C++.

- Development work will favor other languages, mostly Java, C#, Python, JavaScript, and Go. Java and C# have become the "first choice" languages for business applications; Python has become the "first choice" for one's first language. The new features of C++, while useful for specific applications, will probably discourage the average programmer. I'm not expecting schools to teach C++ as a first language again -- ever.

- There will remain a lot of C++ code, but C++'s share of "the world of code" will become smaller. Some of this is due to systems being written in other languages. But I'm willing to bet that the total lines of code for C++ (if we could measure it) is shrinking in absolute numbers.

All of this means that C++ development will become more expensive.

There will be fewer C++ programmers. C++ is not the language taught in schools (usually) and it is not the language taught in the "intro to programming" courses. People will not learn C++ as a matter of course; only those who really want to learn it will make the effort.

C++ will be limited to the projects that need the features of C++, projects which are larger and more complex. Projects that are "simple" and "average" will use other languages. It will be the complicated projects, the projects that need high performance, the projects that need well-defined (and predictable) memory management which will use C++.

C++ will continue as a language. It will be used on the high end projects, with specific requirements and high performance. The programmers who know C++ will have to know how to work on those projects -- amateurs and dabblers will not be welcome. If you are managing projects, and you want to stay with C++, be prepared to hunt for talent and be prepared to pay.

Thursday, March 14, 2019

A Relaxed Waterfall

We're familiar with the two development methods: Waterfall and Agile. Waterfall operates in a sequence of large steps: gather requirements, design the system, build the system, test the system, and deploy the system; each step must wait for the prior step to complete before it starts. Agile uses a series of iterations that each involve specifying, implementing and testing a new feature.

Waterfall's advantage is that it promises delivery on a specific date. Agile makes no such promise, but instead promises that you can always ship whatever you have built.

Suppose there was a third method?

How about a modified version of Waterfall: the normal sequence of phases, but with no due date -- no schedule.

This may seem a bit odd, and even nonsensical. After all, the reason people like Waterfall is the big promise of delivery on a specific date. Bear with me.

If we change Waterfall to remove the due date, we can build a very different process. The typical Waterfall project runs a number of phases (analysis, design, coding, etc.), and there is pressure, once a phase has been completed, to never go back; the schedule demands that the next phase begin. Going back from coding, say, because you find ambiguities in the requirements, means spending more time in the analysis phase, which will (most likely) delay the coding phase, which will then delay the testing phase ... and the delays reach all the way to the delivery date.

But if we remove the delivery date, then there is no risk of missing it! We can move back from coding to analysis, or from testing to coding, without penalty. What would that give us?

For starters, the process would be more like Agile development. Agile makes no promise about a specific delivery date, and neither does what I call the "Relaxed Waterfall" method.

A second effect is that we can now move backwards in the cycle. If we complete the first phase (Analysis) and start the second phase (Design) and then find errors or inconsistencies, we can move back to the first phase. We are under no pressure to complete the Design phase "on schedule" so we can restart the analysis and get better information.

The same holds for the shift from Design to the third phase (Coding). If we start coding and find ambiguities, we can easily jump back to Design (or even Analysis) to resolve questions and ensure a complete specification.

While Relaxed Waterfall may sound exactly like Agile, it has differences. We can divide the work into different teams, one team handling each phase. You can have a team that specializes in analysis and the documentation of requirements, a second team that specializes in design, a third team for coding, and a fourth team for testing. The advantage is that people can specialize; Agile requires that all team members know how to design, code, test, and deploy a product. For large projects, that generalist requirement may be infeasible.

This is all speculation. I have not tried to manage a project with Relaxed Waterfall techniques. I suspect that my first attempt might fail. (But then, early attempts with traditional Waterfall failed, too. We would need practice.) And there is no proof that a project run with Relaxed Waterfall would yield a better result.

It was merely an interesting musing.

But maybe it could work.


Monday, March 4, 2019

There is no Linux desktop

Every year, Linux enthusiasts hope that the new year will be the "year of the Linux desktop", the year that Linux dethrones Microsoft Windows as the chief desktop operating system.

I have bad news for the Linux enthusiasts.

There is no Linux desktop.

More specifically, there is not one Linux desktop. Instead, there is a multitude. There are multiple Linux distributions ("distros" in jargon) and it seems that each has its own ideas about the desktop. Some emulate Microsoft Windows, in an attempt to make it easy for people to convert from Windows to Linux. Other distros do things their own (and presumably better) way. Some distros focus on low-end hardware, others focus on privacy. Some focus on forensics, and others are tailored for tinkerers.

Distributions include: Debian, Ubuntu, Mint, SuSE, Red Hat, Fedora, Arch Linux, Elementary, Tails, Kubuntu, CentOS, and more.

The plethora of distributions splits the market. No one distribution is the "gold standard". No one distribution is the leader.

Here's what I consider the big problem for Linux: The split market discourages some software vendors from entering it. If you have a new application, do you support all of the distros or just some? Which ones? How do you test all of the distros that you support? What do you do with customers who use distros that you don't support?

Compared to Linux, the choice of releasing for Windows and macOS is rather simple. Either you support Windows or you don't. (And by "Windows" I mean "Windows 10".) Either you support macOS or you don't. (The latest version of macOS.) Windows and macOS each provide a single platform, with a single installation method and a single API. (Yes, I am simplifying here. Windows has multiple ways to install an application, but it is clear that Microsoft is transitioning to the Universal app.)

I see nothing to reduce the number of Linux distros, so this condition will continue. We will continue to enjoy the benefits of multiple Linux distributions, and I believe that to be good for Linux.

But it does mean that the Evil Plan to take over all desktops will have to wait.