The programming language Pascal had many good ideas, and many of them have been adopted by modern programming languages. One idea that hasn't been adopted is the separation of functions and procedures.
Some definitions are in order. In Pascal, a function is a subroutine that accepts input parameters, can access variables in its scope (and containing scopes), performs some computations, and returns a value. A procedure is similar: it accepts input parameters, can access variables in its scope and containing scopes, performs calculations, and ... does not return a value.
Pascal has the notion of functions and a separate notion of procedures. A function is a function and a procedure is a procedure, and the two are different. A function can be used (in early Pascal, must be used) in an expression. It cannot stand alone.
A procedure, in contrast, is a computational step in a program. It cannot be part of an expression. It is a single statement, although it can be part of an 'if' or 'while' statement block.
Functions and procedures have different purposes, and I believe that the creators of Pascal envisioned functions as unable to change variables outside of themselves. Procedures, I believe, were intended to change variables outside of their immediate scope. In C++ terms, a Pascal-style function would be a (member) function declared 'const', and a procedure would be a function that returns 'void'.
This arrangement is different from the C idea of functions. C combines the idea of function and procedure into a single 'function' construct. A function may be designed to return a value, or it may be designed to return nothing. A function may change variables outside of its scope, but it doesn't have to. (It may or may not have "side effects".)
In the competition among programming languages, C won big early on, and Pascal's ideas have gained acceptance slowly. The C notion of a function has been carried forward by other popular languages: C++, Java, C#, Python, Ruby, and even Go.
I remember quite clearly learning about Pascal (many years ago) and feeling that C was superior to Pascal due to its single approach. I sneered (mentally) at Pascal's split between functions and procedures.
I have come to regret those feelings, and now see the benefit of separating functions and procedures. When building (or maintaining) large-ish systems in modern languages (C++, C#, Java, Python), I have created functions that follow the function/procedure split. These languages force one to write functions -- there is no construct for a procedure -- yet I designed some functions to return values and others to not return values. The value-returning functions I made 'const' when possible, and avoided side effects. The functions with side effects I designed to not return values. In sum, I built functions and procedures, although the compiler uses only the 'function' construct.
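That convention can be sketched in Python (the names and numbers here are my own, invented for illustration): a "function" computes and returns a value with no side effects, while a "procedure" changes state and deliberately returns nothing.

```python
# A "function" in the Pascal sense: computes and returns a value,
# and touches nothing outside its own scope.
def total_price(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# A "procedure" in the Pascal sense: changes state (here, a list
# owned by the caller) and deliberately returns no value.
def apply_discount(prices, discount):
    for i, price in enumerate(prices):
        prices[i] = price * (1 - discount)

prices = [10.0, 20.0]
apply_discount(prices, 0.5)        # procedure: mutates its argument
print(total_price(prices, 0.06))   # function: returns a value
```

The compiler (or interpreter) sees only one construct, but the discipline — value-returners avoid side effects, side-effecters return nothing — is visible to the reader.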
The future may hold programming languages that provide functions and procedures as separate constructs. I'm confident that we will see languages that have these two ideas. Here's why:
First, there is a resurgent class of programming languages called "functional languages". These include ML, Erlang, Haskell, and F#, to name a few. These functional languages embrace the idea behind Pascal's functions: code blocks that perform a calculation with no side effects and return a value. Language designers have already re-discovered the idea of the "pure function".
Second, most ideas from Pascal have been implemented in modern languages. Bounds-checking for arrays. Structured programming. Limited conversion of values from one type to another. The separation of functions and procedures is one more of these ideas.
The distinction between functions and procedures is one more concept that Pascal got right. I expect to see it in newer languages, perhaps over the next decade. The enthusiasts of functional programming will realize that pure functions are not sufficient and that they need procedures. We'll then see variants of functional languages that include procedures, with purists holding on to procedure-less languages. I'm looking forward to the division of labor between functions and procedures; it has worked well for me in my efforts and a formal recognition will help me convey this division to other programmers.
Friday, May 17, 2019
Tuesday, April 23, 2019
Full-stack developers and the split between development and system administration
The notion of a "full stack" developer has been with us for a while. Some say it is a better way to develop and deploy systems; others take the view that it is a way for a company to build systems at lower cost. Despite their differing opinions on the value of a full stack engineer, everyone agrees on the definition: A "full stack" developer (or engineer) is a person who can "do it all", from analysis to development and testing (automated testing), from database design to web site deployment.
But here is a question: Why was there a split in functions? Why did we have separate roles for developers and system administrators? Why didn't we have combined roles from the beginning?
Well, at the very beginning of the modern computing era, we did have a single role. But things became complicated, and specialization was profitable for the providers of computers. Let's go back in time.
We're going way back in time, back before the current cloud-based, container-driven age. Back before the "old school" web age. Before the age of networked (but not internet-connected) PCs, and even before the PC era. We're going further back, before minicomputers and before commercial mainframes such as the IBM System/360.
We're going back to the dawn of modern electronic computing. This was a time before the operating system, and individuals who wanted to use a computer had to write their own code (machine code, not a high-level language such as COBOL) and those programs managed memory and manipulated input-output devices such as card readers and line printers. A program had total control of the computer -- there was no multiprocessing -- and it ran until it finished. When one programmer was finished with the computer, a second programmer could use it.
In this age, the programmer was a "full stack" developer, handling memory allocation, data structures, input and output routines, business logic. There were no databases, no web servers, and no authentication protocols, but the programmer "did it all", including scheduling time on the computer with other programmers.
Once organizations developed programs that they found useful, especially programs that had to be run on a regular basis, they dedicated a person to the scheduling and running of those tasks. That person's job was to ensure that the important programs were run on the right day, at the right time, with the right resources (card decks and magnetic tapes).
Computer manufacturers provided people for those roles, and also provided training for client employees to learn the skills of the "system operator". There was a profit for the manufacturer -- and a cost to be avoided (or at least minimized) by the client. Hence, only a few people were given the training.
Of the five "waves" of computing technology (mainframes, minicomputers, personal computers, networked PCs, and web servers), most started with a brief period of "one person does it all" and then shifted to a model that divided labor among specialists. Mainframes split the work between programmers and system operators (and later, database administrators). Personal computers, by their very nature, began with one person doing everything, but later saw specialists for word processing, databases, and desktop publishing. Networked PCs saw specialization too, with enterprise administrators (such as Windows domain administrators) and programmers each learning different skills.
It was the first specialization of tasks, in the early mainframe era, that set the tone for later specializations.
Today, we're moving away from specialization. I suspect that the "full stack" engineer is desired by managers who have tired of the arguments between specialists. Companies don't want to hear sysadmins and programmers bickering about who is at fault when an error occurs; they want solutions. Forcing sysadmins and programmers to "wear the same hat" eliminates the arguments. (Or so managers hope.)
The specialization of tasks on the different computing platforms happened because it was more efficient. The different jobs required different skills, and it was easier (and cheaper) to train some individuals for some tasks and other individuals for other tasks, and manage the two groups.
Perhaps the relative costs have changed. Perhaps, with our current technology, it is more difficult (and more expensive) to manage groups of specialists, and it is cheaper to train full-stack developers. That may say more about management skills than it does about technical skills.
Wednesday, April 10, 2019
Program language and program size
Can programs be "too big"? Does it depend on the language?
In the 1990s, the two popular programming languages from Microsoft were Visual Basic and Visual C++. (Microsoft also offered Fortran and an assembler, and I think COBOL, but they were used rarely.)
I used both Visual Basic and Visual C++. With Visual Basic it was easy to create a Windows application, but the applications in Visual Basic were limited. You could not, for example, launch a modal dialog from within a modal dialog. Visual C++ was much more capable; you had the entire Windows API available to you. But the construction of Visual C++ applications took more time and effort. A simple Visual Basic application could be "up and running" in a minute. The simplest Visual C++ application took at least twenty minutes. Applications with dialogs took quite a bit of time in Visual C++.
Visual Basic was better for small applications. They could be written quickly, and changed quickly. Visual C++ was better for large applications. Larger applications required more design and coding (and more testing) but could handle more complex tasks. Also, the performance benefits of C++ were only obtained for large applications.
(I will note that Microsoft has improved the experience since those early days of Windows programming. The .NET framework has made a large difference. Microsoft has also improved the dialog editors and other tools in what is now called Visual Studio.)
That early Windows experience got me thinking: are some languages better at small programs, and other languages better at large programs? Small programs written in languages that require a lot of code (verbose languages) have a disadvantage because of the extra work. Visual C++ was a verbose language; Visual Basic was not -- or was less verbose. Other languages weigh in at different points on the scale of verbosity.
Consider a "word count" program. (That is, a program to count the words in a file.) Different languages require different amounts of code. At the small-program end of the scale we have languages such as AWK and Perl. At the large-end of the scale we have COBOL.
(I am considering lines of code here, and not executable size or the size of libraries. I don't count run-time environments or byte-code engines.)
I would much rather write (and maintain) the word-count program in AWK or Perl (or Ruby or Python). Not because these languages are modern, but because the program itself is small. (Trivial, actually.) The program in COBOL is large; COBOL has some string-handling functions (but not many), and it requires a fair amount of overhead to define the program. A COBOL program is long by design. COBOL is a verbose language.
Thus, there is an incentive to build small programs in certain languages. (I should probably say that there is an incentive to build certain programs in certain languages.)
But that is on the small end of the scale of programs. What about the other end? Is there an incentive to build large programs in certain languages?
I believe that the answer is yes. Just as some languages are good for small programs, other languages are good for large programs. The languages that are good for large programs have structures and constructs which help us humans manage and understand code at large scale.
Over the years, we have developed several techniques we use to manage source code. They include:
- Multiple source files (#include files, copybooks, separately compiled files in a project, etc.)
- A library of subroutines and functions (the "standard library")
- A repository of libraries (CPAN, CRAN, gems, etc.)
- The ability to define subroutines
- The ability to define functions
- Object-oriented programming (the ability to define types)
- The ability to define interfaces
- Mix-in fragments of classes
- Lambdas and closures
These techniques help us by partitioning the code. We can "lump" and "split" the code into different subroutines, functions, modules, classes, and contexts. We can define rules to limit the information that is allowed to flow between the multiple "lumps" of a system. Limiting the flow of information simplifies the task of programming (or debugging, or documenting) a system.
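As an illustrative sketch (the class and its names are hypothetical, not from any real system), limiting information flow can be as simple as a class that exposes a small interface and keeps its internal state hidden by convention:

```python
class Counter:
    """A small "lump" of code: callers see only increment() and value.
    The internal tally (_count) is hidden by convention."""

    def __init__(self):
        self._count = 0   # internal state, not part of the interface

    def increment(self):
        self._count += 1

    @property
    def value(self):
        return self._count

c = Counter()
c.increment()
c.increment()
print(c.value)  # → 2
```

Callers that respect the interface cannot depend on how the tally is stored, so the "lump" can be rewritten without touching the rest of the system.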
Is there a point when a program is simply "too big" for a language?
I think there are two concepts lurking in that question. The first is a relative answer, and the second is an absolute answer.
Let's start with a hypothetical example. A mind experiment, if you will.
Let's imagine a program. It can be any program, but it is small and simple. (Perhaps it is "Hello, world!") Let's pick a language for our program. As the program is small, let's pick a language that is good for small programs. (It could be Visual Basic or AWK.)
Let's continue our experiment by increasing the size of our program. As this was a hypothetical program, we can easily expand it. (We don't have to write the actual code -- we simply expand the code in our mind.)
Now, keeping our program in mind, and remembering our initial choice of a programming language, let us consider other languages. Is there a point when we would like to switch from our chosen programming language to another language?
The relative answer applies to a language when compared to a different language. In my earlier example, I compared Visual Basic with Visual C++. Visual Basic was better for small programs, Visual C++ for large programs.
The exact point of change is not clear. It wasn't clear in the early days of Windows programming, either. But there must be a crossover point, where the situation changes from "better in Visual Basic" to "better in Visual C++".
The two languages don't have to be Visual Basic and Visual C++. They could be any pair: COBOL and assembler, Java and Perl, Go and Ruby. Each pair has its own crossover point, but the crossover point is there. Each pair of languages has a point at which it is better to select the more verbose language, because of its capabilities for managing large amounts of code.
That's the relative case, which considers two languages and picks the better of the two. Then there is the absolute case, which considers only one language.
For the absolute case, the question is not "Which is the better language for a given program?", but "Should we write a program in a given language?". That is, there may be some programs which are too large, too complex, too difficult to write in a specific programming language.
Well-informed readers will be aware that a program written in a language that is "Turing complete" can be translated into any other programming language that is also "Turing complete". That is not the point. The question is not "Can this program be written in a given language?" but "Should this program be written in a given language?".
That is a much subtler question, and much more subjective. I may consider a program "too big" for language X while another might consider it within bounds. I don't have metrics for such a decision -- and even if I did, one could argue that my cutoff point (a complexity value of 2000, say) is arbitrary and the better value is somewhat higher (perhaps 2750). One might argue that a more talented team can handle programs that are larger and more complex.
Someday we may have agreed-upon metrics, and someday we may have agreed-upon cutoff values. Someday we may be able to run our program through a tool for analysis, one that computes the complexity and compares the result to our cut-off values. Such a tool would be an impartial judge for the suitability of the programming language for our task. (Assuming that we write programs that are efficient and correct in the given programming language.)
Someday we may have all of that, and the discipline to discard (or re-design) programs that exceed the boundaries.
But we don't have that today.
Thursday, March 28, 2019
Spring cleaning
Spring is in the air! Time for a general cleaning.
An IT shop of any significant size will have old technologies. Some folks will call them "legacy applications". Other folks try not to think about them. But a responsible manager will take inventory of the technology in his (or her) shop and winnow out those that are not serving their purpose (or are posing threats).
Here are some ideas for tech to get rid of:
Perl: I have used Perl. When the alternatives were C++ and Java, Perl was great. We could write programs quickly, and they tended to be small and easy to read. (Well, sort of easy to read.)
Actually, Perl programs were often difficult to read. And they still are difficult to read.
With languages such as Python and Ruby, I'm not sure that we need Perl. (Yes, there may be a module or library that works only with Perl. But they are few.)
Recommendation: If you have no compelling reason to stay with Perl, move to Python.
Visual Basic and VB.NET: Visual Basic (the non-.NET version) is old and difficult to support. It will only become older and more difficult to support. It does not fit in with web development -- much less cloud development. VB.NET has always been second chair to C#.
Recommendation: Migrate from VB.NET to C#. Migrate from Visual Basic to anything (except Perl).
Any version of Windows other than Windows 10: Windows 10 has been with us for years. There is no reason to hold on to Windows 8 or Windows 7 (or Windows Vista).
If you have applications that can run only on Windows 7 or Windows 8, you have an application that will eventually die.
You don't have to move to Windows 10. You can move some applications to Linux, for example. If people are using only web-based applications, you can issue them Chromebooks or low-end Windows laptops.
Recommendation: Replace older versions of Windows with Windows 10, Linux, or Chrome OS.
CVS and Subversion: Centralized version control systems require administration, which translates into expense. Distributed version control systems often cost less to administer, once you teach people how to use them. (The transition is not always easy, and the conversion costs are not zero, but in the long run the distributed systems will cost you less.)
Recommendation: Move to git.
Everyone has old technology. The wise manager knows about it and decides when to replace it. The foolish manager ignores the old technology, and often replaces it when forced to by external events, and not at a time of his choosing.
Be a wise manager. Take inventory of your technology, assess risk, and build a plan for replacements and upgrades.
An IT shop of any significant size will have old technologies. Some folks will call them "legacy applications". Other folks try not to think about them. But a responsible manager will take inventory of the technology in his (or her) shop and winnow out those that are not serving their purpose (or are posing threats).
Here are some ideas for tech to get rid of:
Perl: I have used Perl. When the alternatives were C++ and Java, Perl was great. We could write programs quickly, and they tended to be small and easy to read. (Well, sort of easy to read.)
Actually, Perl programs were often difficult to read. And they still are difficult to read.
With languages such as Python and Ruby, I'm not sure that we need Perl. (Yes, there may be a module or library that works only with Perl. But they are few.)
Recommendation: If you have no compelling reason to stay with Perl, move to Python.
Visual Basic and VB.NET: Visual Basic (the non-.NET version), is old and difficult to support. It will only become older and more difficult to support. It does not fit in with web development -- much less cloud development. VB.NET has always been a second chair to C#.
Recommendation: Migrate from VB.NET to C#. Migrate from Visual Basic to anything (except Perl).
Any version of Windows other than Windows 10: Windows 10 has been with us for years. There is no reason to hold on to Windows 8 or Windows 7 (or Windows Vista).
If you have applications that can run only on Windows 7 or Windows 8, you have applications that will eventually die.
You don't have to move to Windows 10. You can move some applications to Linux, for example. If people are using only web-based applications, you can issue them ChromeBooks or low-end Windows laptops.
Recommendation: Replace older versions of Windows with Windows 10, Linux, or Chrome OS.
CVS and Subversion: Centralized version control systems require administration, which translates into expense. Distributed version control systems often cost less to administer, once you teach people how to use them. (The transition is not always easy, and the conversion costs are not zero, but in the long run the distributed systems will cost you less.)
Recommendation: Move to git.
Everyone has old technology. The wise manager knows about it and decides when to replace it. The foolish manager ignores the old technology, and often replaces it when forced to by external events, and not at a time of his choosing.
Be a wise manager. Take inventory of your technology, assess risk, and build a plan for replacements and upgrades.
Wednesday, March 27, 2019
Something new for programming
One of my recent side projects involved R and R Studio. R is a programming language, an interpreted language with powerful data manipulation capabilities.
I am not impressed with R and I am quite disappointed with R Studio. I have ranted about them in a previous post. But in my ... excitement ... over R and R Studio, I missed the fact that we have something new in programming.
That something new is a new form of IDE, one that has several features:
- on-line (cloud-based)
- mixes code and documentation
- immediate display of output
- can share the code, document, and results
R Studio has a desktop version, which you install and run locally. It also has a cloud-based version -- all you need is a browser, an internet connection, and an account. The online version looks exactly like the desktop version -- something that I think will change as the folks at R Studio add features.
R Studio puts code and documentation into the same file, using a variant of the Markdown language (named 'Rmd').
The concept of comments in code is not new. Comments are usually short text items that are denoted with special markers ('//' in C++ and '#' in many languages). The model has always been: code contains comments and the comments are denoted by specific characters or sequences.
Rmd inverts that model: You write a document and denote the code with special markers ('$$' for TeX and '```' for code). Instead of comments (small documents) in your code, you have code in your document.
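To make the inversion concrete, here is a minimal sketch of what an Rmd file might look like (the file name, column name, and R calls are made-up examples; the chunk syntax follows R Markdown):

````markdown
## Flow-rate analysis

Measurements were taken in March; the chunk below loads and
summarizes them. The printed summary appears in the rendered
document, right after the code.

```{r}
measurements <- read.csv("pipe-data.csv")
summary(measurements$flow_rate)
```

Everything outside the chunk is ordinary prose.
````

The file is a document first; the code lives inside it, not the other way around.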
R Studio runs your code -- all of it or a section that you specify -- and displays the results as part of your document. It is smart enough to pick through the document and identify the code.
The concept of code and documentation in one file is not exclusive to R Studio. There are other tools that do the same thing: Jupyter notebooks, Mozilla's Iodide, and Mathematica (possibly the oldest of the lot). Each allows for text and code, with output. Each also allows for sharing.
At a high level, these online IDEs do the same thing: Create a document, add code, see the results, and share.
Over the years, we've shared code through various means: physical media (punch cards, paper tape, magnetic tape, floppy disks), shared storage locations (network disks), and version-control repositories (CVS, Subversion, GitHub). All of these methods required some effort. The new online IDEs reduce that effort; there is no need to attach files to e-mail -- just send a link.
There are a few major inflection points in software development, and I believe that this is one of them. I expect the concept of mixing text and code and results will become popular. I expect the notion of sharing projects (the text, the code, the results) will become popular.
I don't expect all programs (or all programmers) to move to this model. Large systems, especially those with hard performance requirements, will stay in the traditional compile-deploy-run model with separate documentation.
I see this new model of document-code-results as a new form of programming, one that will open new areas. The document-code-results combination is a good match for sharing work and results, and is close in concept to academic and scientific journals (which contain text, analysis, and results of that analysis).
Programming languages have become powerful, and that supports this new model. A Fortran program for simulating water in a pipe required eight to ten pages; the Matlab language can perform the same work in roughly half a page. Modern languages are more concise and can present their ideas without the overhead of earlier computer languages. A small snippet of code is enough to convey a complex study. This makes them suitable for analysis and especially suitable for sharing code.
It won't be traditional programmers who flock to the document-code-results-share model. Instead it will be non-programmers who can use the analysis in their regular jobs.
The online IDE supports a project with these characteristics:
- small code
- multiple people
- output is easily visualizable
- sharing and enlightenment, not production
Tuesday, March 19, 2019
C++ gets serious
I'm worried that C++ is getting too ... complicated.
I am not worried that C++ is a dead language. It is not. The C++ standards committee has adopted several changes over the years, releasing new C++ standards. C++11. C++14. C++17 is the most recent. C++20 is in progress. Compiler vendors are implementing the new standards. (Microsoft has done an admirable job in their latest versions of their C++ compiler.)
But the changes are impressive -- and intimidating. Even the names of the changes are daunting:
- contracts, with preconditions and postconditions
- concepts
- transactional memory
- ranges
- networking
- modules
- concurrency
- coroutines
- reflection
- spaceship operator
Here is an example of range, which simplifies the common "iterate over a collection" loop:
int array[5] = { 1, 2, 3, 4, 5 };
for (int& x : array)
    x *= 2;
This is a nice improvement. Notice that it does not use STL iterators; this is pure C++ code.
Somewhat more complex is an implementation of the spaceship operator:
template <typename T, typename U>
struct pair {
    T t;
    U u;

    auto operator<=>(pair const& rhs) const
        -> std::common_comparison_category_t<
               decltype(std::compare_3way(t, rhs.t)),
               decltype(std::compare_3way(u, rhs.u))>
    {
        if (auto cmp = std::compare_3way(t, rhs.t); cmp != 0)
            return cmp;
        return std::compare_3way(u, rhs.u);
    }
};
That code seems... not so obvious.
The non-obviousness of code doesn't end there.
Now consider wrapping a function 'foo' so that it can be passed to an algorithm -- once for simple value types, and once for all types (value and reference types).
For simple value types, we can write the following code:
std::for_each(vi.begin(), vi.end(), [](auto x) { return foo(x); });
The most generic form:
#define LIFT(foo) \
    [](auto&&... x) \
        noexcept(noexcept(foo(std::forward<decltype(x)>(x)...))) \
        -> decltype(foo(std::forward<decltype(x)>(x)...)) \
    { return foo(std::forward<decltype(x)>(x)...); }
I will let you ponder that bit of "trivial" code.
Notice that the last example uses the #define macro to do its work, with '\' characters to continue the macro across multiple lines.
* * *
I have been pondering that code (and more) for some time.
- C++ is becoming more capable, but also more complex. It is now far from the "C with Classes" that was the start of C++.
- C++ is not obsolete, but it has become a language for applications with specific needs. C++ does offer fine control over memory management and can provide predictable run-time performance, which are advantages for embedded applications. But if you don't need the specific advantages of C++, I see little reason to invest the extra effort to learn and maintain it.
- Development work will favor other languages, mostly Java, C#, Python, JavaScript, and Go. Java and C# have become the "first choice" languages for business applications; Python has become the "first choice" for one's first language. The new features of C++, while useful for specific applications, will probably discourage the average programmer. I'm not expecting schools to teach C++ as a first language again -- ever.
- There will remain a lot of C++ code, but C++'s share of "the world of code" will become smaller. Some of this is due to systems being written in other languages. But I'm willing to bet that the total lines of code for C++ (if we could measure it) is shrinking in absolute numbers.
All of this means that C++ development will become more expensive.
There will be fewer C++ programmers. C++ is not the language taught in schools (usually) and it is not the language taught in the "intro to programming" courses. People will not learn C++ as a matter of course; only those who really want to learn it will make the effort.
C++ will be limited to the projects that need the features of C++, projects which are larger and more complex. Projects that are "simple" and "average" will use other languages. It will be the complicated projects, the projects that need high performance, the projects that need well-defined (and predictable) memory management which will use C++.
C++ will continue as a language. It will be used on the high end projects, with specific requirements and high performance. The programmers who know C++ will have to know how to work on those projects -- amateurs and dabblers will not be welcome. If you are managing projects, and you want to stay with C++, be prepared to hunt for talent and be prepared to pay.
Thursday, March 14, 2019
A Relaxed Waterfall
We're familiar with the two development methods: Waterfall and Agile. Waterfall operates in a sequence of large steps: gather requirements, design the system, build the system, test the system, and deploy the system; each step must wait for the prior step to complete before it starts. Agile uses a series of iterations that each involve specifying, implementing and testing a new feature.
Waterfall's advantage is that it promises delivery on a specific date. Agile makes no such promise, but instead promises that you can always ship whatever you have built.
Suppose there was a third method?
How about a modified version of Waterfall: the normal Waterfall but no due date -- no schedule.
This may seem a bit odd, and even nonsensical. After all, the reason people like Waterfall is the big promise of delivery on a specific date. Bear with me.
If we change Waterfall to remove the due date, we can build a very different process. The typical Waterfall project runs through a number of phases (analysis, design, coding, etc.), and there is pressure, once a phase has been completed, to never go back. One cannot go back; the schedule demands that the next phase begin. Going back from coding, say, because you find ambiguities in the requirements, means spending more time in the analysis phase, and that will (most likely) delay the coding phase, which will then delay the testing phase, ... and the delays reach all the way to the delivery date.
But if we remove the delivery date, then there is no pressure of missing the delivery date! We can move back from coding to analysis, or from testing to coding, with no risk. What would that give us?
For starters, the process would be more like Agile development. Agile makes no promise about a specific delivery date, and neither does what I call the "Relaxed Waterfall" method.
A second effect is that we can now move backwards in the cycle. If we complete the first phase (Analysis) and start the second phase (Design) and then find errors or inconsistencies, we can move back to the first phase. We are under no pressure to complete the Design phase "on schedule" so we can restart the analysis and get better information.
The same holds for the shift from Design to the third phase (Coding). If we start coding and find ambiguities, we can easily jump back to Design (or even Analysis) to resolve questions and ensure a complete specification.
While Relaxed Waterfall may sound exactly like Agile, it has differences. We can divide the work into different teams, one team handling each phase. You can have a team that specializes in analysis and the documentation of requirements, a second team that specializes in design, a third team for coding, and a fourth team for testing. The advantage is that people can specialize; Agile requires that all team members know how to design, code, test, and deploy a product. For large projects the latter approach may be infeasible.
This is all speculation. I have not tried to manage a project with Relaxed Waterfall techniques. I suspect that my first attempt might fail. (But then, early attempts with traditional Waterfall failed, too. We would need practice.) And there is no proof that a project run with Relaxed Waterfall would yield a better result.
It was merely an interesting musing.
But maybe it could work.