Thursday, March 26, 2020
The recent outbreak of the Novel Coronavirus has forced many changes upon us. One of the biggest is the shift to remote work. As many organizations are now learning, remote work is different from face-to-face work: it has its challenges, but it also offers the chance to change our work patterns.
Two ideas come to mind.
Companies with project teams, and specifically those with weekly status meetings, may learn that those meetings can be replaced with status e-mails. One can, of course, replace a face-to-face team meeting with an online team meeting, and I am sure many companies do. But status meetings often have one-way flows of information: individuals report on their current activities, and the manager provides information to his team. These one-way flows can be handled with e-mail, which allows for archiving and easy retrieval of content. A manager could hold one-on-one sessions with individuals to ask follow-up questions and to "touch base" with team members.
Colleges and universities, specifically those with graduate programs, may learn that they can adjust their class schedules. Graduate classes are often held in the evening, and often one night per week. The reason for this schedule is that graduate students have other commitments (such as a job) that prevent them from attending during the day. (Colleges also use the same classrooms for undergraduate studies during the day.) The one-class-per-week arrangement is also convenient, as graduate students do not live on campus and have significantly longer commute times than on-campus undergraduate students.
But with remote studies, there is no need to hold one long class per week. The time to connect to a remote session is short. One could just as easily hold three short sessions per week, instead of one long session. (The need for evening classes still holds, as many students still work during the day.) Multiple shorter sessions may be more effective than a single long session (they seem to be for undergraduate students) and may be more convenient for teachers and students.
But people being people, I expect little to change in the short term -- at colleges or in corporate offices. Professors have organized classes and materials around the one-night-per-week schedule. Scheduling systems (probably) have built-in assumptions that graduate classes occur on a one-night-per-week schedule, and changing those systems may be nontrivial. Corporate managers are used to the idea of seeing all of their employees in a single meeting (and possibly feel that they are more efficient holding a single meeting than multiple one-on-one sessions).
Remote work and remote studies do not have to be copies of their face-to-face counterparts. Someday they may not be.
Thursday, March 19, 2020
The Lesson from BASIC, Visual Basic, and VB.NET
This week Microsoft announced that VB.NET would receive no enhancements. In effect, Microsoft has announced the death of VB.NET. And while some developers may grieve over the loss of their favorite language, we should look at the lesson of VB.NET.
But first, let's review the history of BASIC, the predecessor of VB.NET.
BASIC has a long history, and Microsoft was there for most of it. Invented in the mid-1960s, BASIC was a simple interpreted language, designed for timeshare systems and people who were not programmers. The major competing languages -- the programming languages one could use instead of BASIC -- were FORTRAN and COBOL. BASIC, while less powerful, was much easier to use than any of the alternatives.
Small home computers were a natural for BASIC. Microsoft saw an opportunity and built BASIC for the Apple II, Commodore's PET and CBM, Radio Shack's TRS-80, and many others. Wherever you turned, BASIC was available. It was the lingua franca of programming, which made it valuable.
BASIC was popular, but its roots in timeshare made it a text-oriented language. (To be fair, all other languages were text-oriented, too.) As computers became more popular, programmers had to manipulate hardware directly to use special effects such as colors and graphics. Microsoft helped by enhancing the language with commands for graphics and other hardware such as sound. BASIC remained the premier language for programming because it was powerful enough (or good enough) to get the job done.
Microsoft's Windows posed a challenge to BASIC. Even with its enhancements for graphics, BASIC was not compatible with the event-driven model of Windows. Microsoft's answer was Visual Basic, a new language that shared some keywords with BASIC but little else. The new Visual Basic was a completely different language, even more powerful than the biggest "Disk BASIC" Microsoft ever released. Microsoft's other language for Windows, Visual C++, was powerful but hard to use. Visual Basic was less powerful but much easier to use, and it had better support for COM. The ease of use and COM support provided value to developers.
Microsoft's .NET posed a second challenge to Visual Basic, which was not compatible with the new architecture of the .NET framework. Microsoft's answer was VB.NET, a redesign that looked a lot like C# but with some keywords retained from Visual Basic.
For the past two decades (almost), VB.NET has been a supported language in Microsoft's world, living beside C#. That coexistence now comes to an end, with C# getting upgrades and VB.NET getting... very little.
The problem with VB.NET (as I see it) is that VB.NET was too close, too similar, to C#. VB.NET offered little that was different from (or better than) C#. Thus, when picking a language for a new project, one could pick either C# or VB.NET and be assured that it would work.
But being similar, for programming languages, is not a good thing. Different languages should be different. They should offer different programming constructs and different capabilities, because the differences can provide value.
C++ is different from C, and while C++ can compile and run almost every C program, the differences between C++ and C are enough that both languages offer value.
Python and Ruby are different enough that both can exist. Both offer value to the programmer.
C# and Java are close cousins, and one could argue that they, too, are too similar to co-exist. In this case, it may be that the sponsoring companies (Microsoft for C#, Oracle for Java) provide enough of a difference. For these languages, the relationship with the sponsoring company is the value.
But VB.NET was too close to C#. Anything you could do in VB.NET you could do in C#, and usually with no additional effort. VB.NET offered nothing of distinct value.
We should note that the end of VB.NET does not mean the end of BASIC. There are other versions of BASIC, each quite different from VB.NET and C#. These different versions may continue to thrive and provide value to programmers.
Sometimes, being different is important.
Wednesday, March 11, 2020
Holding back time
In an old episode of the science-fiction series "Dr. Who", the villain roams the galaxy and captures entire planets, all to power "time dams" used to prevent his tribal matriarch from dying. The effort was in vain: while one can delay changes, one cannot hold them back indefinitely.
A lot of effort in IT is also spent on "keeping time still" or preventing changes.
Projects using a "waterfall" process prevent changes by agreeing early on to the requirements, and then "freezing" them. A year-long project can start with a two-month phase to gather, review, and finalize requirements; the remainder of the year is devoted to implementing those requirements, exactly as agreed, with no changes or additions. The result is often disappointing: the delivered system is incorrect (because the requirements, despite review, were incorrect) or incomplete (for the same reason), and even when neither is true, the requirements are a year out of date. Time had progressed, and changes had occurred, outside of the project "bubble".
Some waterfall-managed projects allow for changes, usually with an onerous "change control" process that requires a description and justification of the change and agreement (again) among all of the concerned parties. This allows for changes, but puts a "brake" on them, limiting the number and scope of changes.
But project management methodologies are not the only way we try to hold back time. Other areas in which we try to prevent changes include:
Python's "requirements.txt" file, which lists the required packages. When used responsibly, it lists each required package and its minimum version. (A good idea, as one does need to know the packages and the versions, and this is a consistent way to record them.) Some projects try to hold back changes by specifying an exact version of a package (such as "must be version 1.4 and no other") in fear that a later version may break something.
Locking components to specific versions will eventually fail: a component will not be available, or the specified version will not work on a new operating system or in a new version of the interpreter. (Perhaps even the Python interpreter itself, if held back in this manner, will fail.)
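As a concrete illustration (the package names and versions here are hypothetical), the two styles look like this in a requirements.txt file:

    # Flexible: any version of requests at or above 2.22 will do.
    requests>=2.22
    # Frozen: exactly numpy 1.18.1 and no other -- holding back time.
    numpy==1.18.1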
Containers, which hold the "things that an application needs". Many "containerized" applications include a database and the database software, but they can also include other utilities. The container image holds a frozen set of applications and libraries, fixed when the image is built and identical each time the container is deployed. While they can be updated, that doesn't mean they are updated.
Those utilities and libraries that are "frozen in time" will eventually cause problems. They are not stand-alone; they often rely on other utilities and libraries, which may not be present in the container. At some point, the "outside" libraries will not work for the "inside" applications.
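A minimal sketch of how this freezing happens (the base image and file names here are hypothetical):

    # Dockerfile: everything named here is fixed when the image is built.
    FROM python:3.8-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "main.py"]

Unless someone rebuilds the image, the interpreter, the installed packages, and the base operating system libraries all stay exactly as they were on the day of the build.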
Virtual machines that run old versions of operating systems, in order to run old applications that work only on those old operating systems. Virtual machines can be used for other purposes, but this use is yet another form of "holding back time".
Virtual machines with old versions of operating systems, running old versions of applications, also have problems. Their ability to communicate with other systems on the network will (probably) break, due to expired certificates or a change in a protocol.
All of these techniques pretend to solve a problem. But they are not really solutions -- they simply delay the problem. Eventually, you will have an incompatibility, somewhere. But that isn't the biggest problem.
The biggest problem may be in thinking that you don't have a problem.
Thursday, March 5, 2020
Programming languages differ
A developer recently ranted about the Go language. The rant was less about the language and more about run-time libraries and interfaces to underlying operating systems and their file systems. The gist of his rant is that Go is not a suitable programming language for performing certain operations on certain operating systems, and therefore he is "done" with Go.
The first part of his rant is correct. I disagree with the conclusion.
We have the idea that programming languages are general-purpose, that any language can be used for any problem. But programming didn't start that way. Programming languages started unequal, with different languages designed for different types of computing. And while languages have become less specific and more general-purpose, they are still not equal.
I think our idea about a general-purpose programming language (or perhaps an all-purpose programming language) started with the IBM System/360 in the 1960s and the PL/1 programming language.
Prior to the System/360, computers had specific purposes. Some computers were designed for numeric processing, and others were designed for transaction processing. Not only were computers specific to a purpose, but programming languages were, too. FORTRAN was for numeric processing, and used on computers built for numeric processing. COBOL was for transaction processing, and used on computers built for commercial processing.
After IBM introduced the System/360, a general-purpose computer suitable for both numeric and commercial processing, it introduced PL/1, a general-purpose programming language suitable for numeric and commercial processing. (A very neat symmetry, with general-purpose hardware using a general-purpose programming language.)
PL/1 saw little popularity, but the notion of a general-purpose programming language did gain popularity, and it still dominates our mindsets. We view programming languages as rough equals, and the choice of language can be made based on factors such as popularity (a proxy for availability of talent) and tool support (such as IDEs and debuggers).
There are incentives to reinforce the notion that a programming language can do all things. One comes from vendors, another comes from managers, and the third comes from programmers.
Vendors have an incentive to push the notion that a language can do everything -- or at least everything the client needs. Systems from a vendor come with some languages but not all languages. Explaining that your languages can solve problems is good marketing. Explaining that they cannot solve every problem is not.
The managers who purchased computers (which were expensive in the early days) wanted validation of their selection. They wanted to hear that their computer could solve the problems of the business. That meant believing in the flexibility and power of the hardware, and of the programming languages.
The third group of believers is programmers. Learning a programming language takes time. The process is an investment. We programmers want to think that we made a good investment. Admitting that a programming language is not suitable for some tasks means that one may have to learn a different programming language. That's another investment of time. It's easier to convince oneself that the current programming language is capable of everything that is needed.
But different programming languages have different strengths -- and different weaknesses. Programming languages are not the same, and they are not interchangeable.
COBOL is good for transaction processing, especially with flat files. But I would not use it for word processing or games.
FORTRAN is good for numeric processing. But I would not use it for word processing, nor for transaction processing.
Object oriented languages such as C++, Java, and C# are good for large applications that require structure and behavior that can be defined and verified by the compiler. (Static types and type checking.)
BASIC and Pascal are good for learning the concepts of programming. Both have been expanded in many ways, and have been used for serious development.
R is good for statistics and rapid analysis of numeric data, and the visualization of data. But I would not use it for machine learning.
Perl, Python, and Ruby are good for prototyping and for small- to medium-size applications. I would not use them for large-scale systems.
We should not assume that every language is good for every task, or every purpose. Languages (and their run-time libraries) are complex. They can be measured in multiple dimensions: complexity of the language, memory management and garbage collection, type safety, support tools, library support, connections to databases and data sources, vendor support, community support, and more. Each language has strengths and weaknesses.
The developer who ranted against the Go language had criticisms about Go's handling of filesystems on Windows. His complaint is not unfounded; Go's standard library assumes a Unix-like filesystem model, and parts of it work poorly on Windows. But that doesn't mean that the language is useless!
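To make the complaint concrete, here is a small sketch of the mismatch (my example, not the ranter's code; the file name is hypothetical). Go's os.Chmod takes Unix-style permission bits, but on Windows only the owner-write bit (0200) has any effect: it sets or clears the file's read-only attribute, and the other bits are silently ignored.

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Assume example.txt exists. 0444 requests "read-only for everyone",
        // a Unix-style permission set.
        if err := os.Chmod("example.txt", 0444); err != nil {
            log.Fatal(err)
        }

        info, err := os.Stat("example.txt")
        if err != nil {
            log.Fatal(err)
        }

        // On Unix this prints -r--r--r--. On Windows, only the file's
        // read-only attribute was changed; the owner/group/other
        // distinctions are lost.
        fmt.Println(info.Mode())
    }

The program compiles and runs on both systems, but it does not mean the same thing on both -- which is the heart of the complaint.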
An old joke tells of a man who consults a doctor. The man lifts his arm and says "Doc, it hurts when I do this." The doctor replies, "Well, don't do that!" While it gets laughs, there is some wisdom in it. Go is a poor language for handling the Windows filesystem; don't use it for that.
A more general lesson is this: know your task, and know your programming language. Understand what you want to accomplish, at a fairly detailed level. Learn more than one programming language and recognize that some are better (at certain things) than others. When selecting a programming language, make an informed decision. Don't write off a language forever because it cannot do a specific task, or works poorly for some projects.