A note to readers: This post is a bit of a rant, driven by emotion. My 'code stat' project, hosted on Microsoft Azure's web app PaaS platform, has failed and I have yet to find a resolution.
Something has changed in Azure, and I can no longer deploy a new version to the production servers. My code works; I can test it locally. Something in the deployment sequence fails. This is a test project, using the free level of Azure, which means no monthly costs but also means no support -- other than the community help pages.
There are a few glorious advances in IT, advances which stand out above the others. They include the PC revolution (which saw individuals purchasing and using computers), the GUI (which saw people untrained in computer science using computers), and the smartphone (which saw lots more people using computers for lots more sophisticated tasks).
The PC revolution was a big change. Prior to personal computers (whether they were IBM PCs, Apple IIs, or Commodore 64s), computers were large, expensive, and complicated; they were especially difficult to administer. Mainframes and even minicomputers were large and expensive; an individual could afford one only if they were enormously wealthy and had lots of time to read manuals and try different configurations to make the thing work.
Consumer PCs changed all of that. They were expensive, but within the range of the middle class. They required little or no administration effort. (The Commodore 64 was especially easy: plug it in, attach it to a television, and turn it on.)
Apple made the consumer PC easier to use with the Macintosh. The graphical user interface (lifted from Xerox PARC's Alto, and later copied by Microsoft Windows) made many operations and concepts consistent. Configuration was buried, and sometimes options were reduced to "the way Apple wants you to do it".
It strikes me that cloud computing is in a "mainframe phase". It is large and complex, and while an individual can create an account (even a free account), the complexity and time necessary to learn and use the platform are significant.
My issue with Microsoft Azure is precisely that. Something has changed and it behaves differently than it did in the past. (It's not my code, the change is in the deployment of my app.) I don't think that I have changed something in Azure's configuration -- although I could have.
The problem is that once you go beyond the 'three easy steps to deploy a web app', Azure is a vast and intimidating beast with lots of settings, each with new terminology. I could poke at various settings, but will that fix the problem or make things worse?
From my view, cloud computing is a large, complex system that requires lots of knowledge and expertise. In other words, it is much like a mainframe. (Except, of course, you don't need a large room dedicated to the equipment.)
The "starter plans" (often free) are not the equivalent of a PC. They are merely the same, enterprise-level plans with certain features turned off.
A PC is different from a mainframe reduced to tabletop size. Both have CPUs and memory and peripheral devices and operating systems, but they are two different creatures. PCs have fewer options, fewer settings, fewer things you (the user) can get wrong.
Cloud computing is still at the "mainframe level" of options and settings. It's big and complicated, and it requires a lot of expertise to keep it running.
If we repeat history, we can expect companies to offer smaller, simpler versions of cloud computing. The advantage will be an easier learning curve and less required expertise; the disadvantage will be lower functionality. (Just as minicomputers were easier and less capable than mainframes and PCs were easier and less capable than minicomputers.)
I'll go out on a limb and predict that the companies who offer simpler cloud platforms will not be the current big providers (Amazon.com, Microsoft, Google). Mainframes were challenged by minicomputers from new vendors, not the existing leaders. PCs were initially constructed by hobbyists from kits. Soon after, companies such as Radio Shack, Commodore, and the newcomer Apple offered fully assembled, ready-to-run computers. IBM offered the PC only after the success of these upstarts.
The driver for simpler cloud platforms will be cost -- direct and indirect, mostly indirect. The "cloud computing is a mainframe" analogy is not perfect, as the billed costs for cloud platforms can be inexpensive. The expense is not in the hardware, but in the time to make the thing work. Current cloud platforms require expertise, and that expertise is not cheap. Companies are willing to pay for it... for now.
I expect that we will see competition to the big cloud platforms, and the marketing will focus on ease of use and low Total Cost of Ownership (TCO). The newcomers will offer simpler clouds, sacrificing performance for reduced administration cost.
My project is currently stuck. Deployments fail, so I cannot update my app. Support is not really available, so I must rely on the limited web pages and perhaps trial and error. I may have to create a new app in Azure and copy my existing code to it. I'm not happy with the experience.
I'm also looking for a simpler cloud platform.
Wednesday, January 24, 2018
Thursday, January 18, 2018
After Agile
The Agile project method was developed as an alternative to (one might say, a rebuttal of) Waterfall. Waterfall came first, aside from the proto-process of "do whatever we want" that preceded it. Waterfall had a revolutionary idea: think about what we will do before we do it.
Waterfall can work with small and large projects, and with small and large project teams. It offers fixed cost, fixed schedule, and fixed features. Once started, a project plan can be modified, but only with change control, a bureaucratic process that limits changes in addition to broadcasting proposed changes to the entire team.
Agile, in its initial incarnation, was for small teams and projects with flexible schedules. The schedule may be fixed or variable; you can deliver a working product at any time. (Although you cannot know in advance which features will be in the delivered product.)
Agile has no change control process -- or rather, Agile is all about change control, allowing revisions to features at any time. Each iteration (or "sprint", or "cycle") starts with a conversation that involves the stakeholders, who decide on the next set of features. Waterfall's idea of "think, talk, and agree before we act" is part of Agile.
So we have two methods for managing development projects. But two is an unreasonable number. In the universe, there are rarely two (and only two) of things. Some things, such as electrons and stars and apples, exist in large quantities. Some things, such as the Hope Diamond and our planet's atmosphere, exist as singletons. (A few things do exist in pairs. But the vast majority of objects are either singles or multitudes.)
If software management methods exist as a multitude (for they are clearly not a singleton) then we can expect a third method after Waterfall and Agile. (And a fourth, and a fifth...)
What are the attributes of this new method? I don't know -- yet. But I have some ideas.
We need a management process for distributed teams, where the participants cannot meet in the same room. This issue is mostly about communication, and it includes differences in time zones.
We need a management process for large systems composed of multiple applications, or "systems of systems". Agile cannot handle projects of this size; Waterfall can, but with well-known flaws.
Here are some techniques that I think will be in new management methods:
- Automated testing
- Automated deployment with automated roll-back
- Automated evaluation of source code (lint, RuboCop, etc.)
- Automated recording (and transcribing) of meetings and conversations
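One of these techniques, automated testing, can be sketched in a few lines of Ruby. (A generic illustration: the `median` function and the tiny `assert_equal` helper are invented for this example; no test framework is required.)

```ruby
# Automated testing in miniature: code plus checks that run without
# human attention and report failures loudly.
def median(numbers)
  sorted = numbers.sort
  mid = sorted.length / 2
  sorted.length.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
end

# a minimal assert helper, standing in for a real test framework
def assert_equal(expected, actual, label)
  if expected == actual
    puts "ok   #{label}"
  else
    puts "FAIL #{label}: expected #{expected.inspect}, got #{actual.inspect}"
  end
end

assert_equal 3, median([5, 1, 3]), "odd count"
assert_equal 2.5, median([4, 1, 2, 3]), "even count"
```

A run like this can be wired into the build, so the same checks precede every deployment.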
The Waterfall brand was tarnished -- and still is. Few folks want to admit to using Waterfall; they prefer to claim Agile methods. So I'm not expecting a "new Waterfall" method.
Agile's brand is strong; developers want to work on Agile projects and managers want to lead Agile projects. Whatever methods we devise, we will probably call them "Agile". We will use "Distributed Agile" for distributed teams, "Large Agile" for large teams, and maybe "Layered Agile" for systems of systems.
Or maybe we will use other terms. If Agile falls out of favor, then we will pick a different term, such as "Coordinated".
Regardless of the names, I'm looking forward to new project management methods.
Monday, January 1, 2018
Predictions for tech in 2018
Predictions are fun! Let's have some for the new year!
Programming Languages
Java, C, and C# will remain the most popular languages, especially in large commercial efforts. Moderately popular languages such as Python and JavaScript will remain moderately popular. (JavaScript is one of the "three legs of web pages", along with HTML and CSS, so it is very popular for web page and front-end work.)
Interest in functional programming languages (Haskell, Erlang) will remain minimal, while I expect interest in Rust (which focuses on safety, speed, and concurrency) to increase.
Cloud and Mobile
The year 2017 was the year that cloud computing became the default for new applications, especially business applications. The platforms and tools available from the big providers (Amazon.com, Microsoft, Google, and IBM) make a convincing case. Traditional web applications in in-house data centers will still be built for some specialty applications.
The front end for applications remains split between browsers and mobile devices. Mobile devices are the platform of choice for consumer applications, including banking, sales, games, and e-mail. Browsers are the platform of choice for internal commercial applications, which require larger screens.
Browsers
Chrome will remain the dominant browser, possibly gaining market share. Microsoft will continue to support its Edge browser, and it has the resources to keep it going. Other browsers such as Firefox and Opera will be hard-pressed to maintain viability.
PaaS (Platform as a Service)
The middle version of platforms for cloud computing, PaaS sits between IaaS (Infrastructure as a Service) and SaaS (Software as a Service). It offers a platform to run applications, handling the underlying operating system, database, and messaging layers and keeping them hidden from the developer.
I expect an increase in interest in these platforms, driven by the increase in cloud-based apps. PaaS removes a lot of administrative work, for development and deployment.
AI and ML (Artificial Intelligence and Machine Learning)
Most of AI is actually ML, but the differences are technical and obscure. The term "AI" has achieved critical mass, and that's what we'll use, even when we're talking about Machine Learning.
Interest in AI will remain high, and companies with large data sets will take advantage of it. Initial applications will include credit analysis and fraud analysis (such applications are already under development). The platforms offered by Google, Microsoft, and IBM (and others) will make experimentation with AI possible for many, although one needs large data sets in addition to the AI compute platform.
Containers
Interest in containers will remain strong. Containers ease deployment; if you deploy frequently (or even infrequently) you will want to at least evaluate them.
Big Data
The term "Big Data" will all but disappear in 2018. Like its predecessor "real time", it was a vague description of computing that was beyond the reach of typical (at the time) hardware and software. Hardware and software improved to the point that performance was good enough, and the term "real time" is now limited to a few very specialized situations. I expect the same for "big data".
Related terms, like "data science" and "analytics", will remain. Their continued existence will depend on their perceived value to organizations; I think "analytics" has secured a place, while "data science" is still under scrutiny.
IoT
The "Internet of Things" will see a lot of hype in 2018. I expect a lot of internet-connected devices, from drones to dolls, from cameras to cars, and from bicycles to birdcages (really!).
The technology for connected devices has gotten ahead of our understanding, much like the original microcomputers before the IBM PC.
We don't know how to use connected things -- yet. I expect that we will experiment with a lot of uses before we find the "killer app" of IoT. Once we do, I expect that we will see a standardization of protocols for IoT devices, making the early devices obsolete.
Apple
I expect Apple to have a successful and profitable 2018. They remain, in my opinion, at risk of becoming the "iPhone company", with more than 80% of their income coming from phones. The other risk is their aversion to cloud computing -- Apple puts compute power in its devices (laptops, tablets, phones, and watches) and does not leverage or offer cloud services.
The latter omission (lack of cloud services) will be a serious problem in the future. The other providers (Microsoft, Google, IBM, etc.) provide cloud services and development platforms. Apple stands alone, keeping developers on the local device and using cloud computing for its internal use.
These are my predictions for 2018. In short, I expect a rather dull year, focused more on exploring our current technology than creating new tech. We've got a lot of relatively new tech toys to play with, and they should keep us occupied for a while.
Of course, I could be wrong!
Sunday, December 17, 2017
Single point of failure
If you're going to have a single point of failure, make it replaceable.
We strive to avoid single points of failure. They hold risk -- if a single point of failure fails, then the entire system fails.
It is not always possible to avoid a single point of failure. Sometimes the constraint is cost. Other times the design requires a single component for a function.
If you have a single point of failure, make it easy to replace. Design the component so that you can replace it quickly and with little risk. When it fails, you can respond and install the replacement component. (Kind of like a spare tire on an automobile. Although the four tires on a car are not a single point of failure, because there are four of them. But you get the idea.)
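In code, "easy to replace" usually means putting the single component behind a small, fixed interface, so a substitute can be dropped in with one call. A minimal Ruby sketch (the notifier classes are hypothetical, invented for this example):

```ruby
# A hypothetical single point of failure: one notifier that every part
# of the system depends on. A minimal interface keeps it replaceable.
class SmtpNotifier
  def notify(message)
    "smtp: #{message}"   # stand-in for a real SMTP call
  end
end

# the "spare tire": trivial, and ready to install when the primary fails
class LogNotifier
  def notify(message)
    "log: #{message}"
  end
end

class System
  def initialize(notifier)
    @notifier = notifier       # injected, not hard-wired
  end

  def replace_notifier(notifier)
    @notifier = notifier       # replacement is one call, with little risk
  end

  def alert(message)
    @notifier.notify(message)
  end
end

system = System.new(SmtpNotifier.new)
system.alert("disk full")                # => "smtp: disk full"
system.replace_notifier(LogNotifier.new)
system.alert("disk full")                # => "log: disk full"
```

Because each notifier exposes only `notify`, feature requests that would complicate it get pushed into the callers, keeping the replaceable part small.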
A simple design for a single point of failure (or any component) requires care and attention. You have to design the component with minimal functionality. Move what you can to other, redundant components.
You also have to guard against changes to the simplicity. Over time, designs change. People add to designs. They want new features, or extensions to existing features. Watch for changes that complicate the single point of failure. Add them to other, redundant components in the system.
Tuesday, December 12, 2017
Do you want to be on time and on budget, or do you want a better product?
Project management in IT is full of options, opinions, and arguments. Yet one thing just about everyone agrees on is this: a successful development project must have a clear vision of the product (the software), and everyone on the team has to understand that vision.
I'm not sure that I agree with that idea. But my explanation will be a bit lengthy.
I'll start with a summary of one of my projects: a BASIC interpreter.
* * * * *
It started with the development of an interpreter. My goal was not to build a BASIC interpreter, but to learn the Ruby programming language. I had built some small programs in Ruby, and I needed a larger, more ambitious project to learn the language in depth. (I learn by doing. Actually, I learn by making mistakes, and then fixing the mistakes. So an ambitious project was an opportunity to make mistakes.)
My initial clear vision of the product was just that: clear. I was to build a working interpreter for the BASIC language, implementing BASIC as described in a 1965 text by Kemeny and Kurtz (the authors of BASIC). That version had numeric variables but not text (string) variables. The lack of string variables simplified several aspects of the project, from parsing to execution. But the project was not trivial; there were some interesting aspects of a numeric-only BASIC language, including matrix operations and output formatting.
After some effort (and lots of mistakes), I had a working interpreter. It really ran BASIC! I could enter the programs from the "BASIC Programming" text, run them, and see the results!
The choice of Kemeny and Kurtz' "BASIC Programming" was fortuitous. It contains a series of programs, starting with simple ones and working up to complex programs, and it shows the output of each. I could build a very simple interpreter to run the initial programs, and then expand it gradually as I worked my way through the text. At each step I could check my work against the provided output.
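That incremental approach can be illustrated with a toy version (an illustration only, not the project's actual code): start with just LET and PRINT for numeric variables, run a textbook-style program, and compare the captured output against the text.

```ruby
# A toy, numeric-only interpreter in the spirit of the project: just
# enough (LET, PRINT, addition) to run the earliest textbook programs,
# ready to be expanded statement by statement.
class TinyBasic
  attr_reader :output

  def initialize
    @vars = {}
    @output = []
  end

  def run(program)
    program.sort_by { |number, _| number }.each { |_, stmt| execute(stmt) }
    @output
  end

  private

  def execute(stmt)
    case stmt
    when /\ALET (\w+) = (.+)\z/ then @vars[$1] = evaluate($2)
    when /\APRINT (.+)\z/       then @output << evaluate($1).to_s
    else raise "unknown statement: #{stmt}"
    end
  end

  # expressions are numbers, variables, and '+' -- nothing more yet
  def evaluate(expr)
    expr.split(/\s*\+\s*/).map { |token| term(token) }.sum
  end

  def term(token)
    token =~ /\A\d+\z/ ? token.to_i : @vars.fetch(token)
  end
end

basic = TinyBasic.new
basic.run({ 10 => "LET A = 2", 20 => "LET B = A + 3", 30 => "PRINT B" })
basic.output   # => ["5"] -- compare against the text's printed output
```

Each new statement type is one more `when` branch, which is what makes the chapter-by-chapter expansion practical.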
Then things became interesting. After I had the interpreter working, I forked the source code and created a second interpreter that included string variables. A second interpreter was not part of my initial vision, and some might consider this change "scope creep". It is a valid criticism, because I was expanding the scope of the product.
Yet I felt that the expansion of features, the processing of string variables, was worth the effort. In my mind, there may be someone who wants a BASIC interpreter. (Goodness knows why, but perhaps they do.) If so, they most likely want a version that can handle string variables.
My reasoning wasn't "the product needs this feature to be successful"; it was "users of the product will find this feature helpful". I was making the lives of (possibly imaginary) users easier.
I had to find a different reference for my tests. "BASIC Programming" said nothing about string variables. So off I went, looking for old texts on BASIC. And I found them! I found three useful texts: Coan's "Basic BASIC", Tracton's "57 Practical Programs and Games", and David Ahl's "101 BASIC Computer Games".
And I was successful at adding string variables to the interpreter.
Things had become interesting (from a project management perspective) with the scope expansion for an interpreter that had string variables. And things stayed interesting: I kept expanding the scope. As I worked on a feature, I thought about new, different features. As I did, I noted them and kept working on the current feature. When I finished one feature, I started another.
I added statements to process arrays of data. BASIC can process individual variables (scalars) and matrices. I extended the definition of BASIC and created new statements to process arrays. (BASICs that handle matrices often process arrays as degenerate forms of matrices. The internal structure of the interpreter made it easy to add statements specific to arrays.)
I added statements to read and write files. This of course required statements to open and close files. These were a little challenging to create, but not that hard. Most of the work had already been done with the input-output processing for the console.
I added a trace option, to see each line as it executed. I found it useful for debugging. Using my logic from the expansion for string variables, if I found it useful then other users would (possibly) find it useful. And adding it was a simple operation: the interpreter was already processing each line, and all I had to do was add some logic to display the line as it was interpreted.
I added a profiler, to count and time the execution of each line of code. This helped me reduce the run-time of programs, by identifying inefficient areas of the code. This was also easy to add, as the interpreter was processing each line. I simply added a counter to each line's internal data, and incremented the counter when the line was executed.
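The trace option and the profiler hang off the same hook: the interpreter already visits every line, so decorate that visit. A hypothetical sketch of such a hook, separate from any real interpreter:

```ruby
# Per-line statistics for the profiler: how often the line ran,
# and how long it took in total.
class LineStats
  attr_reader :count, :seconds

  def initialize
    @count = 0
    @seconds = 0.0
  end

  def record(elapsed)
    @count += 1
    @seconds += elapsed
  end
end

def execute_line(number, text, stats, trace: false)
  puts "#{number} #{text}" if trace   # trace mode: show the line as it runs
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield                               # run the statement itself
  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
  (stats[number] ||= LineStats.new).record(elapsed)   # profiler bookkeeping
end

stats = {}
3.times { execute_line(10, "LET A = 1", stats) { } }
stats[10].count   # => 3
```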
Then I added a cross-reference command, which lists variables, functions, and constants, and the lines in which they appear. I use this to identify errors. For example, a variable that appears in one line (and only one line) is probably an error. It is either initialized without being used, or used without being initialized.
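The cross-reference can be sketched as a pass over the program text. (Hypothetical code: it uses a crude word scan rather than a real parser, and assumes classic BASIC variable names of a letter plus an optional digit.)

```ruby
# Map each variable name to the lines on which it appears.
def cross_reference(program)
  refs = Hash.new { |h, k| h[k] = [] }
  program.each do |number, stmt|
    stmt.scan(/\b[A-Z]\d?\b/) { |name| refs[name] << number }
  end
  refs
end

# A variable that appears on exactly one line is either initialized
# without being used, or used without being initialized -- probably
# an error either way.
def suspicious(refs)
  refs.select { |_, lines| lines.uniq.size == 1 }.keys
end

program = { 10 => "LET A = 1", 20 => "LET B = A + 1", 30 => "PRINT A" }
refs = cross_reference(program)   # {"A"=>[10, 20, 30], "B"=>[20]}
suspicious(refs)                  # => ["B"] -- set on line 20, never used
```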
I decided to add a debugger. A debugger is exactly like trace mode, with the option to enter a command after each statement. This feature, too, helps the typical user.
* * * * *
Stepping back from the project, we can see that the end result (two interpreters each with profiler, cross-reference, trace mode, and debugger) is quite far from the initial vision of a simple interpreter for BASIC.
According to the predominant thinking in project management, my project is a failure. It delivered a product with many more features than initially planned, and it consumed more time than planned, two sins of project management.
Yet for me, the project is a success. First, I learned quite a bit about the Ruby programming language. Second -- and perhaps more important -- the product is much more capable and serves the user better.
* * * * *
This experience shows a difference in project management. As a side project, one without a firm budget or deadline, it was successful. The final product is much more capable than the initial vision. But more importantly, my motivation was to provide a better experience for the user.
That result is not desired in corporate software. Oh, I'm sure that corporate managers will quickly claim that they deliver a better experience to their customers. But they will do it only when their first priority has been met: profits for the corporation. And for those, the project must fit within expense limits and time limits. Thus, a successful corporate project delivers the initial vision on time and on budget -- not an expanded version that is late and over budget.
I'm not sure that I agree with that idea. But my explanation will be a bit lengthy.
I'll start with a summary of one of my projects: a BASIC interpreter.
* * * * *
It started with the development of an interpreter. My goal was not to build a BASIC interpreter, but to learn the Ruby programming language. I had built some small programs in Ruby, and I needed a larger, more ambitious project to learn the language in depth. (I learn by doing. Actually, I learn by making mistakes, and then fixing the mistakes. So an ambitious project was an opportunity to make mistakes.)
My initial clear vision of the product was just that: clear. I was to build a working interpreter for the BASIC language, implementing BASIC as described in a 1965 text by Kemeny and Kurtz (the authors of BASIC). That version had numeric variables but not text (string) variables. The lack of string variables simplified several aspects of the project, from parsing to execution. But the project was not trivial; there were some interesting aspects of a numeric-only BASIC language, including matrix operations and output formatting.
After some effort (and lots of mistakes), I had a working interpreter. It really ran BASIC! I could enter the programs from the "BASIC Programming" text, run them, and see the results!
The choice of Kemeny and Kurtz' "BASIC Programming" was fortuitous. It contains a series of programs, starting with simple ones and working up to complex programs, and it shows the output of each. I could build a very simple interpreter to run the initial programs, and then expand it gradually as I worked my way through the text. At each step I could check my work against the provided output.
Then things became interesting. After I had the interpreter working, I forked the source code and created a second interpreter that included string variables. A second interpreter was not part of my initial vision, and some might consider this change "scope creep". It is a valid criticism, because I was expanding the scope of the product.
Yet I felt that the expansion of features, the processing of string variables, was worth the effort. In my mind, there may be someone who wants a BASIC interpreter. (Goodness knows why, but perhaps they do.) If so, they most likely want a version that can handle string variables.
My reasoning wasn't "the product needs this feature to be successful"; it was "users of the product will find this feature helpful". I was making the lives of (possibly imaginary) users easier.
I had to find a different reference for my tests. "BASIC Programming" said nothing about string variables. So off I went, looking for old texts on BASIC. And I found them! I found three useful texts: Coan's "Basic BASIC", Tracton's "57 Practical Programs and Games", and David Ahl's "101 BASIC Computer Games".
And I was successful at adding string variables to the interpreter.
Things had become interesting (from a project management perspective) with the scope expansion for string variables. And things stayed interesting: I kept expanding the scope. As I worked on one feature, I thought of new, different features; I noted them and kept working on the current feature. When I finished one feature, I started another.
I added statements to process arrays of data. BASIC can process individual variables (scalars) and matrices. I extended the definition of BASIC and created new statements to process arrays. (BASICs that handle matrices often process arrays as degenerate forms of matrices. The internal structure of the interpreter made it easy to add statements specific to arrays.)
I added statements to read and write files. This of course required statements to open and close files. These were a little challenging to create, but not that hard. Most of the work had already been done with the input-output processing for the console.
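The mapping from BASIC file statements onto the host language really is thin. Here is a minimal sketch in Ruby (the language the interpreter is written in); the class, method, and statement names are illustrative, not the project's actual code:

```ruby
# Sketch: the interpreter keeps a table of open handles keyed by the
# BASIC file number, so OPEN/CLOSE/READ/WRITE statements reduce to a
# few calls on Ruby's File class.
class FileTable
  def initialize
    @handles = {}
  end

  def open(num, path, mode)    # e.g. OPEN #1: "data.txt" FOR INPUT
    @handles[num] = File.open(path, mode == :input ? 'r' : 'w')
  end

  def close(num)               # e.g. CLOSE #1
    @handles.delete(num)&.close
  end

  def read_line(num)           # e.g. READ #1: A$
    @handles[num].gets&.chomp
  end

  def write_line(num, text)    # e.g. WRITE #1: A$
    @handles[num].puts(text)
  end
end
```

Most of the real work, as noted above, is shared with the console input-output path; the file statements mostly add bookkeeping.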
I added a trace option, to see each line as it executed. I found it useful for debugging. Using my logic from the expansion for string variables, if I found it useful then other users would (possibly) find it useful. And adding it was a simple operation: the interpreter was already processing each line, and all I had to do was add some logic to display the line as it was interpreted.
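The change really is that small. A minimal sketch of the idea in Ruby, with hypothetical names (the real interpreter's loop is more involved): the main loop already visits each statement, so tracing is one extra conditional before execution.

```ruby
# Sketch of a trace option: print each line just before executing it.
class TinyInterpreter
  def initialize(lines, trace: false)
    @lines = lines    # { line number => statement text }
    @trace = trace
  end

  def run
    @lines.keys.sort.each do |number|
      text = @lines[number]
      puts "TRACE #{number}: #{text}" if @trace
      execute(number, text)
    end
  end

  def execute(number, text)
    # real statement dispatch (LET, PRINT, GOTO, ...) would go here
  end
end
```

With trace enabled, running `{ 10 => 'LET X = 1', 20 => 'PRINT X' }` prints each line as it executes.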
I added a profiler, to count and time the execution of each line of code. This helped me reduce the run-time of programs, by identifying inefficient areas of the code. This was also easy to add, as the interpreter was processing each line. I simply added a counter to each line's internal data, and incremented the counter when the line was executed.
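A sketch of the same idea, again with illustrative names. In this straight-line version each line executes exactly once; in a real BASIC program with GOTO and loops, the counts and times become meaningful:

```ruby
# Sketch of a profiler: each line's record carries a count and an
# accumulated time, and the execute step bumps both.
Profiled = Struct.new(:text, :count, :seconds)

def run_profiled(lines)
  table = {}
  lines.each { |n, t| table[n] = Profiled.new(t, 0, 0.0) }
  lines.keys.sort.each do |n|
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    # executing the statement would happen here
    rec = table[n]
    rec.count += 1
    rec.seconds += Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
  end
  table
end
```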
Then I added a cross-reference command, which lists variables, functions, and constants, and the lines in which they appear. I use this to identify errors. For example, a variable that appears in one line (and only one line) is probably an error. It is either initialized without being used, or used without being initialized.
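A rough sketch of such a pass. The regex tokenizer and the keyword list here are stand-ins for real parsing (this toy version would, for example, wrongly pick up uppercase words inside string literals):

```ruby
# Sketch of a cross-reference pass: record the lines where each name
# appears, then flag names that appear on only one line.
KEYWORDS = %w[LET PRINT GOTO IF THEN FOR NEXT READ DATA END].freeze

def cross_reference(lines)
  refs = Hash.new { |h, k| h[k] = [] }
  lines.each do |number, text|
    text.scan(/\b[A-Z][A-Z0-9]*\$?/).each do |name|
      next if KEYWORDS.include?(name)
      refs[name] << number unless refs[name].include?(number)
    end
  end
  refs
end

def suspicious(refs)
  # a name on exactly one line is probably an error
  refs.select { |_name, where| where.size == 1 }.keys
end
```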
I decided to add a debugger. A debugger is exactly like trace mode, with the option to enter a command after each statement. This feature, too, helps the typical user.
* * * * *
Stepping back from the project, we can see that the end result (two interpreters each with profiler, cross-reference, trace mode, and debugger) is quite far from the initial vision of a simple interpreter for BASIC.
According to the predominant thinking in project management, my project is a failure. It delivered a product with many more features than initially planned, and it consumed more time than planned, two sins of project management.
Yet for me, the project is a success. First, I learned quite a bit about the Ruby programming language. Second -- and perhaps more important -- the product is much more capable and serves the user better.
* * * * *
This experience shows a difference in project management. As a side project, one without a firm budget or deadline, it was successful. The final product is much more capable than the initial vision. But more importantly, my motivation was to provide a better experience for the user.
That result is not desired in corporate software. Oh, I'm sure that corporate managers will quickly claim that they deliver a better experience to their customers. But they will do it only when their first priority has been met: profits for the corporation. And for those, the project must fit within expense limits and time limits. Thus, a successful corporate project delivers the initial vision on time and on budget -- not an expanded version that is late and over budget.
Friday, December 8, 2017
The cult of fastest
In IT, we (well, some of us) are obsessed with speed. The speed-cravers seek the fastest hardware, the fastest software, and the fastest network connections. They have been with us since the days of the IBM PC AT, which ran at 6MHz -- faster than the 4.77MHz of the IBM PC (and XT).
Now we see speed competition among browsers. First Firefox claims their browser is fastest. Then Google releases a new version of Chrome, and claims that it is the fastest. At some point, Microsoft will claim that their Edge browser is the fastest.
It is one thing to improve performance. When faced with a long-running job, we want the computer to be faster. That makes sense; we get results quicker and we can take actions faster. Sometimes it is reasonable to go to great lengths to improve performance.
I once had a job that compared source files for duplicate code. With 10,000 source files, and the need to compare each file against every other file, there were nearly 50,000,000 pairwise comparisons. Each comparison took about a minute, so the total job was projected to run for decades. I revised the job significantly, using a simpler (and faster) comparison to identify whether two files had any common lines of code, and then running the more detailed (and slower) comparison only on those pairs with over 1,000 lines of common code.
Looking for faster processing in that case made sense.
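The two-pass structure can be sketched in a few lines of Ruby; the file names and the threshold here are illustrative, not the actual job's values:

```ruby
require 'set'

# Cheap pass: how many distinct lines do two files share?
def common_line_count(lines_a, lines_b)
  (lines_a.to_set & lines_b.to_set).size
end

# Only pairs over the threshold go on to the expensive comparison.
def candidate_pairs(files, threshold)
  files.keys.combination(2).select do |a, b|
    common_line_count(files[a], files[b]) >= threshold
  end
end
```

The cheap pass still visits every pair, but a set intersection is orders of magnitude faster than a detailed comparison, so the expensive pass runs on only a small fraction of the pairs.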
But it is another thing to look for faster processing by itself.
Consider a word processor. Microsoft Word has been around for decades. (It actually started its life in MS-DOS.) Word was designed for systems with much smaller memory and much slower processors, and it still has some of that design. The code for Word is efficient. It spends most of its time not in processing words but in waiting for the user to type a key or click the mouse. Making the code twice as fast would not improve its performance (much), because the slowness comes from the user.
E-mail is another example. Most of the time for e-mail is spent, as with Word, waiting for the user to type something. When an e-mail is sent, it is passed from one e-mail server to another until it arrives at its destination. Faster servers would let the e-mail arrive sooner, but they don't help with the composition. The acts of writing and reading e-mail are bound by the human brain and physiology; faster processors won't help.
The pursuit of faster processing without definite benefits is, ironically, a waste of time.
Instead of blindly seeking faster hardware and software, we should think about what we want. We should identify the performance improvements that will benefit us. (For managers, this means lower cost or less time to obtain business results.)
Once we insist on benefits for improved performance, we find a new concept: the idea of "fast enough". When an improvement lets us meet a goal (a goal more specific than "go faster"), we can justify the effort or expense for faster performance. But once we meet that goal, we stop.
This is a useful tool. It allows us to eliminate effort and focus on changes that will help us. If we decide that our internet service is fast enough, then we can look at other things such as databases and compilers. If we decide that our systems are fast enough, then we can look at security.
Which is not to say that we should simply declare our systems "fast enough" and ignore them. The decision should be well-considered, especially in the light of our competitors and their capabilities. The conditions that let us rate our systems as "fast enough" today may not hold in the future, so a periodic review is prudent.
We shouldn't ignore opportunities to improve performance. But we shouldn't spend all of our effort for them and avoid other things. We shouldn't pick a solution because it is the fastest. A solution that is "fast enough" is, at the end of the day, fast enough.
Tuesday, November 28, 2017
Root with no password
Apple made the news today, and not in a good way. It seems that the latest version of macOS, "High Sierra", allows anyone sitting at a machine to gain access to administrative functions (guarded by a name-and-password dialog) by entering the name "root" and a password of ... nothing.
This behavior in macOS is not desired, and this "bug" is severe. (Perhaps the most severe defect I have seen in the industry -- and I started prior to Windows and MS-DOS, with CP/M and other operating systems.) But my point here is not to bash Apple.
My point is this: The three major operating systems for desktop and laptop computers (Windows, macOS, and Linux) are all very good, and none are perfect.
Decades ago, Apple had superior reliability and immunity from malware. That immunity was due in part to the design of macOS and in part to Apple's small market share. (Microsoft Windows was a more tempting target.) Those conditions have changed. Microsoft has improved Windows. Malware now targets macOS in addition to Windows. (And some targets Linux.)
Windows, macOS, and Linux each have strengths, and each has areas for improvement. Microsoft Windows has excellent support, good office tools, and good development tools. Apple's macOS has a (slightly) better user interface but a shorter expected lifespan. (Apple retires old hardware and software more quickly than Microsoft.) Linux is reliable, has lots of support, and many tools are available for free; but you have more work configuring it, and you must become (or hire) a system administrator.
If you choose your operating system based on the idea that it is better than the others, that it is superior to the other choices, then you are making a mistake -- possibly larger than Apple's goof. Which is best for you depends on the tasks you intend to perform.
So think before you choose. Understand the differences. Understand your use cases. Don't simply pick Microsoft because the competition is using it. Don't pick Apple because the screen looks "cool". Don't pick Linux because you want to be a rebel.
Instead, pick Microsoft when the tools for Windows are a good match for your team and your plans. Or pick macOS because you're working on iPhone apps. Or pick Linux because your team has experience with Linux and your product or service will run on Linux and serve your customers.
Think before you choose.