I like the Ruby programming language. I've been using it for several projects, including an interpreter for the BASIC language. (The interpreter for BASIC was an excuse to do something in Ruby and learn about the language.)
My experience with Ruby has been a good one. I find that the language lets me do what I need, and often quite quickly. The built-in classes are well designed and provide the functions I need. From time to time I have to add some obscure capability, but those occasions are rare.
Yet Ruby has a sharp edge to it, an aspect that can cause trouble if you fail to pay attention.
That aspect is one of its chief features: flexibility.
Let me explain.
Ruby is an object-oriented language, which means the programmer can define classes. Each class has a name, some functions, and usually some private data. You can do quite a bit with class definitions, including class variables, class instance variables, and mix-ins to implement features in multiple classes.
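For readers less familiar with Ruby, here is a minimal sketch (with made-up names, not code from my interpreter) of the three features just mentioned: a mix-in, a class variable, and a class instance variable.

# A minimal sketch with made-up names.
module Describable            # a mix-in: behavior shared by any class that includes it
  def describe
    "I am a #{self.class.name}"
  end
end

class Widget
  include Describable

  @@count = 0                 # class variable: shared with any subclasses
  @catalog = []               # class instance variable: belongs to Widget itself

  def self.count;   @@count;  end
  def self.catalog; @catalog; end

  def initialize(name)
    @name = name              # ordinary instance variable: per-object private data
    @@count += 1
    self.class.catalog << name
  end
end

Widget.new("gear")
Widget.new("lever")
puts Widget.count                 # => 2
puts Widget.catalog.inspect       # => ["gear", "lever"]
puts Widget.new("cam").describe   # => "I am a Widget"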
You can even modify existing classes, simply by declaring the same class name and defining new functions. Ruby accepts a second definition of a class and merges it into the first definition, quietly and efficiently. And that's the sharp edge.
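Here is a small illustration of that merging behavior, with a made-up class name:

# A made-up example of reopening a class.
class Report
  def title
    "Quarterly Report"
  end
end

# Later, perhaps in another file, the same class name is declared again.
# Ruby quietly reopens Report and merges the new method into it.
class Report
  def pages
    42
  end
end

r = Report.new
puts r.title   # => "Quarterly Report"
puts r.pages   # => 42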
This "sharp edge" cut me when I wasn't expecting it. I was working on my BASIC interpreter, and had just finished a class called "Matrix", which implemented matrix operations within the language. My next enhancement was for array operations (a matrix being a two-dimensional structure and an array being a one-dimensional structure).
I created a class called "Array" and defined some functions for it, including a "to_s" function. (The name "to_s" is the Ruby equivalent of "ToString()" or "to_string()" in other languages.)
And my code behaved oddly. Existing functions, having nothing to do with arrays or my Array class, broke.
Experienced Ruby programmers are probably chuckling at this description, knowing the problem.
Ruby has its own Array class, and my Array class was not a new class but a modification of the existing, built-in class named "Array". My program, in actuality, was quite different from what I had imagined. When I defined the function "to_s" in "my" Array class, I was actually overwriting the existing "to_s" function in the Ruby-supplied Array class. And that happened quietly and efficiently -- no warning, no error, no informational message.
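The collision can be reconstructed in a few lines (this is an illustration, not my actual interpreter code):

# "class Array" does not create a new class here; it reopens Ruby's
# built-in Array class and replaces its to_s method.
class Array
  def to_s
    "my array: " + join(",")
  end
end

# Code that has nothing to do with the interpreter now behaves differently,
# because string conversion of every array goes through the new to_s.
puts [1, 2, 3].to_s         # => "my array: 1,2,3"   (was "[1, 2, 3]")
puts "values: #{[4, 5]}"    # => "values: my array: 4,5"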
Part of this problem is my fault: I was not on guard against such a collision. But part of the problem, I believe, is Ruby's -- specifically the design of Ruby. Letting one modify an existing class so easily, with no warning, is dangerous. And I say this not simply because of my background with languages that use static checking.
My error aside, I can think of two situations in which this can be a problem. The first is when a new version of the Ruby language (and its system libraries) is released. Are there new classes defined in the libraries? Could the names of those classes duplicate any names I have used in my project? For example, will Ruby one day come with a class named "Matrix"? If it does, it will collide with my class named "Matrix". How will I know that there is a duplicate name?
The second situation is on a project with multiple developers. What happens if two developers create classes with the same name? Will they know? Or will they have to wait for something "weird" to happen?
Ruby has some mechanisms to prevent this problem. One can use namespaces (modules) within the Ruby language to prevent such name conflicts. A simple grep of the code for class definitions (a pattern such as "class [A-Z]\w+"), followed by a sort, will identify duplicate names. But these solutions require discipline and will -- they don't come "for free".
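Here is a sketch of the namespace approach, using a made-up module name; the interpreter's classes live inside the module and no longer share names with Ruby's built-in classes:

# A sketch with a made-up module name.
module Basic
  class Array                 # Basic::Array is distinct from Ruby's ::Array
    def initialize(values)
      @values = values
    end

    def to_s
      "(" + @values.join(", ") + ")"
    end
  end
end

puts Basic::Array.new([1, 2, 3]).to_s   # => "(1, 2, 3)"
puts [1, 2, 3].to_s                     # => "[1, 2, 3]"  (built-in Array untouched)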
As I said earlier, this is a sharp edge to Ruby. Is it a defect? No; I think this is the expected behavior for the language. But it is an aspect of the language, and one that may limit the practicality of Ruby for large applications.
I started this post with the statement that I like Ruby. I still like Ruby. It has a sharp edge (like all useful tools), and I think we should be aware of it.
Tuesday, July 18, 2017
Sunday, July 9, 2017
Cloud and optimizations
We all recognize that cloud computing is different.
It may be that cloud computing breaks some of our algorithms.
A colleague of mine, a long time ago, shared a story about programming early IBM mainframes. They used assembly language, because code written in assembly executed faster than code written in COBOL. (And for business applications on IBM mainframes, at the time, those were the only two options.)
Not only did they write in assembly language, they wrote code to be fast. That is, they "optimized" the code. One of the optimizations was with the "multiply" instruction.
The multiply instruction does what you think: it multiplies two numbers and stores the result. To optimize it, they wrote the code to place the larger of the two values in one register and the smaller of the two values in the other register. The multiply instruction was implemented as a "repeated addition" operation, so the second register was really a count of the number of addition operations that would be performed. By storing the smaller number in the second register, programmers reduced the number of "add" operations and improved performance.
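To make the arithmetic concrete, here is a toy sketch of the idea in Ruby (the original, of course, was mainframe assembly working with registers):

# Multiplication as repeated addition; the second argument is the loop count.
def multiply_by_repeated_addition(value, count)
  additions = 0
  result = 0
  count.times do
    result += value
    additions += 1
  end
  [result, additions]
end

# Same product, very different amounts of work:
p multiply_by_repeated_addition(3, 1000)   # => [3000, 1000]  (1000 additions)
p multiply_by_repeated_addition(1000, 3)   # => [3000, 3]     (3 additions)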
(Technically inclined folks may balk at the notion of reducing a multiply operation to repeated additions, and observe that it works for integer values but not floating-point values. The technique was valid on early IBM equipment, because the numeric values were either integers or fixed-point values, not floating-point values.)
It was an optimization that was useful at the time, when computers were relatively slow and relatively expensive. Today's faster, cheaper computers can perform multiplication quite quickly, and we don't need to optimize it.
Over time, changes in technology make certain optimizations obsolete.
Which brings us to cloud computing.
Cloud computing is a change in technology. It makes available a variable number of processors.
Certain problems have a large number of possible outcomes, with only certain outcomes considered good. The problems could describe the travels of a salesman, or the number of items in a sack, or playing a game of checkers. We have algorithms to solve specific configurations of these problems.
One algorithm is the brute-force, search-every-possibility method, which does just what you think. While it is guaranteed to find an optimal solution, there are sometimes so many solutions (millions upon millions, or billions, or quintillions) that this method is impractical.
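As a tiny, concrete illustration (a sketch, not any production solver), here is the brute-force approach applied to the "items in a sack" problem; with n items there are 2**n subsets to test, which is why the method collapses so quickly as n grows:

# Brute force: test every subset of items against the sack's capacity.
def best_subset(items, capacity)
  best_value = 0
  best_items = []
  (0...(1 << items.size)).each do |mask|          # one mask per subset: 2**n of them
    subset = items.each_with_index.select { |_, i| mask[i] == 1 }.map(&:first)
    weight = subset.sum { |item| item[:weight] }
    value  = subset.sum { |item| item[:value] }
    if weight <= capacity && value > best_value
      best_value = value
      best_items = subset
    end
  end
  [best_value, best_items]
end

items = [{ weight: 3, value: 4 }, { weight: 2, value: 3 }, { weight: 4, value: 5 }]
p best_subset(items, 5)   # => [7, [{:weight=>3, :value=>4}, {:weight=>2, :value=>3}]]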
Faced with an impractical algorithm, we invent others. Many are iterative algorithms which start with a set of conditions and then move closer and closer to a solution by making adjustments to the starting conditions. Other algorithms discard certain possibilities ("pruning") which are known to be no better than current solutions. Both techniques reduce the number of tested possibilities and therefore reduce the time to find a solution.
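A sketch of the first technique, iterative improvement, in the same spirit (the neighborhood and scoring functions here are made up for illustration):

# Hill climbing: start somewhere, repeatedly move to a better neighboring
# solution, and stop when no neighbor is better. Far fewer candidates are
# tested than in brute force, but the answer may be only a local optimum.
def hill_climb(start, neighbors, score)
  current = start
  loop do
    better = neighbors.call(current).max_by { |candidate| score.call(candidate) }
    break if better.nil? || score.call(better) <= score.call(current)
    current = better
  end
  current
end

# Toy problem: find x maximizing -(x - 7)**2 by stepping up or down by 1.
neighbors = ->(x) { [x - 1, x + 1] }
score     = ->(x) { -(x - 7)**2 }
p hill_climb(0, neighbors, score)   # => 7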
But observe: The improved algorithms assume a set of sequential operations. They are designed for a single computer (or a single person), and they are designed to minimize time.
With cloud computing, we no longer have a single processor. We have multiple processors, each operating in parallel. Algorithms designed to optimize for time on a single processor may not be suitable for cloud computing.
Instead of using one processor to iteratively find a solution, it may be possible to harness thousands (millions?) of cloud-based processors, each working on a distinct configuration. Instead of examining solutions in sequence, we can examine solutions in parallel. The result may be a faster solution to the problem, in terms of "wall time" -- the time we humans are waiting for the solution.
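A conceptual sketch of that shift, with plain Ruby threads standing in for cloud processors (a real system would ship each configuration to a separate machine, the scoring function here is entirely made up, and MRI threads do not give true CPU parallelism anyway; the point is the structure, not the speed-up):

# Evaluate many candidate configurations concurrently and keep the best one.
def score(configuration)
  configuration.sum { |x| x * x }          # a made-up objective function
end

configurations = Array.new(8) { Array.new(3) { rand(10) } }

workers = configurations.map do |config|
  Thread.new { [config, score(config)] }   # stand-in for "send to a cloud processor"
end

results = workers.map(&:value)             # wait for every worker and collect results
best_config, best_score = results.min_by { |_, s| s }

puts "best configuration: #{best_config.inspect} (score #{best_score})"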
I recognize that this approach has its costs. Cloud computing is not free, in terms of money or in terms of computing time. Money aside, there is a cost in creating the multiple configurations, sending them to their respective cloud processors, and then comparing the many results. That time is a cost, and it must be included in our evaluation.
None of these ideas are new to the folks who have been working with parallel processing. There are studies, papers, and ideas, most of which have been ignored by mainstream (sequential) computing.
Cloud computing will lead, I believe, to the re-evaluation of many of our algorithms. We may find that many of them have a built-in bias for single-processor operation. The work done in parallel computing will be pertinent to cloud computing.
Cloud computing is a very different form of computing. We're still learning about it. The application of concepts from parallel processing is one aspect of it. I won't be surprised if there are more. There may be all sorts of surprises ahead of us.
Labels:
algorithms,
cloud computing,
optimization,
parallel processing
Sunday, July 2, 2017
It's not always A or B
We folks in IT almost pride ourselves on our fierce debates over technologies. And we have so many of them: emacs vs. vim, Windows vs. Mac, Windows vs. Linux, C vs. Pascal, C# vs. Java, ... the list goes on and on.
But the battles in IT are nothing compared to the fight between the two different types of electricity. In the late 1800s and early 1900s, Edison led the group for direct current, and Tesla led the alternate group for, well, alternating current. The battle between these two made our disputes look like a Sunday picnic. Edison famously electrocuted an elephant -- with the "wrong" type of electricity, of course.
I think we in IT can learn from the Great Electricity War. (And it's not that we should be electrocuting elephants.)
Despite all of the animosity, despite all of the propaganda, despite all of the innovation on both sides, neither format "won". Neither vanquished its opponent. We use both types of electricity.
For power generation, transmission, and large appliances, we use alternating current. (Large appliances include washing machines, dryers, refrigerators, and vacuum cleaners.)
Small appliances (personal computers, digital televisions, calculators, cell phones) use direct current. They may plug into the AC wall outlet, but the first thing they do is convert 110 VAC into lower-voltage DC.
Alternating current has advantages in certain situations and direct current has advantages in others. It's not that one type of electricity is better than the other; it's that one type is better for a specific application.
We have a multitude of solutions in IT: multiple operating systems, multiple programming languages, multiple editors, multiple hardware platforms... lots and lots of choices. We too often pick one of many, name it our "standard", and force entire companies to use that one selection. That may be convenient for the purchasing team, and probably for the support team, but is it the best strategy for a company?
Yes, we in IT can learn a lot from electricity. And please, respect the elephants.
Labels:
editor wars,
electricity,
language wars,
operating system wars