Saturday, August 19, 2017

Cloud, like other forms of computing, changes over time

Cloud computing has been with us for a while. In its short life, like other types of computing, it has changed.

"Cloud" started out as the outsourcing of system administration.

Then "cloud" was about scalability, and the ability to "spin up" servers as you needed them and "spin down" servers when they were not needed.

Shortly after, "cloud" was a cost-control measure: pay for only the servers you use.

For a while, "cloud" was a new type of system architecture with dedicated servers (web, database) connected by message queues.

Then "cloud" was about microservices, which are small web services that are less than complete applications. (Connect the right microservices in the right way, and you have an application!)

Lately, "cloud" has been all about containers, and the rapid and lightweight deployment of applications.

So what is "cloud computing", really?

Well, it's all of these things. As I see it, cloud computing is a new form of computing, different from mainframe computing, desktop computing, and web applications. Because it is a new form of computing, it has taken us a while to fully understand it.

We had similar transitions with desktop (or PC) computing and web applications. Early desktop microcomputers (the Apple II, the TRS-80, and even the IBM PC) were small, slow, and difficult to use. Over time, we improved those PCs: more powerful processors, bigger displays, more memory, simpler attachments (USB instead of serial), and better interfaces (Windows instead of DOS).

Web applications went through their own transitions, from static web pages to CGI Perl scripts to AJAX applications to new standards for HTML.

Cloud computing is undergoing a similar process. It shouldn't be a surprise; this process of gradual improvement is less about technology and more about human creativity. We're always looking for new ways of doing things.

One can argue that PCs and web applications have not stopped changing. We've just added touchscreens to desktop and laptop computers, and we've invented NoSQL databases for web applications (and mobile applications). It may be that cloud computing will continue to change, too.

It seems we're pretty good at changing things.

Sunday, August 13, 2017

Make it go faster

I've worked on a number of projects, and a (not insignificant) number of them had a requirement (or, more accurately, a request) to improve the performance of an existing system.

A computerized system consists of hardware and software. The software is a set of instructions that perform computations, and the hardware executes those instructions.

The most common method of improving performance is to use a faster computer: leave the software unchanged and run it on a faster processor. (Or, if the system performs a lot of I/O, use a faster disk, possibly an SSD instead of a spinning hard drive.) This method is simple and, because nothing in the software changes, low risk. It is the first method of reducing run time: perform the computations faster.

Another traditional method is to change your algorithm. Some algorithms are faster than others, often by using more memory. This method has higher risk, as it changes the software.
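
As an illustration (my own sketch, not tied to any particular project), here is the classic time-for-memory trade in C++: a naive recursive Fibonacci makes an exponential number of calls, while a memoized version spends memory on a cache so that each value is computed only once.

    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    // Naive recursion: no extra memory, but an exponential number of calls.
    std::uint64_t fib_naive(int n) {
        if (n < 2) return n;
        return fib_naive(n - 1) + fib_naive(n - 2);
    }

    // Memoized recursion: the cache costs memory, but each value is computed once.
    std::uint64_t fib_memo(int n, std::unordered_map<int, std::uint64_t>& cache) {
        if (n < 2) return n;
        auto it = cache.find(n);
        if (it != cache.end()) return it->second;
        std::uint64_t result = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
        cache[n] = result;
        return result;
    }

    int main() {
        std::unordered_map<int, std::uint64_t> cache;
        std::cout << fib_naive(35) << "\n";       // noticeably slow
        std::cout << fib_memo(90, cache) << "\n"; // fast, at the cost of the cache
        return 0;
    }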

Today, cloud computing offers another way to reduce computing time. If your calculations are partitionable (that is, subsets can be computed independently), then you can break a large set of computations into a group of smaller computations, assign each smaller set to its own processor, and compute them in parallel. Effectively, this is computing faster, provided that the gain from parallel processing outweighs the cost of partitioning your data and combining the multiple results.
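
To make the partitioning concrete, here is a sketch on a single machine, using plain C++ threads as a stand-in for separate cloud servers (an assumption for illustration, not a particular cloud service). It splits a large sum into chunks, computes each chunk in parallel, and then combines the partial results; that combining step is part of the overhead the parallel gain must outweigh.

    #include <cstddef>
    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // Sum one partition of the data.
    double partial_sum(const std::vector<double>& data, std::size_t begin, std::size_t end) {
        return std::accumulate(data.begin() + static_cast<std::ptrdiff_t>(begin),
                               data.begin() + static_cast<std::ptrdiff_t>(end), 0.0);
    }

    int main() {
        std::vector<double> data(1'000'000, 1.0);
        const std::size_t parts = 4;               // stand-ins for separate processors
        const std::size_t chunk = data.size() / parts;

        // Partition the work; each subset is computed independently.
        std::vector<std::future<double>> futures;
        for (std::size_t i = 0; i < parts; ++i) {
            std::size_t begin = i * chunk;
            std::size_t end = (i + 1 == parts) ? data.size() : begin + chunk;
            futures.push_back(std::async(std::launch::async, partial_sum,
                                         std::cref(data), begin, end));
        }

        // Combine the partial results; this step is part of the cost of partitioning.
        double total = 0.0;
        for (auto& f : futures) total += f.get();
        std::cout << total << "\n";
        return 0;
    }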

One overlooked method is using a different compiler. (I'm assuming that you're using a compiled language such as C or C++. If you are using Python or Ruby, or even Java, you may want to change languages.) Switching from one compiler to another can make a difference in performance. The code emitted by one compiler may be fine-tuned to a specific processor; the code from another compiler may be generic and intended for all processors in a family.

Switching from one processor to another may improve performance. Often, such a change requires a different compiler, so you are changing two things, not one. But a different processor may perform the computations faster.

Fred Brooks has written about essential complexity and accidental complexity. Essential complexity is necessary and unavoidable; accidental complexity can be removed from the task. There may be an equivalent in computations, with essential computations that are unavoidable and accidental computations that can be removed (or reduced). But removing accidental computations is merely reducing the number of computations.
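
As a small (and deliberately contrived) C++ example of the distinction -- my analogy, not Brooks's -- the essential work below is one multiplication per element; recomputing the same scale factor inside the loop is accidental computation that can simply be removed.

    #include <cmath>
    #include <vector>

    // Accidental computation: the same scale factor is recomputed for every element.
    void scale_slow(std::vector<double>& values, double base, double exponent) {
        for (double& v : values) {
            v *= std::pow(base, exponent);   // identical result on every iteration
        }
    }

    // Essential computation only: one multiplication per element remains;
    // the repeated pow() call has been hoisted out of the loop.
    void scale_fast(std::vector<double>& values, double base, double exponent) {
        const double factor = std::pow(base, exponent);
        for (double& v : values) {
            v *= factor;
        }
    }

    int main() {
        std::vector<double> values(1000, 2.0);
        scale_slow(values, 1.5, 3.0);
        scale_fast(values, 1.5, 3.0);
        return 0;
    }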

To improve performance you can either perform the computations faster or you can reduce the number of computations. That's the list. You can use a faster processor, you can change your algorithms, you can change from one processor and compiler to a different processor and compiler. But there is an essential number of computations, in a specific sequence. You can't go below that limit.

Wednesday, August 2, 2017

Agile is for startups; waterfall for established projects

The Agile method was conceived as a revolt against the Waterfall method. Agile was going to be everything that Waterfall was not: lightweight, simple, free of bureaucracy, and successful. In retrospect, Agile was successful, but that doesn't mean that Waterfall is obsolete.

It turns out that Agile is good for some situations, and Waterfall is good for others.

Agile is effective when the functionality of the completed system is not known in advance. The Agile method moves forward in small steps which explore functionality, with reviews after each step. The Agile method also allows for changes in direction after each review. Agile provides flexibility.

Waterfall is effective when the functionality and the delivery date are known in advance. The Waterfall method starts with a set of requirements and executes a plan for analysis, design, coding, testing, and deployment. It commits to delivering that functionality on the delivery date, and does not allow (in its pure form) for changes in direction. Waterfall provides predictability.

Established companies, which have an ongoing business with procedures and policies, often want the predictability that Waterfall offers. They have business partners and customers and people to whom they make commitments (such as "a new feature will be ready for the new season"). They know in detail the change they want and they know precisely when they want the change to be effective. For them, the Waterfall method is appropriate.

Start-ups, on the other hand, are building their business model and are not always certain of how it will work. Many start-ups adjust their initial vision to achieve profitability. Start-ups don't yet have customers and business partners, or at least not ones with long-standing relationships and specific expectations. Start-ups expect to make changes to their plan. While they want to start offering services and products quickly (to generate income), they don't have a specific date in mind (other than the "end of runway" date when they run out of cash). Their world is very different from the world of the established company. They need the flexibility of Agile.

The rise of Agile did not mean the death of Waterfall, despite the intentions of some of its instigators. Both have strengths, and each can be used effectively. (Of course, each can be misused, too. It is quite easy to manage a project "into the ground" with the wrong method, or with a poorly applied method.)

The moral is: know what is best for your project and use the right methods.