Showing posts with label agile methods. Show all posts

Monday, September 4, 2017

Agile can be cheap; Waterfall is expensive

Agile can be cheap, but Waterfall will always be expensive.

Here's why:

Waterfall starts its process with an estimate. The Waterfall method uses a set of phases (analysis, design, coding, testing, and deployment) which are executed according to a fixed schedule. Many Waterfall projects assign specific times to each phase. Waterfall needs this planning because it makes a promise: deliver a set of features on a specific date.

But notice that Waterfall begins with an estimate: the features that can be implemented in a specific time frame. That estimate is crucial to the success of the project. What is necessary to obtain that estimate?

Only people with knowledge and experience can provide a meaningful estimate. (One could, foolishly, ask an inexperienced person for the estimate, but that estimate has no value.)

What knowledge does that experienced person need? Here are some ideas:
- The existing code
- The programming language and tools used
- The different teams involved in development and testing
- The procedures and techniques used to coordinate efforts
- The terms and concepts used by the business

With knowledge of these, a person can provide a reasonable estimate for the effort.

These areas of knowledge do not come easily. They can be learned only by working on the project and in different capacities.

In other words, the estimate must be provided by a senior member of the team.

In other words, the team must have at least one senior member.

Waterfall relies on team members having knowledge about the business, the code, and the development processes.

Agile, in contrast, does not rely on that experience. Agile is designed to allow inexperienced people to work on the project.

Thus, Agile projects can get by without senior, experienced team members, but Waterfall projects must have at least one (and probably more) senior team members. Since senior personnel are more expensive than junior, and Waterfall requires senior personnel, we can see that Waterfall projects will, on average, cost more than Agile projects. (At least in terms of per-person costs.)

Do not take this to mean that you should run all projects with Agile methods. Waterfall may be more expensive, but it provides different value. It promises a specific set of functionality on a specific date, a promise that Agile does not make. If you need the promises of Waterfall, it may be worth the extra cost (higher wages). This is a business decision, similar to using proprietary tools over open-source tools, or leasing premium office space in the suburbs over discount office space in a not-so-nice part of town.

Which method you choose is up to you. But be aware that they are not the same, not only in terms of deliverables but in staffing requirements. Keep those differences in mind when you make your decision.

Sunday, July 31, 2016

Agile pushes ugliness out of the system

Agile differs from Waterfall in many ways. One significant way is that Agile handles ugliness, and Waterfall doesn't.

Agile starts by defining "ugliness" as an unmet requirement. It could be a new feature or a change to an existing one. The Agile process sees the ugliness move through the system, from requirements to test to code to deployment. (Waterfall, in contrast, has the notion of requirements but not the concept of ugliness.)

Let's look at how Agile considers ugliness to be larger than just unmet requirements.

The first stage is an unmet requirement. With the Agile process, development occurs in a set of changes (sometimes called "sprints") with a small set of new requirements. Stakeholders may have a long list of unmet requirements, but a single sprint handles a small, manageable set of them. The "ugliness" is the fact that the system (as it is at the beginning of the sprint) does not perform them.

The second stage transforms the unmet requirements into tests. By creating a test -- an automated test -- the unmet requirement is documented and captured in a specific form. The "ugliness" has been captured and specified.

After capture, changes to code move the "ugliness" from a test to code. A developer changes the system to perform the necessary function, and in doing so changes the code. But the resulting code may be "ugly" -- it may duplicate other code, or it may be difficult to read.

The fourth stage (after unmet requirements, capture, and coding) removes the "ugliness" from the code. This is the "refactoring" stage, when code is improved without changing the functions it performs. After refactoring, the "ugliness" is gone.
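To make the stages concrete, here is a small sketch in Python. (The requirement -- an order total that includes sales tax -- and all of the names are invented for this illustration.)

```python
# Stage 1: the unmet requirement -- "an order total must include sales tax".

# Stage 2: the requirement captured as an automated test.
def test_total_includes_tax():
    assert abs(order_total([10.00, 20.00], tax_rate=0.05) - 31.50) < 1e-9

# Stage 3: a first implementation might be "ugly" -- duplicated arithmetic,
# cryptic names -- yet still pass the test. Stage 4 refactors it into this:
def order_total(prices, tax_rate):
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)

test_total_includes_tax()  # the captured "ugliness" is now gone
```

The test survives the refactoring unchanged; only the code behind it improves.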

The ability to handle "ugliness" is the unique capability of Agile methods. Waterfall has no concept of code quality. It can measure the number of defects, the number of requirements implemented, and even the number of lines of code, but it doesn't recognize the quality of the code. To Waterfall, the quality of code is simply its ability to deliver functionality. This means that ugly code can collect, and collect, and collect. There is nothing in Waterfall to address it.

Agile is different. Agile recognizes that code quality is important. That's the reason for the "refactor" phase. Agile transforms requirements into tests, then into ugly code, and finally into beautiful (or at least non-ugly) code. The result is requirements that are transformed into maintainable code.

Wednesday, November 11, 2015

Big changes happen early

Software has a life cycle: It is born, it grows, and finally dies. That's not news, or interesting. What is interesting is that the big changes in software happen early in its life.

Let's review some software: PC-DOS, Windows, and Visual Basic.

PC-DOS saw several versions, from 1.0 to 6.0. There were intermediate versions, such as versions 3.3 and 3.31, so there were more than six versions.

Yet the big changes happened early. The transition from 1.0 to 2.0 saw big changes in the API, allowing new device types and especially subdirectories. Moving from version 1.0 to 2.0 required almost a complete re-write of an application. Moving applications from version 2.0 to later versions required changes, but not as significant. The big changes happened early.

Windows followed a similar path. Moving from Windows 1 to Windows 2 was a big deal, as was moving from Windows 2 to Windows 3. The transition from Windows 3 to Windows NT was big, as was the change from Windows 3.1 to Windows 95, but later changes were small. The big changes happened early.

Visual Basic versions 1, 2, and 3 all saw significant changes. Visual Basic 4 had some changes but not as many, and Visual Basic 5 and 6 were milder. The big changes happened early. (The change from VB6 to VB.NET was large, but that was a change to another underlying platform.)

There are other examples, such as Microsoft Word, Internet Explorer, and Visual Studio. The effect is not limited to Microsoft. Lotus 1-2-3 followed a similar arc, as did dBase, R:Base, the Android operating system, and Linux.

Why do big changes happen early? Why do the big jumps in progress occur early in a product's life?

I have two ideas.

One possibility is that the makers and users of an application have a target in mind, a "perfect form" of the application, and each generation of the product moves closer to that ideal form. The first version is a first attempt, and successive versions improve upon previous versions. Over the life of the application, each version moves closer to the ideal.

Another possibility is that changes to an application are constrained by the size of the user population. A product with few users can see large changes; a product with many users can tolerate only minor changes.

Both of these ideas explain the effect, yet they both have problems. The former assumes that the developers (and the users) know the ideal form and can move towards it, albeit in imperfect steps (because one never arrives at the perfect form). My experience in software development allows me to state that most development teams (if not all) are not aware of the ideal form of an application. They may think that the first version, or the current version, or the next version is the "perfect" one, but they rarely have a vision of some far-off version that is ideal.

The latter has the problem of evidence. While many applications grow their user base over time and also shrink their changes over time, not all do. Two examples are Facebook and Twitter. Both have grown (to large user bases) and both have seen significant changes.

A third possibility, one that seems less theoretical and more mundane, is that as an application grows, and its code base grows, it is harder to make changes. A small version 1 application can be changed a lot for version 2. A large version 10 application has oodles of code and oodles of connected bits of code; changing any bit can cause lots of things to break. In that situation, each change must be reviewed carefully and tested thoroughly, and those efforts take time. Thus, the older the application, the larger the code base and the slower the changes.

That may explain the effect.

Some teams go to great lengths to keep their code well-organized, which allows for easier changes. Development teams that use Agile methods will re-factor code when it becomes "messy" and reduce the couplings between components. Cleaner code allows for bigger and faster changes.

If changes are constrained not by large code but by messy code, then as more development teams use Agile methods (and keep their code clean) we will see more products with large changes not only early in the product's life but through the product's life.

Let's see what happens with cloud-based applications. These are distributed by nature, so there is already an incentive for smaller, independent modules. Cloud computing is also younger than Agile development, so all cloud-based systems could have been (I stress the "could") developed with Agile methods. It is likely that some were not, but it is also likely that many were -- more than desktop applications or web applications.

Monday, June 1, 2015

Waterfall or Agile -- or both

A lot has been written (and argued) about waterfall methods and agile methods. Each has advocates. Each has detractors. I take a different path: they are two techniques for managing projects, and you may want to use both.

Waterfall, the traditional method of analysis, design, coding, testing, and deployment, makes the promise of a specific deliverable at a specific time (and at a specific cost). Agile, the "young upstart" makes the promise of frequent deliverables that function -- although only with what has been coded, not necessarily everything you may want.

Waterfall and agile operate in different ways and work in different situations. Waterfall works well when you and your team know the technology, the tools, the existing code, and the business rules. Agile works when you and your team are exploring new areas (technology, code, business rules, or a combination). Agile provides the flexibility to change direction quickly, where waterfall locks you in to a plan.

Waterfall does not work when there are unknowns. A new technology, for example. Or a new team looking at an existing code base. Or perhaps significant changes to business rules (where "significant" is smaller than you think). Waterfall's approach of defining everything up front cannot handle the uncertainties, and its schedules are likely to fail.

If your shop has been building web applications and you decide to switch to mobile apps, you have a lot of uncertainties. New technologies, new designs for applications, and changes to existing web services are required. You may be unable to list all of the tasks for the project, much less assign reasonable estimates for resources and time. If your inputs are uncertain, how can the resulting plan be anything but uncertain?

In that situation, it is better to use agile methods to learn the new technologies. Complete some small projects, perhaps for internal use, that use the tools for mobile development. Gain experience with them. Learn the hazards and understand the risks.

When you have experience, use waterfall to plan your projects. With experience behind you, your estimates will be better.

You don't have to use waterfall or agile exclusively. Some projects (perhaps many) require some exploration and research. That surveying is best done with agile methods. Once the knowledge is learned, once the team is familiar with the technology and the code, a waterfall project makes good business sense. (You have to deliver on time, don't you?)

As a manager, you have two tools to plan and manage projects. Use them effectively.

Tuesday, December 30, 2014

Agile is not for everyone

Agile development is a hot topic for many project managers. It is the latest management fad, following total cost of ownership (TCO) and total quality management (TQM). (Remember those?)

So here is a little bit of heresy: Agile development methods are not for every project.

To use agile methods effectively, one must understand what they offer -- and what they don't.

Agile methods make one guarantee: that your system will always work, for the functionality that has been built. Agile methods (stakeholder involvement, short development cycles, automated testing) ensure that the features you build will work as you expect. Even as you add new features, your automated tests ensure that old features still work. Thus, you can always send the most recent version of your system to your customers.

But agile methods don't make every promise. Specifically, they don't promise that all of your desired features will be available (and working) on a specific date. You may have a list of one hundred features; agile lets you implement those features in a desired order but does not guarantee that they will all be completed by an arbitrary date. The guarantee is only that the features implemented by that date will work. (And you cannot demand that all features be implemented by that date -- that's not agile development.)

Agile does let you make projections about progress, once you have experience with a team, the technology, and a set of features for a system. But these projections must be based on experience, not on "gut feel". Also, the projections are just that: projections. They are estimates and not guarantees.

Certain businesses want to commit to specific features on specific dates, perhaps to deliver a system to a customer. If that is your business, then you should look carefully at agile methods and understand what they can provide. It may be that the older "waterfall" methods, which do promise a specific set of features on a specific date, are a better match.

Thursday, November 13, 2014

Cloud and agile change the rules

The history of computer programming is full of attempts to ensure success, or more specifically, to avoid failure. The waterfall method of separating analysis, design, and coding (with reviews after each step) is one such technique. Change reviews are another. System testing is another. Configuration management (especially for production systems) is another.

It strikes me that cloud computing and agile development techniques are yet more methods in our quest to avoid failure. But they change the rules from previous efforts.

Cloud computing tolerates failures of equipment. Agile development guards against failures in programming.

Cloud computing uses multiple instances of servers. It also uses stateless transactions, so any server can handle any request. (Well, any web server can handle any web request, and any database server can handle any database request.) If a server fails, another server (of the same type) can pick up the work.
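A small sketch can show the difference between stateful and stateless handlers. (The handler names and the session store are invented for illustration; `shared_store` stands in for an external database or cache.)

```python
# Stateful (poor fit for the cloud): the session lives in one server's
# memory, so only that server can handle this user's next request.
local_sessions = {}

def stateful_handler(user_id, request):
    cart = local_sessions.setdefault(user_id, [])
    cart.append(request)
    return cart

# Stateless (cloud-friendly): each request looks up all the state it
# needs from shared storage, so any identical server instance -- including
# a replacement for a failed server -- can process it.
def stateless_handler(user_id, request, shared_store):
    cart = shared_store.get(user_id, []) + [request]
    shared_store[user_id] = cart   # shared_store stands in for a database
    return cart
```

With the stateless form, a failed server can simply be replaced, and the user's next request routed anywhere.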

Cloud computing cannot, however, handle a failure in code. If I write a request handler and get the logic wrong, then each instance of the handler will fail.

Agile development handles the code failures. Agile development ensures that the code is always correct. With automated tests and small changes, the code can grow and programmers (and managers) can know that the added features are correct.

These two techniques (cloud and agile) let us examine some of the strategies we have used to ensure success.

For hardware, we had long product life cycles. We selected products that were known to be reliable. For mainframes and early PCs, this was IBM. (For later PCs it was Compaq.) The premium brands commanded premium prices, because we valued the reliability of the equipment and the vendor. And since the equipment was expensive, we planned to use it for a long time.

For software, we practiced "defensive coding" and had each function check its inputs for invalid values or combinations of values. We held code reviews. We made the smallest changes possible, to reduce risk. We avoided large changes that would improve the readability of the code because we could not be sure that the revised code would work as expected in all cases.

In light of cloud computing's cheap hardware and agile development's pair programming and automated testing, these strategies may no longer be the best practice. Our servers are virtual, and while we want the underlying "big iron" to be reliable and long-lived, the servers themselves may have short lives. If that is the case, the "standard" server configuration may change over time, more frequently than we changed our classic, non-virtual servers.

The automated testing of agile development changes our approach to program development. Before comprehensive automated testing, minimal changes were prudent, as we could not know that a change would have an unintended effect. A full set of automated tests provides complete coverage of a program's functionality, so we can be bolder in our changes to a program. Re-factoring a small section of code (or a large section) is possible; our tests will verify that we have introduced no defects.
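As a hypothetical example, consider a verbose function and its bold rewrite; the automated test is what makes the rewrite safe. (The function and its sample inputs are invented for illustration.)

```python
def word_count_v1(text):
    # original: a verbose, manual loop
    count = 0
    in_word = False
    for ch in text:
        if ch.isspace():
            in_word = False
        elif not in_word:
            in_word = True
            count += 1
    return count

def word_count_v2(text):
    # bold refactoring: same behavior, far less code
    return len(text.split())

# The automated test that makes the rewrite safe: if the refactored
# version agrees with the original on these cases, no defect was introduced.
for sample in ["", "one", "two words", "  padded   input  "]:
    assert word_count_v1(sample) == word_count_v2(sample)
```

Without such a test, prudence would dictate leaving the verbose version alone.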

Cloud computing and agile development change the rules. Be aware of the changes and change your procedures to keep up.

Monday, August 11, 2014

Agile is not compatible with silos

Agile development methods are very different from the traditional Waterfall methods. So different that they can affect the culture of the organization.

Agile makes a different promise than Waterfall. Waterfall promises a specific deliverable on a specific date; Agile promises that you can ship whenever you want.

Agile discourages specialization. An iteration is short yet requires analysis, development, and testing. Such a short cycle does not allow for different individuals to perform different tasks.

Yet the biggest difference between Agile and Waterfall is the partitioning of tasks and the encapsulation of information. Waterfall strives for clean, discrete changes from one phase to another, with information flowing between phases in well-defined documents. The flow between the requirements phase and the development phase is the requirements document (or documents). The test results are presented in a specific document. And so on.

Information in each phase is encapsulated in that phase, and only a small set of information is allowed to transfer (one might say 'leak') to another phase.

The partitioning of tasks and the encapsulation of information leads to silos within the organization. Once separate teams are established for requirements, development, testing, and deployment, tensions arise between teams. The testing team identifies defects that reflect on the development team. The development team blames the requirements team for incomplete or ambiguous specifications.

Agile -- at least Agile for small teams -- has none of that. The fast cycles of feature selection, design, development, and test provide immediate feedback. An ambiguous requirement is spotted early, and it is obvious to everyone. Defects are identified and fixed before implementing the next feature set.

More importantly, an Agile project has one team, and the measurement of success for that team is the delivery of software. That focus on success and the inability to shift blame to another team means that it is harder to establish silos.

Which is not to say that Agile will eliminate all silos. An organization with many Agile projects can still have silos. A large company using an "Agile for large companies" process may develop silos.

But for the most part, I believe Agile processes are incompatible with silos. The involvement of necessary stakeholders; the coordinated work of design, development, and testing; and the fast cycle times all push against silo-ization.

Wednesday, November 20, 2013

We need a new UML

The Object Management Group has released a new version of UML. The web site for Dr. Dobb's asks the question: Do You Even Care? It's a proper question.

It's proper because UML, despite a spike of interest in the late 1990s, has failed to move into the mainstream of software development. While the Dr. Dobb's article claims ubiquity ("dozens of UML books published, thousands of articles and blogs posted, and thousands of training classes delivered"), UML is anything but ubiquitous. If anything, UML has been ignored in the latest trends of software: agile development techniques and functional programming. It is designed for large projects and large teams designing the system up front and implementing it according to detailed documents. It is designed for systems built with mutable objects, and functional programming avoids both objects and mutable state.

UML was built to help us design and build large complex systems. It was meant to abstract away details and let us focus on the structure, using a standard notation that could be recognized and understood by all practitioners. We still need those things -- but UML doesn't work for a lot of projects. We need a new UML, one that can work with smaller projects, agile projects, and functional programming languages.

Monday, October 28, 2013

The Cult of Accountability

The disappointing performance of the medical insurance exchange web site (the "Obamacare web site") shows the dark side of current project management techniques. After an initial specification, a long wait while multiple teams worked on various parts of the system, and a hasty integration, the web site has numerous, significant problems. Now we have calls from a group I call the "Cult of Accountability". They (the cult) want to know "who is responsible" for the failure.

Big projects often work this way: A large project is assigned to a team (for government projects, the "prime contractor") along with specifications of the deliverable. That team breaks the project into smaller components and assigns components to teams, internal or external (the "sub-contractors") along with specifications for those components. When the work is complete, the work moves in the reverse direction, with the bottom layer of teams providing their components to the next higher layer, those teams assembling the components and providing the results to the next higher layer, until the top team assembles components into a finished product.

This cycle of functional decomposition and specification continues for some number of cycles. Notice that each team starts with a specification, divides the work into smaller pieces, and provides specifications to the down-stream teams.

The top-down design and project planning for many projects is a process that defines tasks, assigns resources, and specifies delivery dates up front. It locks in a deliverable of a specified functionality, a particular design, and a desired level of quality, all on an agreed date. It defines the components and assigns responsibility for each component.

The "divide and conquer" strategy works... if the top team knows everything about the desired deliverable and can divide the work into sensible components, and if the down-stream teams know everything about their particular piece. This is the case for work that has already been done, or work that is very similar to previous work. The assembly of automobiles, for example: each car is a "product" and can be assembled by following well-defined tasks. The work can be divided among multiple teams, some external to the company. The specifications for each part, each assembly, each component, are known and understood.

The "divide and conquer" strategy works poorly for projects that are not similar to previous work. Projects in "unexplored territory" contain a large number of "unknowns". Some are "known unknowns" (we know that we need to test the performance of our database with the expected level of transactions) and some are "unknown unknowns" (we didn't realize that our network bandwidth was insufficient until we went to production). "Unknowns" is another word for "surprises".

In project management, surprises are (usually) bad. You want to avoid them. You can investigate issues and resolve questions, if you know about them. (These are the "known unknowns".) But you cannot (by definition) plan for the "unknown unknowns". If you plan for them, they become "known unknowns".

Project planning must include an evaluation of unknowns, and project process must account for them. Projects with few unknowns can be run with "divide and conquer" (or "waterfall") methods. These projects have few latent surprises.

Projects with many unknowns should be managed with agile techniques. These techniques are better at exploring, performing work in small steps and using the experience from one step to guide later steps. They don't provide a specific date for delivery of all features; they provide a constantly working product with features added over time. They avoid the "big bang" at the end of a long development effort. You exchange certainty of feature set for certainty of quality.

The Cult of Accountability will never accept agile methods. They must have agreements, specific and detailed agreements, up front. In a sense, they are planning to fail -- you need the agreements only when something doesn't work and you need a "fall guy". With agile methods, your deliverable always works, so there is no "accountability hunt". There is no need for a fall guy.

Wednesday, October 23, 2013

Healthcare.gov was a "moonshot", but the Moon mission was not

Various folks have referred to the recent project to build and launch the healthcare.gov web site as a "moonshot". They are using the term to describe a project that:

  • is ambitious in scope
  • has a large number of participants
  • occurs in a short and fixed time frame
  • consists of a single attempt that will either succeed or fail

We in IT seem to thrive on "moonshot" type projects.

But I will observe that the NASA Moon project (the Mercury, Gemini, and Apollo missions) was not a "moonshot". NASA ran the project more like an agile project than the typical waterfall project.

Let's compare.

The NASA Moon project was ambitious. One could even call it audacious.

The NASA Moon project involved a (relatively) large number of participants, including rocket scientists, metallurgists, electrical engineers, chemists, psychologists, biologists, and radio specialists. (And many more.)

The NASA Moon project had a fixed schedule of "by the end of the decade" assigned by President Kennedy in 1961.

The NASA Moon project consisted of a number of phases, each with specific goals and each with subprojects. The Mercury flights established the technology and skills to orbit the Earth. The Gemini missions built on Mercury to dock two vehicles in space. The Apollo missions used that experience to reach the Moon.

It's this last aspect that is very different from the healthcare.gov web site project (and also very different from many IT projects). The NASA Moon program was a series of projects, each feeding into the next. NASA started with a high-level goal and worked its way to that goal. They did not start with a "master project plan" that defined every task and intermediate deliverable. They learned as they went and made plans -- sensible plans, based on their newly-won experience -- for later flights.

The healthcare.gov web site is an ambitious project. Its launch has been difficult and has revealed many defects. Could it have been built in an agile manner? Would an agile approach have given us a better result?

The web site must perform several major tasks: authenticate users, verify income against government databases, and display valid plans offered by insurance companies. An agile approach would have built the web site in phases. Perhaps the first phase could be allowing people to register and create their profile, the second verifying income, and the third matching users with insurance plans. But such a "phased" release might have been received poorly ("what good is a web site that lets you register but do nothing else?") and perhaps not completed in time.

I don't know that agile methods would have made for better results at healthcare.gov.

But I do know that the Moon project was not a "moonshot".

Monday, October 14, 2013

Executables, source code, and automated tests

People who use computers tend to think of the programs as the "real" software.

Programmers tend to have a different view. They think of the source code as the "real" software. After all, they can always create a new executable from the source code. The generative property of source code gives it priority over the merely performant property of executable code.

But that logic leads to an interesting conclusion. If source code is superior to executable code because the former can generate the latter, then how do we consider tests, especially automated tests?

Automated tests can be used to "generate" source code. One does not use tests to generate source code in the same, automated manner that a compiler converts source code to an executable, but the process is similar. Given a set of tests, a framework in which to run the tests, and the ability to write source code (and compile it for testing), one can create the source code that produces a program that conforms to the tests.

That was a bit of a circuitous route. Here's the concept in a diagram:


     automated tests --> source code --> executable code


This idea has been used in a number of development techniques. There is test-driven development (TDD), extreme programming (XP), and agile methods. All use the concept of "test first, then code" in which tests (automated tests) are defined first and only then is code changed to conform to the tests.

The advantage of "test first" is that you have tests for all of your code. You are not allowed to write code "because we may need it someday". You either have a test (in which case you write code) or you don't (in which case you don't write code).
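Here is a tiny "test first" cycle, sketched in Python. (The feature -- turning a title into a URL slug -- is an invented example.)

```python
# Step 1: write the test before any implementation exists.
# At this point the test fails, because slugify() is not yet written.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Agile  Methods ") == "agile-methods"

# Step 2: write only enough code to make the test pass -- no
# "we may need it someday" extras.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()  # the test is green; the feature is done
```

Nothing in `slugify()` exists without a test demanding it, which is exactly the discipline "test first" imposes.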

A project that follows the "test first" method has tests for all features. If the source code is lost, one can re-create it from the tests. Granted, it might take some time -- this is not a simple re-compile operation. A complex system will have thousands of tests, perhaps hundreds of thousands. Writing code to conform to all of those tests is a manual operation.

But it is possible.

A harder task is going in the other direction, that is, writing tests from the source code. It is too easy to omit cases, to skip functionality, to misunderstand the code. Given the choice, I would prefer to start with tests and write code.

Therefore, I argue that the tests are the true "source" of the system, and the entity we consider "source code" is a derived entity. If I were facing a catastrophe and had to pick one (and only one) of the tests, the source code, or the executable code, I would pick the tests -- provided that they were automated and complete.