Thursday, June 01, 2006

Agile Development Ramblings: Part II
The Process
In this project I was an Architect / Developer. The rest of the development team comprised a developer / SCRUM “Master” (see below), a Project Manager, a Business Analyst, a Data Architect, and an Agile Process Consultant. As with all projects, we reported into a Programme Manager. This was the first agile project for this customer. Consequently, we were proving the process as well as the product.

The Agile methodology we used was a mixture of SCRUM and DSDM. DSDM is an iterative methodology based on the core concepts of time-boxed, prioritised development.

Elements of SCRUM were used to augment the internal iteration process and structure our day-to-day existence: progress was followed using a “Burn Down Chart”, and “Product” and “Sprint” Backlogs were created (although these used the DSDM MoSCoW methodology). See below for more detail on all these elements.

Notes on SCRUM
All the work to be done on a SCRUM project is recorded in the “Product Backlog”. This is a list of all the desired features for the final product. At the end of each Sprint a “Sprint Review Meeting” (see below) is held, during which the Ambassador Users – in our case, representatives of each of the organisations which will use our system – prioritise the remaining items on this Backlog.

This prioritised feature list is then taken by the Development Team who select the features they will target to complete in the next Sprint. These features are then moved from the Product Backlog to the “Sprint Backlog”.

2 Week Sprints
In SCRUM, iterations are called “Sprints”. Ours were 2 weeks long. (The length can be varied, but it is recommended that it stay the same throughout the project.) This worked well for us in a number of ways: the Sprints were regular, so the Ambassador Users (AUs) could predict when they had to travel to London; they were long enough that a real chunk of functionality could be developed and then demoed, but short enough that everything done in development had to be relevant. They meant we stayed very focused throughout the 14 weeks.

The Sprint Plan, Estimation, and the Burn Down Chart
During the Sprint the team stays on track by holding brief daily meetings. The first meeting after the Sprint Planning Meeting is slightly different. Here, the team works down the Ambassador User-prioritised feature list, breaking each feature into manageable, unitary tasks. Each of these tasks is then entered into the Sprint Plan’s “Sprint Backlog” sheet as an individual item (see below). In addition, meetings, demos, and any other demands on team members’ time are added. It is important that all time spent during the project is tracked on the Sprint Plan.

The Sprint Plan is a very simple yet highly effective thing: an Excel spreadsheet. The “Sprint Backlog” page is where all the action happens. First, the start and end dates of the Sprint are entered; these are used to calculate the time available. Each member of the development team (including project managers and business analysts) then enters their details in the top right-hand corner: their initials (used later in task allocation), their predicted availability for the project (e.g. a full-time developer is usually 90%), and any unavailable hours (e.g. holidays), which are recorded as a negative number and deducted from the total available time. This information is then used to calculate the time each member has available to contribute to the project that Sprint (the “Avail” row).
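The “Avail” calculation described above can be sketched in a few lines. This is only an illustration of the arithmetic, not the actual spreadsheet: the function name, field names, and the 7.5-hour day are all assumptions.

```python
# Hypothetical sketch of the Sprint Plan "Avail" calculation.
# All names and numbers here are illustrative, not the real spreadsheet's.

def available_hours(sprint_days, hours_per_day, availability, unavailable_hours):
    """Hours a team member can contribute this Sprint.

    availability      -- fraction of their time on the project (e.g. 0.9)
    unavailable_hours -- holidays etc., recorded as a negative number
    """
    total = sprint_days * hours_per_day * availability
    # Unavailable hours are negative, so adding them deducts them from the total
    return total + unavailable_hours

# A 90%-available developer on a 10-working-day Sprint with one day's holiday:
print(available_hours(10, 7.5, 0.9, -7.5))  # 60.0
```

Recording holidays as a negative number, as the spreadsheet does, keeps the deduction a simple sum.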

The first allocation and estimation of each task then takes place. The team as a whole works through each of the entered task items in turn. For each, an “Owner” is allocated, who then estimates the time in hours they feel it will take them to complete the task.

The final step before the initial Sprint Planning is complete is to check that no team member is allocated more hours of work than they are capable of completing. This can be seen by comparing the “Avail” and “Alloc” rows for each member. If there is an imbalance in allocations, tasks can be reallocated. If reallocation is impossible, then task items for the overworked member can be taken out of scope.
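The “Avail” versus “Alloc” check above amounts to summing each owner's estimates and flagging anyone over their available hours. A minimal sketch, with invented initials and figures:

```python
# Illustrative Avail-vs-Alloc check; the data here is made up for the example.

avail = {"PJ": 60.0, "AB": 67.5}                     # hours available this Sprint
tasks = [("PJ", 40.0), ("PJ", 30.0), ("AB", 50.0)]   # (owner, estimated hours)

# The "Alloc" row: total hours allocated to each owner
alloc = {}
for owner, hours in tasks:
    alloc[owner] = alloc.get(owner, 0.0) + hours

# Anyone whose allocation exceeds their availability, and by how much
overworked = {m: alloc[m] - avail[m] for m in alloc if alloc[m] > avail[m]}
print(overworked)  # {'PJ': 10.0} -- PJ is over-allocated by 10 hours
```

In the spreadsheet this comparison is done by eye; the point is that the check is cheap enough to repeat at every daily meeting.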

Finally, the effect of all this information can be seen simply and graphically by looking at the Burn Down Chart (“Sprint Chart”), which plots total estimated work remaining versus the time left in the Sprint. At a glance this lets everyone see the current estimated time versus the available time. Each of these variables has its own line. If your “estimated time” line is below the “available time” line then you're very happy. Too far below, and more tasks (and therefore features) can be brought into scope so everyone is well utilised. In the nightmare scenario where you're estimating above the “available time” line, you know (or at least estimate) that you have more work than there is time to do it. In this case you generally give things a few days to see if you can catch up with the line. If this fails, the team needs to discuss and take task items out of scope.
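The two lines on the chart are easy to reproduce. A minimal sketch, assuming invented day-by-day re-estimates and a flat team capacity:

```python
# Sketch of the two Burn Down Chart lines: estimated work remaining versus
# time remaining, day by day. All numbers are invented for illustration.

# Sum of the team's re-estimates at the start of each day (hours)
estimated_remaining = [180, 170, 150, 140, 110, 95, 70, 55, 30, 10]

team_capacity_per_day = 20.0   # hours the whole team can work per day
days = len(estimated_remaining)

for day, estimate in enumerate(estimated_remaining):
    # The "available time" line falls linearly to zero at the Sprint's end
    available = (days - day) * team_capacity_per_day
    status = "under" if estimate <= available else "OVER"
    print(f"day {day + 1}: estimate {estimate:5.1f}h, "
          f"available {available:5.1f}h ({status})")
```

A day marked “OVER” is the nightmare scenario described above: the estimate line has crossed above the available line, and descoping may be needed.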

Because the estimation process is so simple and quick once the information is first entered, it is possible to repeat it every day – first thing in the morning is best. Each task that is in scope and not yet completed has its time to complete re-estimated. If a task has been completed since the last meeting, its estimate is set to “0”. As before, individual workloads can be checked, and reallocation and de-/re-scoping used to ensure all workloads are manageable. As with the first session, the meeting ends with a look at the Sprint Chart and its time lines.

This construction and updating of the Sprint Plan, especially the viewing of “the line”, had the very pronounced effect of fostering a shared sense of responsibility in the team for everything everyone was doing. It was also very visible. Discussions which might previously have happened in hidden Project Manager cabals were out in the open and could be had by all. The whole team could not avoid the fact that things needed to react to the current state of progress. This all resulted in a group who were very focused on delivery. Not only that, but everyone was aware of what had to be delivered and what the workloads were like to reach that point. Everyone was responsible for the Sprint's success.

Is the fact that everyone has to attend a meeting a problem in itself? I'd say no. Things were always relevant – again, the Sprint Plan ensured that. They were also quick: fifteen minutes was usually all it took. Anything longer was when something of great importance to all had to be discussed.

Sprint Review Meetings
At the end of each Sprint the team demonstrated the completed functionality of the real system to the assembled AUs, and outstanding and new features were discussed and prioritised (see below). We did this by showing the current system in all its glory. It was my fear going into the first of these meetings that there would be nothing to show. I was wrong. What little we had done was ecstatically received. We had started to build what had been asked for, no matter how little we had actually achieved.

What's more, in the process of doing this we (the developers) had to explain our system. We asked questions to clarify our understanding of areas of the business domain (which were many to begin with). We even got a user up to show us how they would do their job on our fledgling system.

It is worth noting that when I and the other developer joined the project, things had been running for a while. The Ambassador Users had been enlisted months before and had been along to a few meetings where the process had been explained to them and they had discussed and initially prioritised the planned high-level features. Feedback gathered at the end showed this had left them confused and demotivated. Even when we did join the project, we did not meet them for three weeks, which were spent setting up our development environments (1 week) and developing the first iteration (2 weeks). It transpired that show and tell was by far the best way to engage them and keep them interested.

However, over time, all the project team developed a relationship and a shared sense of responsibility with the AUs. They felt engaged, and we gained an understanding of who they were and how they wanted to use the new system. This was particularly useful in our situation, as we had such a diverse user base, drawn from seven different organisations, each of which needed to use the system in a different way. In addition, we couldn't obfuscate and hide behind excuses and jargon, because they were sitting in front of us. Similarly, they had to think about what they wanted: if we gave them something they had asked for and it turned out they had explained it badly, they realised the benefit of carefully thinking things through when making requests.

Feature Prioritisation with MoSCoW
“MoSCoW” is the technique DSDM uses to prioritise requirements. It is an acronym that stands for:
  • MUST have this requirement to meet the business needs.

  • SHOULD have this requirement if at all possible, but the project's success does not rely on this.

  • COULD have this requirement if it does not affect the fitness of business needs of the project.

  • WOULD have this requirement at a later date if there is some time left (or in the future development of the system).
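One appeal of MoSCoW is that the four levels form a simple ordering, so a backlog can be ranked mechanically. A small sketch, with a made-up backlog and representation:

```python
# Illustrative MoSCoW ordering of a Product Backlog. The features and the
# (priority, feature) representation are invented for this example.

MOSCOW_ORDER = {"MUST": 0, "SHOULD": 1, "COULD": 2, "WOULD": 3}

backlog = [
    ("COULD", "Export reports as CSV"),
    ("MUST", "User login"),
    ("WOULD", "Customisable themes"),
    ("SHOULD", "Audit trail"),
]

# Sort so the MUSTs come first and the WOULDs last
backlog.sort(key=lambda item: MOSCOW_ORDER[item[0]])
for priority, feature in backlog:
    print(f"{priority:6} {feature}")
```

In practice, of course, the ordering was done by the Ambassador Users in discussion, not by a program; the point is only that the priorities are unambiguous.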

These criteria were used by the Ambassador Users to prioritise the outstanding elements on the Product Backlog at the end of each Sprint. This then informed the features which we, the developers, picked off for development in each Sprint.

The approach benefits from being simple. Everyone could use it and, more importantly, could remember the terminology. They were also aware of the ramifications of allocating the different levels of priority to their requirements. Not everything asked for was developed.
