A recent study by Rally measured performance across thousands of projects. Performance was defined as the combination of Responsiveness, Quality, Productivity and Predictability.
- Responsiveness: time-to-market, or how long an item spends in progress
- Quality: defect density, i.e., the number of defects divided by person-days of effort
- Productivity: number of user stories completed relative to team size
- Predictability: throughput variability over time (e.g., how consistent the team's velocity was from iteration to iteration)
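These four metrics are simple ratios, so they are easy to compute from sprint data. Here is a minimal sketch (the function names and sample numbers are my own, and I'm assuming predictability is measured as the coefficient of variation of velocity, one common way to quantify throughput consistency):

```python
from statistics import mean, stdev

def responsiveness(cycle_times_days):
    """Average time an item spends in progress, in days (lower is better)."""
    return mean(cycle_times_days)

def quality(defects, person_days):
    """Defect density: defects per person-day of effort (lower is better)."""
    return defects / person_days

def productivity(stories_completed, team_size):
    """User stories completed per team member (higher is better)."""
    return stories_completed / team_size

def predictability(velocities):
    """Coefficient of variation of velocity across iterations
    (lower means more consistent throughput)."""
    return stdev(velocities) / mean(velocities)

# Hypothetical numbers for one team over four sprints:
print(responsiveness([4, 6, 5, 5]))                     # average cycle time in days
print(quality(defects=8, person_days=200))              # defects per person-day
print(productivity(stories_completed=24, team_size=6))  # stories per team member
print(predictability([20, 22, 19, 21]))                 # velocity consistency
```

Tracking these over several quarters is what lets you see the effects of the organizational choices described below.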
Their study produced a number of interesting findings that can be applied to how your teams are organized to maximize performance.
- Teams consisting of dedicated resources were twice as productive as those whose members were split across multiple teams or projects. Quality was also better for dedicated teams. The study also found that most teams are made up of dedicated resources.
- Stable teams (team members stay the same from one quarter to the next) were correlated with 60% better productivity, 40% better predictability, and 60% better responsiveness. The study found that most teams are not very stable, with an average churn of 25% (1 out of every 4 team members changing each quarter).
- The approach used for estimation had a huge impact on quality (number of defects). Teams that estimated user stories (story points) and then broke down and estimated the tasks for each story had 250% better quality than teams that did no estimates at all. They also had higher quality than teams that followed a "light-weight Scrum" approach of only estimating stories (no task breakdown), though the light-weight approach yielded better productivity and responsiveness.
- Finding the right balance in managing work in progress (WIP) is important. Teams with the least WIP per team member (everyone focused on only one task at a time) had the highest quality and best responsiveness (time-to-market for an individual story), but lower productivity than teams that allowed multiple tasks to be queued per person. I have always found that allowing up to 3 tasks per person works well: it controls the overall WIP of the team, but doesn't create scenarios where a single "blocker" can crush productivity.
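The "up to 3 tasks per person" rule of thumb translates into a simple team-level WIP cap; a tiny sketch (my own hypothetical helper, not something from the study):

```python
def team_wip_ok(tasks_in_progress, team_size, per_person_limit=3):
    """True if the team's total WIP is within the per-person cap."""
    return tasks_in_progress <= team_size * per_person_limit

# A 6-person team with a limit of 3 tasks each caps WIP at 18:
print(team_wip_ok(15, team_size=6))  # within the cap
print(team_wip_ok(20, team_size=6))  # over the cap
```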
- Team size didn't matter much, but teams of 5 to 9 people had the most balanced performance. I always say "less than 10," but it's not a hard rule. If it makes sense to have an 11- or 12-person team (to have proper cross-functional skills, because you only have one lead, etc.), do it.
- Two-week iterations performed better than one-, three-, or four-week iterations. I think the best choice varies by team and organization. Teams with solid test-build-deploy automation and dedicated, sufficient QA resources can benefit from shorter iterations far more than teams with significant, time-intensive regression testing to run each sprint. Understanding how much "iteration overhead" you have is a big driver in determining the appropriate iteration or sprint duration.
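One way to reason about "iteration overhead" is as the fraction of each sprint consumed by fixed per-iteration work (ceremonies, regression testing, deployment). A short sketch with made-up numbers:

```python
def overhead_ratio(fixed_overhead_days, iteration_days):
    """Fraction of an iteration spent on fixed per-iteration work."""
    return fixed_overhead_days / iteration_days

# The same 2 days of fixed overhead eats a bigger share of a shorter sprint:
print(overhead_ratio(2, 10))  # two-week sprint (10 working days)
print(overhead_ratio(2, 5))   # one-week sprint (5 working days)
```

When the ratio gets large for short sprints, a longer iteration (or investment in automation to shrink the fixed overhead) is usually the better trade.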
- Teams that are committed to regular retrospectives had 24% better responsiveness and 42% better quality. That makes sense: teams that focus on getting better sprint after sprint will be more efficient and will identify the root causes of quality issues.
My summary of their findings is that teams that are more disciplined about their software development and agile practices perform better (e.g., focus your resources, plan and break down your work, manage WIP, learn from each sprint).