Software Team Productivity

The following piece is from JD Roger, Director of Operations at Treehouse Technology Group.

Software team productivity is inherently difficult to measure, at least with quantitative metrics. Lines of code, bug rates, and similar counts are not necessarily good indicators of how well or poorly your software team is doing, especially when it is working on very complex problems. At Treehouse, we work hard to use metrics that give us the best indicators possible that we are making rapid progress for our clients. Some of the key metrics we monitor internally are:

  • Customer Satisfaction: The most important thing for us is that customers are happy with the work we are doing. Regular check-ins to confirm that the client feels we are making adequate progress are a crucial measure for our team. Our process ensures that we demonstrate progress to clients at least every two weeks, which gives us a natural touchpoint with them. We are also working toward surveying clients regularly to gauge satisfaction. A team that is satisfying a client is usually a productive team.
  • Peer Code Reviews: Much of the code that gets put into a project at our firm goes through a peer code review, and our CTO and other senior technical team members will also spot-check projects to ensure that code quality is being maintained. Taken together with customer satisfaction, we use these measures to ensure that we are building the right product, and building the product right.
  • Cycle Time: Once an issue is started, how long does it take to get to done? Once an issue is sent to be tested, does it linger in that status for too long? These times can point to problems, whether the cause is an engineering team taking too long because of poorly documented requirements, a QA team struggling with a lack of resources, or a project that is under-prioritized. Cycle time is an important measurement of progress (a rough sketch of how these timings might be computed follows this list).
  • QA Kickback Rate: Once a ticket is dev-complete, we count on our engineers to ensure that the feature works. Once they are confident, they will push the issue to our QA team for review. Kickbacks from QA to the engineering team are common, but if we see a significant number of issues (especially simple issues) being kicked back more than once, that is a leading indicator of problems with the engineering team’s effectiveness and productivity.
  • Time Logs versus Historical Data: Time logs provide valuable data for measuring team productivity. Any one user story may take far longer or far less time than the median, but if we see a large number of stories taking longer than the median, it is an indicator that the team may not be performing as well as it should. On the other hand, if a team is consistently taking less time than the median, it indicates either a highly performant team or a team that is padding estimates. Changes in either direction warrant investigation to ensure team productivity.
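To make a few of these ideas concrete, here is a minimal sketch, in Python, of how cycle time, a repeat-kickback rate, and a median comparison could be computed from issue-tracker records. The Issue fields, the sample data, and the 1.5x threshold are illustrative assumptions, not a description of any particular tool or of our internal tooling.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

# Hypothetical issue-tracking records; field names are illustrative,
# not taken from any particular tracker's API.
@dataclass
class Issue:
    key: str
    started: datetime     # when work on the issue began
    done: datetime        # when the issue reached "done"
    qa_kickbacks: int     # times QA sent it back to engineering
    logged_hours: float   # total time logged against the issue

def cycle_time_days(issue: Issue) -> float:
    """Elapsed days from 'started' to 'done'."""
    return (issue.done - issue.started).total_seconds() / 86400

def repeat_kickback_rate(issues: list[Issue]) -> float:
    """Share of issues kicked back from QA more than once."""
    repeat = sum(1 for i in issues if i.qa_kickbacks > 1)
    return repeat / len(issues) if issues else 0.0

def flag_slow_issues(issues: list[Issue], factor: float = 1.5) -> list[str]:
    """Issues whose cycle time exceeds the team median by a chosen factor."""
    times = [cycle_time_days(i) for i in issues]
    med = median(times)
    return [i.key for i, t in zip(issues, times) if t > factor * med]

# Example usage with made-up data
issues = [
    Issue("APP-101", datetime(2023, 5, 1), datetime(2023, 5, 4), 0, 10.0),
    Issue("APP-102", datetime(2023, 5, 2), datetime(2023, 5, 12), 2, 30.0),
    Issue("APP-103", datetime(2023, 5, 3), datetime(2023, 5, 6), 1, 12.0),
]
print(f"Repeat-kickback rate: {repeat_kickback_rate(issues):.0%}")
print("Slow issues:", flag_slow_issues(issues))
```

In practice, records like these would be exported from an issue tracker, and the threshold for flagging slow issues would be tuned against the team's historical distribution rather than fixed at an arbitrary multiple.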

At the end of the day, our goal is to be fair to our engineering team and our clients. We know that every project, and every issue within a project, is different, and complexities arise even when the team is working on something that should be simple. Everything we build is new in some way, and because of this it is often hard to measure or predict, but we do our best to ensure that our process is as predictable, measurable, and repeatable as possible.
