Codermetrics.org

Sharing information on codermetrics

Improving the Temperature Metric


In the book Codermetrics, I introduced a metric called Temperature to measure how ‘hot’ or ‘cold’ a coder is at any given time, which can be useful to know when you are planning new work or analyzing team performance.  The concept is borrowed from Bill James, who introduced Temperature as a metric for measuring hot and cold streaks in baseball (you can read an overview of the baseball version here or watch a video of Bill James explaining it here).  The formula that I used in the book sets the starting Temperature for any developer at 72 degrees (“room temperature,” also borrowed from Bill James), and then moves the Temperature up or down based on the percentage change in “Points” accumulated by the individual in each development iteration.  So the formula looks like:

Current Temperature = Previous Temperature * (Points in The Most Recent Iteration / Points in The Prior Iteration)
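
To make the calculation concrete, here is a minimal Python sketch of this iterative formula.  The Points list is hypothetical example data, not the figures from the book:

    def temperatures(points, start=72.0):
        """Book formula: scale the previous Temperature by the ratio of
        the two most recent iterations' Points."""
        temps = [start]
        for prev, curr in zip(points, points[1:]):
            temps.append(temps[-1] * (curr / prev))
        return temps

    # Hypothetical Points per iteration (not the article's actual data)
    print(temperatures([16, 24, 20, 28]))  # [72.0, 108.0, 90.0, 126.0]

Note that the product telescopes: the current Temperature is always 72 times the ratio of the latest iteration’s Points to the first iteration’s Points, which is exactly the sensitivity to early iterations that Alex observed.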

A reader, Alex Mera, recently pointed out that this formula is flawed in that it is entirely relative to each individual and that significant differences in early iterations make Temperature ratings difficult to compare.  As Alex correctly stated, for example, “scoring low on the first iteration will raise your temperature on every subsequent iteration.”  While the formula can show you the trend for each developer, two people performing similarly might have two very different Temperatures (based on the results of much earlier performance) and two people with the same Temperature might actually have very different recent performance.

Take for example two people, Coder A and Coder B, who have the following Points over twelve development iterations (you can assume for this example that Points are assigned to each completed task based on its complexity):

In the example data, the Points per iteration are not that different except in a few places.  In the first two iterations, each coder has one iteration where the Points are 16: for Coder A it’s the first iteration, and for Coder B it’s the second.  The other noticeable difference is a rise in Coder B’s Points in later iterations.

If you use the Temperature metric as calculated in the book for this data, then the results look like this:

Although the Points were similar, the Temperatures are very different.  This is because the “baseline” for each coder’s “room temperature” is the number of Points in their first iteration, which for Coder A was much lower than for Coder B, resulting in much higher Temperatures overall for Coder A.  In later iterations, when Coder B is clearly “hotter” than Coder A, Coder B’s Temperature is still lower.  You can see the trends, and you could say that both Coder A and Coder B are “hot” when their Temperatures are, for example, above 90 degrees, but the difference highlights the kind of problem that Alex noted.

So how can you improve this?  One technique would be to set the baseline “room temperature” in a different way.  For example, if you knew that 24 Points was the average for a developer in each iteration, you could treat 24 Points as room temperature and compare the Points in every iteration to that.  You might get this baseline by taking the average for all the individuals on the team (maybe just for recent iterations, or maybe over a longer period), or you might use only the average for each individual (in other words, compare each individual to their own average).  While there are a number of variations you could use, each with different benefits, the general approach can be described with the following formula:

Current Temperature = Room Temperature * (Points in the Most Recent Iteration / Points for Room Temperature)
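
A minimal sketch of this fixed-baseline variant, again with made-up numbers:

    def temperature_fixed_baseline(points, room_temp=72.0, room_points=24.0):
        """Fixed-baseline variant: every iteration is compared to the same
        room-temperature Points value, so results are comparable across coders."""
        return [room_temp * (p / room_points) for p in points]

    print(temperature_fixed_baseline([16, 24, 20, 28]))  # [48.0, 72.0, 60.0, 84.0]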

If you set room temperature to 72 and the points for room temperature to 24 for both coders, then using the example data above you get the following results:

This appears to be a much “fairer” way to calculate Temperature, and it provides a good way to compare individuals since all Temperatures are relative to the same baseline.  The variance caused by differences in the initial iterations is gone.  Also, using this type of approach you are better able to evaluate what 80 degrees means versus 70 degrees or 90 degrees.  On the negative side, however, you’ll notice that the graph of Temperature looks almost exactly like the graph of Points above.  All you’ve really done is translate Points into a different metric, which may provide a different way to analyze Points, but the Temperature rises and dips exactly as the Points do.

So another improvement to consider would be to look at a group of recent iterations together, as opposed to one at a time.  Since Temperature is meant to measure “hot” and “cold,” it makes sense that it should focus on trends and not just on one period.  To do this, you can use moving averages, which modifies the formula to the following:

Current Temperature = Room Temperature * (Moving Average of Points in Recent Iterations / Points for Room Temperature)
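
Here is a sketch of the moving-average variant; the window of three iterations matches the example below, but the window size is a tunable choice:

    def temperature_moving_avg(points, room_temp=72.0, room_points=24.0, window=3):
        """Moving-average variant: smooth the Points over up to `window`
        recent iterations before comparing to the room-temperature baseline."""
        temps = []
        for i in range(len(points)):
            recent = points[max(0, i - window + 1): i + 1]
            avg = sum(recent) / len(recent)
            temps.append(room_temp * (avg / room_points))
        return temps

    # Hypothetical data; early iterations average over fewer values
    print(temperature_moving_avg([16, 24, 20, 28, 26, 30]))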

If you set room temperature to 72 and the points for room temperature to 24, and then you calculate the moving average over the three most recent iterations at every point, your results will look like this:

This approach, which uses a common baseline for room temperature and smooths the data with moving averages, probably gives you the best result and the best answer to Alex’s concern.  Temperature is more comparable this way, and less subject to isolated bursts or dips.

Other improvements that could be considered:

  • Rather than using Points alone, calculate Temperature from a combination of metrics, letting other positive contributions (like helping others) raise an individual’s Temperature and negative outcomes (like attributable production bugs) lower it.  This would require a more complex formula for increasing and decreasing Temperature, so while I think the idea has merit, given the added complexity I decided not to delve into the details in the book or here.  (I may do that at a later time, but my main interest so far has been to share the idea of Temperature and “hot” and “cold” streaks for software engineers.)
  • Rather than calculating a single moving average of recent iterations, you could combine multiple moving averages with different weights.  For example, you could weight the moving average of the three most recent iterations at 75% and the moving average of the three prior iterations at 25%, feeding a longer trend into the Temperature formula (a sketch of this variant follows this list).
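
A minimal sketch of the weighted variant described in the second bullet; the 75/25 weights and the data are just the example’s assumptions:

    def temperature_weighted(points, room_temp=72.0, room_points=24.0):
        """Weighted variant: 75% weight on the average of the three most
        recent iterations, 25% on the average of the three before that."""
        recent = points[-3:]
        prior = points[-6:-3]
        blended = 0.75 * (sum(recent) / len(recent)) + 0.25 * (sum(prior) / len(prior))
        return room_temp * (blended / room_points)

    # Needs at least six iterations of (hypothetical) data
    print(temperature_weighted([16, 24, 20, 28, 26, 30]))  # 78.0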

For what it’s worth, all of the possible improvements mentioned in this article are in line with the techniques used by Bill James in his baseball Temperature metric.  Other, more detailed tweaks could be identified, too.  But as usual I suggest starting with a simple approach that’s understandable and explainable for you and your team, knowing that you can add more complex metrics later.

Written by Jonathan Alexander

April 23, 2012 at 5:12 pm

Posted in In The Field

Look Out for High Variance


We’re at the start of another major league baseball season.  Hope springs eternal for every team, though for some more than others.  Here’s a simple question:  why are some fans more hopeful or confident about their team’s chances of success, while others are less hopeful or downright pessimistic?  The answer has to do with a simple concept we can call Variance, and it’s a useful concept to consider in the analysis of any team, including software teams.

The concept of Variance is tied directly to track record and predictability.  An individual (or team) with a consistent track record as measured for a specific area or skill over time, and for whom factors have not changed significantly, can be categorized as Low Variance.  A Low Variance individual or team, therefore, is more predictable, meaning that their consistent track record is more likely to be repeated in the “near-term” future.  Alternatively, an individual (or team) with an inconsistent or insufficient track record, or for whom critical factors related to performance have changed significantly, can be categorized as High Variance.  A High Variance individual or team is less predictable, meaning that there is a greater likelihood that they will do much better or much worse than might be guessed from what was seen before.  In other words, High Variance means that there is a much wider range of outcomes, good or bad, that have a higher probability to occur.

A simple way to categorize whether an individual or team is High or Low Variance would be to assign subjective percentages to how likely they are to perform much better, similar to, or much worse than before.  For example, a Low Variance individual or team might be projected as:

  • 15% Likely to Perform Significantly Better Than Recent History
  • 70% Likely to Perform Similar To Recent History
  • 15% Likely to Perform Significantly Worse Than Recent History

Alternatively, a High Variance individual or team might be projected as something like this:

  • 30% Likely to Perform Significantly Better Than Recent History
  • 40% Likely to Perform Similar To Recent History
  • 30% Likely to Perform Significantly Worse Than Recent History

In the Low Variance example, the total probability of variance from recent history is placed at 30%, while in the High Variance example the variance probability is 60%.  By definition, High Variance has a higher chance of “upside” but a higher chance of “downside” too.

So what does all this have to do with baseball or software teams?  The answer is that while having more High Variance individuals may make you feel more hopeful about your team’s upside (hello, Royals and Astros fans), the probability of multiple “risks” paying off is low.  Having more Low Variance individuals (hello, Yankees and Red Sox fans) is a much better prescription for repeatable success.

To illustrate, suppose there are two teams with twelve individuals each.  These teams are made up of veterans with consistent track records, some high performing and some medium performing, plus “unproven” team members who don’t have a significant track record and who lack backgrounds that would make you highly confident in their success.  Let’s call the first team “Sky’s-The-Limit” because they have a lot of unproven individuals who they are hoping will be a big success.  Let’s call the second team “Steady-As-She-Goes” because they have a lot of medium performing veterans.  The team make-ups are:

  • Sky’s-The-Limit team members:  1 high performing, 2 medium performing, 9 high potential
  • Steady-As-She-Goes team members:  1 high performing, 8 medium performing, 3 high potential

The question is:  which team is likely to have better results?

Assume that 1-in-3 of the High Variance individuals becomes a high performer, 1-in-3 becomes a medium performer, and 1-in-3 becomes a low performer.  Then the results would be:

  • Sky’s-The-Limit results:  4 high performing, 5 medium performing, 3 low performing
  • Steady-As-She-Goes results:  2 high performing, 9 medium performing, 1 low performing
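
A quick sketch of the arithmetic behind these projections, applying the 1-in-3 assumption stated above:

    def project(high, medium, high_potential):
        """Spread each high-potential (High Variance) member evenly across
        high/medium/low outcomes, per the 1-in-3 assumption."""
        split = high_potential / 3
        return (high + split, medium + split, split)  # (high, medium, low)

    print(project(1, 2, 9))  # Sky's-The-Limit:    (4.0, 5.0, 3.0)
    print(project(1, 8, 3))  # Steady-As-She-Goes: (2.0, 9.0, 1.0)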

One view of these results would be that “the two teams perform exactly the same because the high-performing individuals make up for the low-performing ones.”  But the reality on many teams is that the problems and issues related to low-performing individuals are disproportionate.  It’s the “weak links in the fence” theory: teams with the greater number of low performers are dragged down by those individuals.  In this view, the Steady-As-She-Goes team is stronger, in that it has a significantly smaller percentage of low performers (8% vs. 25%).

To the extent that you can tolerate a certain number of low performers and still succeed, maybe because you make up for it in numbers or with enough known high performers, having a bunch of High Variance individuals who could end up as low performers might not be a concern for you.  It could even be a deliberate part of your team-building and success-building strategy, like the venture capitalist strategy of betting on a bunch of long-shots in hopes that one or a few will pay off big.  But, in general, look out for pinning your team’s hopes on too many High Variance individuals.  A set of steady performers is an important part of most successful teams, while teams that take too many personnel risks usually don’t get positive results.

To read how some High Variance players might factor into the chances of winning for your favorite baseball team, you can check out Jonah Keri’s article on the subject at Grantland.com.  If you do have a rooting interest, here’s hoping that your baseball team has a great season.

Written by Jonathan Alexander

April 11, 2012 at 4:38 pm

Posted in New Ideas

Part 2 Article on Moneyball


Today O’Reilly Radar published my follow-up article “Moneyball for software engineering (Part 2).”  While the previous article introduced concepts of how Moneyball-like metrics can be applied to improve software development teams, this article focuses more on the process and techniques for getting started with metrics.

Written by Jonathan Alexander

January 30, 2012 at 9:19 am

Posted in Publications