<h1>12 Common mistakes when using Process Metrics</h1>
<p><em>Published October 26, 2017 at blog.plataformatec.com.br</em></p>
<p>We’ve been advocating in favor of using metrics for a while now, and we have built a lot of content about them. However, we have seen teams that actively use metrics and still don’t get the desired results.</p>
<p>Here I compile the most common mistakes teams make when using metrics, so you’ll know what not to do when adopting them.</p>
<h2>#1 – Being reactionless</h2>
<p>If you collect metrics, you <strong>need</strong> to use them somehow. One of the most common mistakes is doing nothing about the metrics you are seeing. Metrics can give you process improvement insights, but to react to them you need to understand how they work and what they mean. For more information on how to use them, check the other metrics posts on this blog.</p>
<h2>#2 – Using metrics with too little data</h2>
<p>Another mistake is to use metrics without enough data to support them. At the beginning of a project, or when you have just started using metrics, it is tempting to draw conclusions right away. With a small amount of data, however, metrics aren’t that trustworthy. A good hint is to check whether they are still changing over time or have become steadier: once they stabilize, you are probably ready to analyze them and act on the results. Another hint is to try daily metrics; see our blog post “Pros and cons of using daily metrics”.</p>
<h2>#3 – Using metrics without considering their context</h2>
<p>This is a very common mistake, not only in software development but in any statistically built conclusion. Numbers alone are only useful for algebra or calculus; you need the context they are inserted in to understand them. With that in mind, to say whether your throughput is healthy, or whether your lead time variance is big, look. at. the. context.</p>
<h2>#4 – Comparing metrics between teams</h2>
<p>This problem is related to #3. It doesn’t make any sense to say something like “team A is doing better than team B because they have a greater throughput”. What if team A has 10 people and team B has 3?</p>
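To make the team-size point concrete, here is a minimal sketch in Python with made-up numbers. It shows how raw throughput can favor the larger team even when the smaller team delivers more per person; note that even per-person numbers still ignore the nature of the work, so this is an illustration, not a recommended comparison:

```python
# Hypothetical numbers: raw throughput favors team A, but team sizes differ.
team_a = {"people": 10, "stories_per_week": 12}
team_b = {"people": 3, "stories_per_week": 6}

def per_person(team):
    """Normalize weekly throughput by team size."""
    return team["stories_per_week"] / team["people"]

print(per_person(team_a))  # 1.2
print(per_person(team_b))  # 2.0
```

Team A “wins” on raw throughput (12 vs 6), yet team B delivers more stories per person, which is exactly why the naive comparison says nothing.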
<p>What if team A is working on a simple website while team B is tackling complex deep learning problems? Be careful.</p>
<h2>#5 – Having individual metrics</h2>
<p>This is a micromanaging problem. When trying to improve the process as much as possible, people end up measuring individual metrics. This problem is related to #3 and #4: you cannot compare different people. Just don’t. If you do, you’ll harm your team’s environment and even decrease their productivity. Process metrics should be used to understand the health of the <strong>process</strong>.</p>
<h2>#6 – Simplifying metrics reading too much</h2>
<p>Still related to context, but now the context of the numbers. If you tell other people that a team delivers on average 4 stories a week, you are saying nothing. Maybe its delivery data is {0, 0, 0, 0, 0, 24}; maybe it is {4, 4, 4, 4, 4, 4}. So make sure you are not oversimplifying your metrics reading, and that the whole toolset builds the picture you are trying to draw.</p>
<h2>#7 – Using only the metrics you want</h2>
<p>Related to the previous mistake, sometimes we see people who have to report their metrics to higher management showing only the “good numbers”. If throughput increases, they only show the throughput increase; if the backlog decreases, they just show that. Be transparent. Otherwise, management will be living a lie and, when the truth comes up, you won’t be able to explain yourself.</p>
<h2>#8 – Using only the data you want</h2>
<p>Another related problem is using only the data you want. Cropping your data to get only relevant results, such as considering only the last 2 months of a 10-month project, can make sense, since the context of a project changes over time and you want the right context in your results. However, be very careful when doing that, because you may hide a lot of useful information from yourself.</p>
<h2>#9 – “Cooking” metrics</h2>
<p>This is by far the most common mistake I see. People are afraid of showing bottlenecks or problems and end up masking their results.</p>
<p>People group stories into one to shrink their backlog, split stories to inflate their throughput, or even change story points to keep their velocity. By doing that, you are only fooling yourself, not improving your process.</p>
<h2>#10 – Having metrics-based goals</h2>
<p>This is a controversial topic. Metrics weren’t made to be goals; they were created to help the team understand the process and improve its flow. Metrics-related goals may put too much pressure on the team and have the opposite effect, making one metric reach the desired value while hurting others. So, before defining a metric-based goal, consider whether it really needs to be that “low-level” and, if it does, confirm with the team whether the goal is actually reachable or whether some impediment inherent to the process stands in the way.</p>
<h2>#11 – Trying to decrease lead time at all costs</h2>
<p>Lead time (LT) is a metric used to understand how long an item takes to pass through your process. There are two different questions you can ask the data:</p>
<ul>
<li>How long does an item take to be delivered, on average?</li>
<li>How predictable is that delivery time?</li>
</ul>
<p>The second is often much more important to a team than the first. Having an LT distribution of {0, 5, 2, 8} is usually worse than having one like {4, 4, 4, 4}, because with the first your data is not reliable enough to make predictions, while with the second you have a much better chance of getting them right. However, don’t forget to look at your context.</p>
<h2>#12 – Trying to increase throughput instead of being more efficacious</h2>
<p>Being efficacious means doing the right thing. Throughput is not important if you are not doing the right thing. So focus on your product and on what is best for it before caring about how fast you will deliver it.</p>
<p>What do you think of these mistakes? Have you ever made one of them? Leave your comments below!</p>
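The distributions mentioned in #6 and #11 can be checked with a few lines of Python. Here is a minimal sketch using the standard `statistics` module, with the hypothetical numbers from the text, showing why an average alone says nothing about the shape of your delivery data:

```python
from statistics import mean, pstdev

# Mistake #6: the same average hides very different delivery patterns.
bursty = [0, 0, 0, 0, 0, 24]
steady = [4, 4, 4, 4, 4, 4]
print(mean(bursty), mean(steady))      # identical averages: 4 stories/week
print(pstdev(bursty), pstdev(steady))  # spread: roughly 8.94 vs exactly 0

# Mistake #11: a slightly lower mean lead time is not worth an
# unpredictable one; the spread is what makes forecasts trustworthy.
irregular_lt = [0, 5, 2, 8]
stable_lt = [4, 4, 4, 4]
print(mean(irregular_lt), pstdev(irregular_lt))  # lower mean, high spread
print(mean(stable_lt), pstdev(stable_lt))        # higher mean, zero spread
```

Both pairs have nearly the same mean, but the standard deviation exposes the difference: the bursty and irregular series are the ones you cannot make reliable predictions from.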