In my last column, we talked about three steps for a process and how important that last step, “Make it better,” is. What we didn’t talk about is what we mean by “better.” The short answer to “How do we know a process change is better?” is “It depends.” There are many different ways you can measure or monitor a process to see if it’s better, but what is the right way?
In general, there are three things to look at when deciding whether a process has improved: time, quality, and risk. Is your process going faster? Is your process producing higher-quality output? Is your process less risky? These are all valid ways to measure a process, but you need to make sure you are measuring the right thing, at the right level.
When you look at the time aspect of improvement, where you measure matters. As we know from Eliyahu Goldratt’s novel The Goal, any improvement made anywhere besides the bottleneck is an illusion. Speeding up a component that doesn’t speed up the overall process might feel nice, but it isn’t making the process better. So when you are measuring improvements to process speed, make sure you are looking at the overall throughput of the process, not the speed of an individual component.
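Goldratt’s point can be sketched with a toy model (the stage names and rates here are hypothetical, not from the column): in a sequential process, overall throughput is capped by the slowest stage, so doubling the speed of any other stage leaves the end-to-end number unchanged.

```python
# Toy model of a sequential pipeline: work passes through every stage,
# so overall throughput is capped by the slowest stage (the bottleneck).

def throughput(stage_rates):
    """Overall items/hour for a sequential pipeline."""
    return min(stage_rates)

# Hypothetical rates for a three-stage process: intake, build, review.
rates = [40, 10, 25]  # items/hour; 'build' at 10 is the bottleneck
print(throughput(rates))  # 10

# Doubling the fastest stage changes nothing overall...
print(throughput([80, 10, 25]))  # still 10

# ...while even a modest improvement at the bottleneck helps.
print(throughput([40, 15, 25]))  # 15
```

The local “improvement” to intake looks great on that stage’s own dashboard, which is exactly why measuring at the component level is misleading.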
Quality is always important, and I’d be reluctant to turn down a quality improvement. But when you are looking to improve quality, you need to determine whether it is an improvement the user cares about. Apple improves the quality of its iPhone processors every year, but would users care if it improved the quality of the box the phone comes in? There might be a measurable quality improvement in the packaging, but if the user doesn’t notice or care, is it really worth doing?
And let’s not forget risk. What could go wrong with reducing risk? I had a job years ago where I had to run a build twice a month on an old OS/2 server. It was a legacy process that was being replaced with a Linux version. But until the OS/2-to-Linux migration was done, even turning off the OS/2 machine was risky: we had only one machine to run the build on, and if anything happened to it, we had no easy way to replace it. I also did a Windows build of the same software. If anything happened to the Windows machine, I could have it replaced and ready to go in minutes. So reducing the risk of the OS/2 machine going down was extremely important, while reducing the risk of the Windows machine going down had very little utility. Not all risk is equally risky.
And to finish off our discussion, it will rarely be as simple as improving time, quality, or risk in isolation. More likely you’ll be making tradeoffs. I can make the process faster, but that might reduce quality or increase risk. I can increase quality, but it will slow down the process. This is when you really need to have discussions with all stakeholders: what are the tradeoffs of your improvements, and are they really worth making?
The biggest thing to remember about process improvements is that just because you can improve a process doesn’t mean it’s worth doing. This is why I like the ideation approach from Eric Ries’ The Lean Startup: experiment with lots of little improvements instead of focusing on a few big ones. The more improvements you try, the more likely you are to find some that actually improve your process. And the quicker you experiment with those small improvements, the quicker you can discard the changes that don’t actually work.