Fail Fast First
In the past few years, I’ve learned how to be more efficient with my time by adopting a fail fast first strategy for new ideas. The purpose of this strategy is not to fail. The purpose is to fail quickly if you are going to fail eventually. For example, if you are going to race a car, you’d rather find out that the engine will blow before you start the race than in the middle of it.
I’ve applied this strategy to a few projects, or even phases of projects, and it has been effective in proving out ideas by testing their biggest potential weakness first. I will share here, in a very general and non-technical way, how this strategy saved us a lot of time on a feature that failed and whose development was paused.
I was asked to help out on a feature that had been experimented with; an initial feasibility study had already been finished. The team had moved on to a more generalized feasibility study with the aim of implementation. Additionally, this feature had a high level of visibility, and some people were very interested in its success. In fact, I was put on this project during my transition from the Watch to Face ID, just as Face ID was entering its mission critical phase. There was a squeeze on resources, so I had an opportunity to help out.
I worked through a list of potential aggressors with my boss and my EPM, and for each category of aggressor, I developed a data collection guide covering feasibility, initial data, and large scale data collection. Despite delusions of grandeur about my ability to improve the algorithm, one aggressor came out on top; let’s call it aggressor #1.
I started planning with my EPM, and we went about designing a user study, finding someone to write the data collection app and another group to run the user study, sourcing users, pulling resources, and securing a location. I also did failure analysis on whatever data we had available. I root caused one problem to an environmental factor, which had to be controlled in my design of experiments (DOE).
I knew the experiment would be fairly costly, and we only had a short time window, so I worked hard to identify and control every variable. This meant visiting the site multiple times, rehearsing, having other people rehearse, visualizing what variables I might see in the data after the fact, and running multiple dry runs. This DOE process, along with other coordination factors, took two months to complete.
Finally, the time had come. The experiment started, and even though we had a few hiccups, we collected more data than planned. It was very well controlled, and whatever the outcome, we were confident it would be conclusive. The only bummer was that the next week, I fully transitioned to Face ID. I trained another engineer to take over, and it took a few months to finish the analysis because the dataset was so large.
Good DOE is never a failure!
The results from the experiment caused the feature to be put on hold indefinitely. It had failed to work under certain conditions that would have made the feature more trouble than it was worth. The experiment paid off, though: it failed fast first. We ended feature work at the earliest possible moment of knowing it wasn’t feasible. As a result, we didn’t waste time working on something that would have failed in the end. I don’t see this as a failure, but rather as the success of a well designed experiment.