"Should" is a Moral Imperative and Should Not be Used in Robust Engineering Design
Travis Mallett
Bridging Management Theory & Practice | Harvard ALM Graduate | Voice Behind 'The Management Theory Toolbox' Podcast
Yes, I get the apparent irony in the title, but let me explain...
When was the last time you heard someone on your engineering team say about a design or idea, "yeah, that should work?" Or when was the last time you said something like that yourself? To be sure, I've used this way of speaking regarding engineering problems on many occasions. But there's danger in using this type of informal language in an engineering setting where robust designs are imperative.
"Should" is a Moral Imperative
One of the problems with talking this way is semantic. First, "should" is used to indicate obligation, duty, or correctness. That is, it has moral or ethical implications.
But that amplifier circuit you were discussing in the hallway with a colleague has no moral obligations to anything—it's amoral. Speaking as if it does have intrinsic moral properties and obligations ever so subtly shifts the responsibility of ensuring its proper operation away from the engineer who is ultimately responsible. Even if we never consciously think of it this way, I suspect that using "should" in this context allows us to distance ourselves from the end result. When something goes wrong, the common response, "it should have worked!" is a convenient, yet likely subconscious, way of abdicating ownership.
Of course, I'm not saying that anyone purposely shifts blame to inanimate objects when things go wrong. But, there's plenty of evidence in psychology that our brains do this naturally as a survival tactic.
"There is considerable evidence in the fields of psychology and linguistics that the way we use language can directly impact how we view the world and act in it."
You might argue, "This is just a semantic issue, why get all bothered about an informal use of language? We all know what is actually meant when someone says a circuit design should work!"
I'm tempted to agree and usually couldn't care less about the "proper" way to use a word (words are all made up anyway, and their meanings evolve over time), but in this case I think we should (yes, I'm arguing that we are morally obligated to!) consider the implications of our language and how our language shapes the way people think and behave. After all, there is considerable evidence in the fields of psychology and linguistics that the way we use language can directly impact how we view the world and act in it.
So what might we say instead? I think we usually mean "probably" when we say "should" in an engineering context. Instead of saying "that should work," we might substitute with "that will probably work." Additionally, we might say to ourselves (but maybe not others!) "I should have been more careful in my design," when something goes wrong. This clarification of the semantic issue may help bring the conversation back to what we can do to improve our design processes or our individual work methods and habits.
Although speaking in terms of probability rather than moral imperatives may help lessen the potentially harmful effects of the semantic issue, I argue that's only a small part of the solution.
Our Brains Do Not Intuitively Grasp Probabilities
Suppose we all started to say what we likely consciously mean anyway and systematically replaced "should" with "probably." Maybe there would be some benefits, but I suspect they would be severely limited because "probably" isn't a quantitative term. Saying that a design will "probably" survive thermal aging is not much better, when it comes to nailing down a robust design, than saying it "should."
Consider, for example, the goal of Six Sigma, which is to produce a defect-free product 99.99966% of the time. That's only 3.4 defects per million opportunities.
At our best, humans typically have an error rate of 5 to 10 in every hundred opportunities. The difference between that defect rate (again, this is at our best) and Six Sigma performance is vast. And just as we have difficulty comprehending the vast scales of the universe, from nanometers to billions of lightyears, we are notoriously bad at intuitively estimating and interpreting probabilities—especially on the scales needed for robust engineering.
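To put a number on just how far beyond intuition that scale is: the oft-quoted 3.4 defects per million corresponds to the one-sided tail of a normal distribution at 4.5 sigma (six sigma minus the conventional 1.5-sigma long-term process shift). A few lines of Python, using only the standard library, reproduce the figure:

```python
import math

def normal_tail(z):
    """One-sided tail probability P(Z > z) for a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Six Sigma's famous defect rate: 6 sigma minus the conventional
# 1.5-sigma long-term shift leaves a 4.5-sigma one-sided tail.
dpmo = normal_tail(4.5) * 1_000_000
print(f"{dpmo:.1f} defects per million")  # ≈ 3.4
```

Compare that to a 5% human error rate: `normal_tail(1.645)` is already about 0.05, so our "best" performance sits near 1.6 sigma, roughly four orders of magnitude away from the Six Sigma target.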
Not only are we bad at estimating and understanding probabilities on the scales needed for robust design engineering, but those with significant experience in their craft, who have developed a good sense of the likelihood of various scenarios, may inadvertently mislead less experienced engineers. For example, suppose you've evaluated a particular design many times. After years of experience with that design, you may well have a good intuitive sense of what will "probably" work and what will cause defects. The problem is that even experienced engineers rarely know the actual failure rate associated with a particular design tweak, however strong their gut feel, and sometimes our intuition is wholly inadequate for representing reality. Worse, saying "that will probably work" can lead inexperienced engineers to 1) become over-dependent on consulting your intuition because they haven't been trained to calculate the probabilities themselves, and/or 2) become overconfident in their own ability to declare, "yeah, that will probably work."
"I Don't Know, Let's Calculate It"
The only reliable method that I can think of to increase the probability that we develop an accurate intuition regarding probabilities in our work is to calculate the probabilities on a regular basis.
If your worst-case tolerance analysis shows that there is a little bit of overshoot on a particular parameter (maybe the maximum output voltage is, worst case, higher than the maximum input voltage of a receiving device), don't just say, "yeah that should be fine...that corner case will probably never happen." Instead, spend a bit of time to calculate the probability. Make a few assumptions about the statistical nature of the parameters in the circuit, do a root-sum-square (RSS) analysis, run a Monte Carlo simulation—anything to put a reliable number to that probability. Who knows, you might be surprised. Maybe the resulting failure rate is higher than you thought it would be, and maybe the extra effort will lead to quality improvements in the design.
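As a minimal sketch of what "put a reliable number to that probability" can look like, here is a Monte Carlo simulation of the overshoot scenario above. Every component value is hypothetical and chosen purely for illustration: a driver with a nominal 3.20 V output and 1% (one-sigma) tolerance feeding a receiver whose absolute-maximum input is nominally 3.30 V with a 0.5% tolerance. Worst-case analysis flags an overlap; the simulation estimates how often it actually occurs, assuming the parameters are independent and normally distributed:

```python
import random

def overshoot_probability(n_trials=100_000, seed=42):
    """Monte Carlo estimate of P(driver output > receiver input limit).

    All values are hypothetical, for illustration only:
    driver output  ~ Normal(3.20 V, sigma = 1%  -> 0.032 V)
    receiver limit ~ Normal(3.30 V, sigma = 0.5% -> 0.0165 V)
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    failures = 0
    for _ in range(n_trials):
        v_out = rng.gauss(3.20, 0.032)   # sampled driver output
        v_max = rng.gauss(3.30, 0.0165)  # sampled receiver limit
        if v_out > v_max:                # overshoot: a potential defect
            failures += 1
    return failures / n_trials

print(f"Estimated failure rate: {overshoot_probability():.4f}")
```

Under these assumed tolerances the estimate comes out around a fraction of a percent, thousands of defects per million, which is the kind of concrete number that either justifies the "corner case will never happen" shrug or decisively refutes it. An RSS sanity check gives the same answer analytically: the margin is 0.10 V with a combined sigma of sqrt(0.032² + 0.0165²) ≈ 0.036 V, about 2.8 sigma.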
And even if the only thing that happens from the analysis is that you confirmed the gut instinct you've already developed, that's great! Doing this regularly will keep your intuition trained, tuned, and calibrated by reality.
And what of those less-experienced engineers who consult your opinion on a design? Perhaps try this response: "I'm not totally sure what the defect rate of that design question is. Let's calculate it!" The effort may reap valuable rewards, both in the immediate problem at hand and in future designs the new engineer will work on.
Should "Should" Be Removed From Robust Engineering?
I assert that those of us who choose to work on products and designs for mission-critical applications, especially those with life-and-death implications, are morally obligated to pursue design practices worthy of being called "robust engineering." And if using the word "should" in this context subconsciously decreases our ownership of the design and process, if it is a shortcut around performing valuable probability analyses, and if it decreases the quality and reliability of our designs, then should "should" be removed from our vocabulary in this context?
Yes. Yes, it should.