The Paint-by-Numbers Anti-Art of Modeling
I recently listened to a Gurobi webcast about generative AI. I can’t remember anything about it, other than Irv Lustig casually dropping a bit of wisdom that was apparently unappreciated by everyone else.
About three-quarters of the way through, Irv mentioned that his team builds a solution validator as part of every optimization project. A solution validator checks whether the results of a Mixed Integer Program truly satisfy the business requirements. It’s essentially a debugging tool, one that only raises a red flag if there’s a bug in translating business rules into mathematical equations.
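To make that concrete, here is the smallest validator I can imagine, for a made-up truck-loading problem. (All names and rules below are invented for illustration; a real validator re-checks every requirement the client signed off on, using none of the optimization model’s code.)

```python
def validate_solution(orders, trucks, assignment):
    """Re-check a solver's loading plan against the business rules
    directly, without reusing any of the optimization model's code.
    Returns a list of violations; an empty list means the plan passes.
    (Toy example: names and rules are invented for illustration.)"""
    failures = []

    # Rule 1: every order must be assigned to exactly one truck.
    for order in orders:
        hits = [t for t, loaded in assignment.items() if order in loaded]
        if len(hits) != 1:
            failures.append(f"order {order} is on {len(hits)} trucks, not 1")

    # Rule 2: no truck may exceed its weight capacity.
    for truck, loaded in assignment.items():
        weight = sum(orders[o] for o in loaded)
        if weight > trucks[truck]:
            failures.append(f"truck {truck} overloaded: {weight} > {trucks[truck]}")

    return failures


orders = {"A": 4, "B": 7, "C": 2}             # order -> weight (made up)
trucks = {"T1": 12, "T2": 5}                  # truck -> capacity (made up)
assignment = {"T1": ["A", "B"], "T2": ["C"]}  # what the solver returned
print(validate_solution(orders, trucks, assignment) or "solution validated")
```

The point is that the validator reads like the requirements document, not like the math model: no decision variables, no big-M constraints, just the rules restated in plain code.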
I’m willing to bet that there isn’t a single IEOR professor who emphasizes the importance of building a solution validator. If I’m right, that’s a shame, since I agree with Irv that validators are critically important. I’ve been building optimization models since the mid-90s, and I’ve always implemented a solution validator. When I take over someone else’s project, a solution validator is typically the first thing I add.
It’s possible the only people building them are me, Irv, and people managed by Irv and pestered by me. If so, that makes us “the happy few” that bring others to hold their models cheap.
I’m not sure what prompted Irv to build solution validators, but I can tell you my origin story. I was a computer science major, not an IEOR major, and my department wouldn’t let me graduate without reading The Mythical Man-Month. I liked that book so much that I read two or three others in the same vein, and one of them said, “When you have to build a complex algorithm, build two different implementations, and let them test each other.” Since I love writing code, I was eager for any excuse to do more of it, and this seemed like the best one I could find.
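That advice stuck with me, and it’s easy to show in miniature. Below, a dynamic-programming knapsack solver plays the role of the “production” algorithm, and a brute-force enumeration plays the second implementation; the two are left to test each other on small random instances. (Both implementations are illustrative sketches, not from any real project.)

```python
import itertools
import random

def knapsack_dp(weights, values, capacity):
    """'Production' implementation: the classic 0/1 knapsack dynamic program."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

def knapsack_brute_force(weights, values, capacity):
    """Second implementation: enumerate every subset (fine for tiny instances)."""
    best = 0
    for picks in itertools.product([0, 1], repeat=len(weights)):
        if sum(w * p for w, p in zip(weights, picks)) <= capacity:
            best = max(best, sum(v * p for v, p in zip(values, picks)))
    return best

# Let the two implementations test each other on random small instances.
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 8)
    weights = [random.randint(1, 10) for _ in range(n)]
    values = [random.randint(1, 10) for _ in range(n)]
    capacity = random.randint(1, 30)
    assert knapsack_dp(weights, values, capacity) == \
           knapsack_brute_force(weights, values, capacity)
print("1,000 random instances: both implementations agree")
```

A solution validator is the same idea applied to a MIP: the model is one implementation of the business rules, and the validator is the second.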
Why is a solution validator so important? Because it double-checks your optimization logic, shaking out math programming bugs before your code is subjected to the harsh glare of client scrutiny.
Irv is right that the first step, when your solution validator cries foul, is determining whether the bug lies in the optimization code or in the validator itself. In my experience, the former is more common (since the validator is usually easier to write), but sometimes it is indeed the latter.
This is the nature of “writing code to test your code”: you also have to debug that additional code. It’s tempting, when you discover the bug was actually in the support code, to feel as though you wasted your time chasing the illusion of a production code bug.
That’s the wrong conclusion. The real time-wasters are the bugs that reach the client. Almost as bad as the erosion of client confidence are the elaborate discussions and far-fetched explanations that occupy the entire team once a bug has slipped through. The time spent ensuring the debugging code agrees with the production code is trivial by comparison.
The solution validator falls under a term I’ve recently coined: the “anti-art” of modeling. The art of modeling is a much sexier idea. There was a time when I’d happily natter away in “art of modeling” discussions. Such a topic lets you wave your arms around and share strong feelings. But after a while, it starts to feel like debating whether apple pie is better than chocolate cake. You’re ready to go full Van Gogh: cut off your own ear and move to the South of France.
You realize what you really want is “paint by numbers.” You want the liberation of a checklist: do these specific things, and your project will be good enough. Will it hang on the walls of the Louvre? No. That’s not your concern. Your concern is hanging it on your office wall without feeling embarrassed, because embarrassment is what you’ll feel if the white-hot spotlight of client scrutiny exposes your math bug to the entire team.
It’s tempting to think you need to tap your inner genius to avoid that outcome—to ponder the art of modeling and find the Will Hunting inside you. But you don’t. You just need to connect with your inner Sean Maguire.
Will: “Did you read all these books?”
Sean: “I did.”
Will: “It must have taken you a long time.”
Sean (with pride): “It did take me a long time.”
That’s exactly how I feel when I look over my commit history—the commits for getting the production code working, and the commits for making the testing code agree. It does take me a long time. And it’s time well spent.