The Paint-by-Numbers Anti-Art of Modeling

I recently listened to a Gurobi webcast about generative AI. I can’t remember much about it, other than Irv Lustig casually dropping a bit of wisdom that was apparently unappreciated by everyone else.

About three-quarters of the way through, Irv mentioned that his team always builds a solution validator as part of his optimization projects. A solution validator checks whether the results of a Mixed Integer Program truly satisfy the business requirements. It’s essentially a debugging tool, one that only raises a red flag if there’s a bug in translating business rules into mathematical equations.
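To make that concrete, here is a minimal sketch of what I mean by a solution validator, written against a made-up job-to-machine assignment problem (the data names and rules here are hypothetical, not from any real project). The point is that the checks read like the business requirements, not like the constraints in the MIP.

```python
# A minimal, solver-independent solution validator sketch.
# The assignment problem here is hypothetical; the idea is that each check
# restates a business rule in plain code, independent of the MIP formulation.

def validate_solution(jobs, machines, capacity, load, assignment):
    """Return a list of human-readable rule violations (empty list == valid).

    jobs       : iterable of job names
    machines   : iterable of machine names
    capacity   : dict machine -> maximum total load it can carry
    load       : dict job -> load it places on whichever machine gets it
    assignment : dict job -> machine chosen by the optimization model
    """
    failures = []

    # Business rule 1: every job is assigned to exactly one known machine.
    for j in jobs:
        if j not in assignment:
            failures.append(f"job {j} was never assigned")
        elif assignment[j] not in machines:
            failures.append(f"job {j} assigned to unknown machine {assignment[j]}")

    # Business rule 2: no machine exceeds its capacity.
    for m in machines:
        total = sum(load[j] for j in jobs if assignment.get(j) == m)
        if total > capacity[m]:
            failures.append(f"machine {m} overloaded: {total} > {capacity[m]}")

    return failures
```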

I’m willing to bet that there isn’t a single IEOR professor who emphasizes the importance of building a solution validator. If that bet is right, it’s a shame, because I agree with Irv that validators are critically important. I’ve been building optimization models since the mid-90s, and I’ve always implemented a solution validator. When I take over someone else’s project, the solution validator is typically the first thing I add.

It’s possible the only people building them are me, Irv, and people managed by Irv and pestered by me. If so, that makes us “the happy few” that bring others to hold their models cheap.

I’m not sure what prompted Irv to build solution validators, but I can tell you my origin story. I was a computer science major, not an IEOR major, and my department wouldn’t let me graduate without reading The Mythical Man-Month. I liked that book so much that I read two or three others in the same vein, and one of them said, “When you have to build a complex algorithm, build two different implementations, and let them test each other.” Since I love writing code, I was eager for any excuse to do more of it, and this seemed like the best one I could find.

Why is a solution validator so important? Because it double-checks your optimization logic, shaking out math programming bugs before your code is subjected to the harsh glare of client scrutiny.

Irv is right that the first step, when your solution validator cries foul, is determining whether the bug lies in the optimization code or in the validator itself. In my experience, the former is more common (since the validator is usually easier to write), but sometimes it is indeed the latter.
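One cheap way to settle that question is to feed the validator a couple of tiny, hand-built solutions whose verdicts you already know. If it judges those correctly, suspect the optimization code; if not, fix the validator first. A sketch, reusing the hypothetical validate_solution from the example above:

```python
# Hand-built micro-cases for the hypothetical validate_solution sketch above.
# If the validator gets these right, the bug is probably in the MIP code;
# if it gets them wrong, the validator itself needs fixing first.

jobs = ["j1", "j2"]
machines = ["m1"]
capacity = {"m1": 10}
load = {"j1": 4, "j2": 5}

good = {"j1": "m1", "j2": "m1"}   # total load 9 <= 10, every job assigned
bad = {"j1": "m1"}                # j2 was never assigned

assert validate_solution(jobs, machines, capacity, load, good) == []
assert validate_solution(jobs, machines, capacity, load, bad) != []
```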

This is the nature of “writing code to test your code”: you also have to debug that additional code. It’s tempting, when you discover the bug was actually in the support code, to feel as though you wasted your time chasing the illusion of a production code bug.

That’s the wrong conclusion. The real time-wasters are the bugs that reach the client. Almost as bad as the erosion of client confidence are the elaborate discussions and far-fetched explanations that occupy the entire team once a bug has slipped through. The time spent ensuring the debugging code agrees with the production code is trivial by comparison.

The solution validator falls under a term I’ve recently coined: the “anti-art” of modeling. The art of modeling is a much sexier idea. There was a time when I’d happily natter away in “art of modeling” discussions. Such a topic lets you wave your arms around and share strong feelings. But after a while, it starts to feel like debating whether apple pie is better than chocolate cake. Like Robin Williams in Good Will Hunting, you’re ready to cut your own ear off and move to the South of France.

You realize what you really want is “paint by numbers.” You want the liberation of a checklist: do these specific things, and your project will be good enough. Will it hang on the walls of the Louvre? No. That’s not your concern. Your concern is hanging it on your office wall without feeling embarrassed, because embarrassment is what you’ll feel if the white-hot spotlight of client scrutiny exposes your math bug to the entire team.

It’s tempting to think you need to tap your inner genius to avoid that outcome—to ponder the art of modeling and find the Will Hunting inside you. But you don’t. You just need to connect with your inner Sean Maguire.

Will: “Did you read all these books?”

Sean: “I did.”

Will: “It must have taken you a long time.”

Sean (with pride): “It did take me a long time.”

That’s exactly how I feel when I look over my commit history—the commits for getting the production code working, and the commits for making the testing code agree. It does take me a long time. And it’s time well spent.

Aster Santana

Founder/CEO at Mip Wise - Decision Scientist - Building intelligent solutions to drive optimized results.

2 days ago

Peter Cacioppi, I know another OR folk that loves building validators: Rabie NAIT-ABDALLAH, Ph.D.

Michael Watson

Teaching and writing about all things AI, operations, and my startup experience

1 week ago

Peter Cacioppi, this is a good article. I'd love to explore this topic with you more. I have a small project going on at NU that is close to this idea. That one was inspired by Rabie NAIT-ABDALLAH, Ph.D.

This is essentially the testing functionality you are talking about, right? If we put enough rigor into the test scripts to reflect the requirements, that should suffice. We follow a rigorous sign-off process to ensure the requirements are satisfied and demonstrated to the end users.
