Retire the Five Grand Old Men
Every two years, INFORMS gives out an Impact Prize “to recognize contributions that have had a broad impact on the field.” Rather than reward a specific OR project, the Impact Prize rewards “ideas that are widely used.”
However, there is a risk with an award like this. It takes time for an idea to achieve widespread dissemination, and most technologies, with only a few exceptions, have a natural lifecycle: they innovate, grow, and then pass the torch to newer techniques. Typically, this arc can only be seen clearly in hindsight. The natural tendency is thus to bestow accolades near the high water mark, in just the same way that even the savviest investors often buy stocks at their peak.
In 2004, the INFORMS Impact Prize was given to Robert Bixby, Janet Lowe, and Todd Lowe for the foundation and dissemination of CPLEX. It would be inaccurate to say that this award identified “peak CPLEX”. (CPLEX has a more interesting story than that, which we will touch on briefly here.) But it is true that this award came pretty close to the moment of “peak Bixby-CPLEX.” About four years later, Professor Bixby was no longer associated with the company he co-founded, and was instead leading its strongest competitor, Gurobi.
Personally, I did hear some grumbling that Bixby was simply re-litigating a case he had already won. I was an IBM employee at the time, and a certain measure of hard feelings is human nature. However, right from the beginning, Gurobi made some decisions that broke strongly with CPLEX (and also XPRESS-MP). Gurobi didn’t build their own modeling language. Instead, they promoted something truly innovative: an algebraic modeling API for a general purpose scripting language. Specifically, they built gurobipy, an API for Python.
One can argue that an algebraic API wasn’t completely new at the time; the first such API was probably the one CPLEX built for C++ at least ten years earlier. Moreover, Gurobi doesn’t support a Python API exclusively; they support a range of both general purpose languages and algebraic modeling languages (excluding, of course, IBM’s OPL and XPRESS-Mosel).
The visionary idea at Gurobi’s inception was that a rich API and a beginner-friendly general purpose language could combine to provide the best of both worlds. Gurobi posited that there was no longer a need to accept either the limited domain of a modeling language or the steep learning curve and awkward mathematical rendering of a programming language. With Python and gurobipy, both of these perils are avoided, and a clear bridge is laid between math programming and the wider world of general purpose software development.
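To make this concrete, here is a minimal sketch of the style gurobipy enables. (The toy production model, its variable names, and its numbers are all invented for illustration; the API calls are standard gurobipy.)

```python
import gurobipy as gp
from gurobipy import GRB

# A toy production-planning model, written as readable algebra.
m = gp.Model("production")
x = m.addVar(name="widgets")  # non-negative by default
y = m.addVar(name="gadgets")

# The constraints read like the math they encode.
m.addConstr(2 * x + y <= 100, name="machine_hours")
m.addConstr(x + 3 * y <= 90, name="labor_hours")

# Maximize profit.
m.setObjective(3 * x + 5 * y, GRB.MAXIMIZE)
m.optimize()

if m.status == GRB.OPTIMAL:
    print(f"widgets = {x.X}, gadgets = {y.X}, profit = {m.ObjVal}")
```

The constraints look like the algebra on the whiteboard, yet the surrounding program is ordinary Python: the data could just as easily arrive from a csv file, a database, or a web service, with no bridge code between the model and the rest of the application.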
At the time, nobody gave Prof. Bixby (or Ed Rothberg, or Zonghao Gu) an award for this insight. To the contrary, the very next INFORMS Impact Prize seemed to endorse the completely opposite vision. In 2012, INFORMS recognized the founders of the five most prominent algebraic modeling languages: AIMMS, AMPL, GAMS, LINDO/LINGO, and MPL.
We love these five grand old men. But they should retire now.
To be clear, the last thing I want to do is question the wisdom of that decision. To the contrary: anything that increases the accessibility of hands-on math programming is a very good thing. The eight men honored in 2012 made a contribution to this goal that is hard to overestimate. They earned their just recognition, and then some.
That said, from a technological standpoint, an “overall impact award” can sometimes serve as a trailing indicator. I believe this is one of those times. Gurobi blazed the trail with an algebraic Python API, and CPLEX has now followed suit by releasing docplex. Gurobi has long extolled the value of Python, and now Jean-François Puget (JFP), on his blog and his Twitter feed, regularly evangelizes from the same prayer book. More and more, it seems clear that both of these organizations now agree that the arc of math programming history bends toward easy to learn, general purpose, open source languages, and away from proprietary, domain specific modeling languages.
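For comparison, here is the same toy model rendered with docplex (again, the model and its numbers are invented for illustration). The point is how little daylight there is between the two APIs, or between either API and the underlying algebra.

```python
from docplex.mp.model import Model

# The same toy production model, written with docplex.
m = Model(name="production")
x = m.continuous_var(name="widgets")  # non-negative by default
y = m.continuous_var(name="gadgets")

m.add_constraint(2 * x + y <= 100, ctname="machine_hours")
m.add_constraint(x + 3 * y <= 90, ctname="labor_hours")
m.maximize(3 * x + 5 * y)

sol = m.solve()  # returns None if no solution was found
if sol:
    print(f"widgets = {sol.get_value(x)}, gadgets = {sol.get_value(y)}, "
          f"profit = {sol.objective_value}")
```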
I think the OR community as a whole should join them, and retire the five grand old men. To be clear, I’m not talking about retiring any flesh-and-blood men. (My personal views on retirement are consistent with what Elmore Leonard wrote when discussing Pope Benedict’s retirement: “Doesn’t Benedict still love being the pope?”) I’m referring to the five modeling languages referenced above (and Xpress-Mosel as well). These languages are just tools. They have no feelings. We can use new tools, especially when they are much better.
A new generation of OR practitioners is coming of age. They’re not afraid of Python, because they’ve already learned R. Once they realize that their mathematical brilliance is more readily deployed with the former, and that the learning curve of one prepares you for the other, they nearly always rise to the challenge and learn Python.
The five old men are even less capable of deploying mathematical brilliance than R. Let’s not encourage this generation to hide its light under a bushel. Algebra doesn’t even need one dedicated programming language, much less five. It’s high time for the OR community to join the Machine Learning community, eschew proprietary cul-de-sacs, and embrace Python.