Word Calculators and Legal Calculations: Reflections from the General Counsel’s Chair
Markus Hartmann
Chief Legal Development Officer @ DragonGC | JD MBA Colonel, USMCR (Ret.)
When I became a lawyer, I did so under the noble pretense that I was joining the intellectual cavalry—armed with precedent, rhetoric, and the occasional Latin phrase to fend off chaos. Numbers, algorithms, and anything resembling math were supposed to remain comfortably out of view, quarantined in spreadsheets managed by Finance. Yet, here we are, confronted by large language models (LLMs)—the "word calculators" that have infiltrated our domain, challenging our assumptions about what it means to practice law.
For those of us who spent our careers mastering language rather than equations, LLMs present an ironic twist: the biggest technological advancement in legal practice isn’t about processing numerical data—it’s about manipulating the words we thought set us apart.
Calculators recognize numerical patterns—an innocuous party trick that keeps accountants awake and lawyers bored. But LLMs take pattern recognition to a new level, scanning oceans of text to detect linguistic rhythms, nuances, and legal phrasing. They understand that "subject to" signals a clause ripe for negotiation and that "notwithstanding the foregoing" is a prelude to someone pulling a fast one.
From the General Counsel’s chair, this is both thrilling and humbling. The junior lawyer who once spent an entire weekend parsing through contracts to find those subtle patterns? That’s now an hour’s work for an LLM, and it won’t ask for a coffee break or an office with a view.
This creates a strategic opportunity: We can deploy LLMs for reconnaissance while staying focused on the high-level strategy. The key is ensuring that the machine doesn’t mistake a routine covenant for a material breach. And if it does, well, that’s why the judgment of an experienced GC still matters.
Calculators handle complex equations without breaking a sweat—at least metaphorically. LLMs are similarly unfazed by labyrinthine contracts and compliance manuals that would make a new associate cry in the copy room.
For in-house counsel, complexity is our daily bread. A regulatory change, an acquisition agreement, or a shareholder dispute lands on our desks like a tangled ball of yarn. LLMs, however, can untangle the mess faster than we can print out the executive summary. Need the key takeaways from a 200-page environmental audit? An LLM will churn out a summary before you’ve finished rolling up your sleeves.
But here’s the catch: while LLMs can easily slice through complexity, they lack the nuance to distinguish between the merely inconvenient and the legally catastrophic. The last thing I need is for a machine to skim over a clause that triggers a nine-figure indemnity obligation because it didn’t appreciate the subtext.
Calculators predict outcomes based on formulas—enter the variables, and voilà. LLMs similarly predict the next word, the next sentence, and even the next argument in a negotiation. You start drafting an email with "Given our recent conversation..." and the LLM knows you're about to reject a vendor's proposal in the politest way possible.
As GC, this capability is a double-edged sword. On the one hand, the predictive text helps accelerate the grunt work of drafting mundane communications: board updates, policy summaries, and vendor disputes. On the other hand, it risks reinforcing clichés and predictable language. I can’t afford to sound like I’m on autopilot when explaining to the CEO why a regulatory investigation is “manageable” but still “requires significant attention.” The machine may finish my sentences, but only I can tailor them to the moment’s gravity.
Calculators prevent errors in arithmetic. LLMs, likewise, act as our linguistic safety net, catching typos, dangling modifiers, and ill-advised legal jargon. But every safety net can become a tripwire.
LLMs may spot a missing comma in a list of indemnified parties, but they don’t "feel" the weight of omitting a key affiliate. They’ll flag a spelling error but won’t flag the political landmine of accidentally sending a "friendly reminder" email to the CFO when a "gentle nudge" was more appropriate. If there’s one thing in-house counsel learns early, it’s that some corrections require more than grammar—they require diplomacy.
Just as calculators freed mathematicians from basic arithmetic, LLMs free lawyers from the tedium of drafting and redlining the same clauses repeatedly. Need a confidentiality clause 95% identical to the last 12 you drafted, but with one custom wrinkle? The LLM has you covered.
This is a game changer for in-house teams where resources are stretched thin and timelines are compressed. Instead of reinventing the wheel, we can hand off the routine tasks to our linguistic co-pilot and focus on strategy, risk assessment, and stakeholder alignment. However, I’ve learned that automation doesn’t absolve us of vigilance. An LLM might optimize a non-compete provision beautifully, but it won’t know that the CEO just told the board that non-competes are "off the table for cultural reasons."
Calculators don’t understand context; they just process numbers. LLMs don’t truly understand meaning; they mimic it. They may write a flawless memo on the latest SEC regulation, but they don’t know why the regulation keeps the General Counsel up at night. They can predict phrasing, but they can’t predict boardroom dynamics or the look on the CFO’s face when she hears the word "contingent liability."
As in-house counsel, context is everything. A two-page contract amendment can carry more strategic significance than a hundred-page master agreement. The machine doesn’t see that. We do. That’s why the value of an LLM isn’t in replacing human judgment—it’s in amplifying it.
Much like calculators didn’t make accountants obsolete, LLMs won’t make lawyers irrelevant. They’re tools—powerful, efficient, and sometimes unsettlingly good. But they’re only as useful as the person wielding them. Our role as General Counsel isn’t diminished by these "word calculators"; it’s elevated.
We now have the opportunity to reallocate our time—less redlining, more board strategy, fewer typo hunts, more risk mitigation. But we can’t abdicate responsibility to the machine. After all, when the audit committee asks hard questions or the CEO calls for a late-night consultation, it won’t be the LLM that takes the heat. It’ll be us.
So, while LLMs may draft the first version of our work, it’ll still be our name in the email signature, our voice in the boardroom, and our judgment that shapes the final decision. And if that means we’ve got to make peace with our new "word calculator" co-pilot, so be it. Because at the end of the day, legal leadership—like good piloting—isn’t about the tools. It’s about the pilot.
Law Clerk @ Smith Clinesmith (Pending Bar Admissions for NY and PA) | Legal AI Evangelist | Company Builder | Extreme Athlete
1mo · This is awesome. I love this analogy. LLMs, like the introduction of email and the internet, are simply new tools that are going to change the game.
Partner, Client Services @ Tenor Legal
1mo · Perfectly said, Markus. The tools empower lawyers to be strategic business partners. And LLMs have been game changers since the advent of computer-assisted legal research (way back when).
Deputy GC @ EEX | Co-Founder of TalkingTree.app | ex-Meta
2mo · I love this, especially when you said that LLMs are "a game changer for in-house teams where resources are stretched thin, and timelines are compressed." It's a shame that some have picked up a narrative about LLMs replacing lawyers, when they're supposed to be our companions. With LLMs, legal education and operations become more accessible, and the much-needed reprieve from grunt work becomes real.
Partner @ Latitude | Providing top-tier flexible legal talent to legal depts & law firms nationwide | Bradley & Venable alum
2mo · Love this, Markus. As someone smarter than me said, AI won't replace lawyers. But it might replace lawyers who shun AI.