Finding The Right GenAI Pitch To Hit
Ever since working at Netscape from 1995 to 1999 and feeling what it's like to sell new, disruptive technology that only you have and that everyone needs, I've been a disruption peddler. I'd find one of Silicon Valley's latest breakthrough-technology startups, join while it was still small to run sales (or part of it), and then sell using a math proof (or something akin to it) showing that our tech was incontrovertibly better than anything that came before it.
Resistance was futile. Commissions were good.
Sometimes I picked well, but other times, frankly, I let my own ignorance, greed and impatience for income lead me into a dead end chasing a mirage. Overall, though, my batting average is .454 (5 for 11 positive-ROI exits) as an employee and .571 (4 for 7) as an angel investor. If there's a Startup Hall of Fame, I won't be in it. But I take pride in picking my pitches well and having an above-average slugging percentage.
How you execute the selling motion is important, no doubt, but who and what you sell--and in what game--is equally so.
To wit, several months ago I decided to leave my job running sales at a respectable SaaS company after barely 2 years. The team was strong, it was going well, and we were fighting for and closing some big deals. But if you're in enterprise sales and focused on commissions and equity exits, selling anything other than GenAI/LLM technology, IMHO, is like selling yesterday's news.
I've been educating myself on LLMs and GenAI for six months now, and am attending GenAI Summit SF this coming week. I'll have a few at-bats and will try to wait for a good pitch to hit.
What I'm finding is that while GenAI and LLMs are without question the new technology stack (shoutout to @andrew_ships & @felipe), the new stack is far from perfect. Case in point: 217-year-old scientific publisher John Wiley & Sons recently had to retract 11,300 published papers after it found that they were rife with AI fraud. According to CS researcher Guillaume Cabanac, it appears fraudsters plagiarized existing academic research and used automatic text generators to replace key scientific terms with synonyms. That's how "breast cancer" became "bosom peril", "fluid dynamics" became "gooey stream" and "artificial intelligence" became "counterfeit consciousness" (that last one's not far off, IMHO).
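To make the mechanism concrete, here's a minimal, hypothetical Python sketch of that kind of context-blind synonym substitution. The term pairs are the ones reported above; the dictionary, function name and example sentence are purely illustrative, not anything from the actual fraud tooling:

```python
import re

# Toy synonym table (illustrative only). A blind swapper like this has no
# notion of fixed technical terms, so established phrases get mangled.
SYNONYMS = {
    "breast": "bosom",
    "cancer": "peril",
    "fluid": "gooey",
    "dynamics": "stream",
    "artificial": "counterfeit",
    "intelligence": "consciousness",
}

def torture(text: str) -> str:
    """Blindly replace each known word with its 'synonym', ignoring context."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return SYNONYMS.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", swap, text)

if __name__ == "__main__":
    sentence = ("Artificial intelligence now assists research on "
                "breast cancer and fluid dynamics.")
    print(torture(sentence))
    # -> counterfeit consciousness now assists research on
    #    bosom peril and gooey stream.
```

The point of the sketch: nothing in the pipeline knows that "breast cancer" is a fixed term of art, which is exactly why human review has to sit on top of automated rewriting.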
Yeah, it's funny. But it's also frightening to see GenAI-wielding hucksters undermining public faith in the bedrock of peer-reviewed science. Most importantly, it shows that LLMs and the Python utilities layered on top of them will only be ready for enterprise use once the right level of human training and oversight is in place to prevent Monty Python-esque outcomes. Could you imagine something similar playing out inside a company, across a partner ecosystem, or in customer-facing marketing or support? Enterprise GenAI startups will have to strike a balance between unleashing the full power of LLMs and maintaining a firm human grip on outcomes.