Building to understand: Lessons from an AI hackathon
Vertex Ventures US
We partner with and invest in early-stage founders building infrastructure and SaaS companies.
Welcome back to Vertex Angles, the weekly newsletter from Vertex Ventures US. We’re a boutique venture capital firm investing in exciting companies across software infrastructure, developer tools, data, security, and vertical SaaS. If you prefer email, subscribe here to get Vertex Angles in your inbox every week.
Programming Note: We’re back after an unscheduled break, as the editor of this newsletter took some time off to bond with his newborn daughter. We’ve now returned to our regular weekly schedule. And, as always, thank you for reading.
This week, Vertex investor Simon Tiu reports back from his recent entry into an AI hackathon, where he got firsthand experience with the current state of coding at the cutting edge of software development.
A few weeks ago, I traded my investor jacket for a developer hoodie and dove headfirst into an AI hackathon. After hearing hundreds of founders pitch their AI ideas, I wanted to build something myself. After all, AI tools are improving and I used to code competitively—how hard could it be? Turns out, very hard. Between debugging stubborn models and wrangling disobedient agents, I emerged with a newfound respect for AI developers and a sharper eye for distinguishing reality from hype.
While AI empowerment dominates tech conversations, there's a stark gap between how people talk about it and what happens when you actually build something. Founders often say, "we'll just fine-tune a model and deploy," as if it's a push-button process. Once I got my hands dirty, I witnessed the dizzying complexity firsthand. Data inconsistencies, unpredictable model behavior, and surprisingly limited tooling turned seemingly simple tasks into hours of trial and error. At the same time, I was also struck by AI's accessibility: tasks that would have taken weeks before were now possible with just a few lines of code.
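To make the “few lines of code” point concrete, here is a hypothetical sketch, not drawn from the actual hackathon project, of the kind of task that once took weeks of bespoke work and now fits in a single hosted-model call. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt are illustrative stand-ins.

```python
# Hypothetical sketch: free-form itinerary generation reduced to one API call.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment;
# the model name and prompt are illustrative, not the author's actual code.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Plan a three-day anniversary trip to Lisbon with one "
                   "restaurant suggestion per day.",
    }],
)

print(response.choices[0].message.content)
```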
Inspired by my upcoming 10-year wedding anniversary, I decided to build an AI-native travel app. Building the prototype reminded me of woodworking, a hobby I discovered during the pandemic (that’s my first real project in the photo above). With woodworking, you can watch countless "how-to" YouTube videos, but you'll never truly appreciate why table saws exist until you've tried to make two pieces of wood fit together seamlessly. Building with AI tooling follows the same pattern. Reading technical papers and listening to founders can only teach you so much—it's not until you code something yourself that you encounter the hidden challenges: the edge cases, the unpredictability, and the workarounds that never make it into VC pitches. When my AI travel advisor finally worked after hours of debugging, I felt the same satisfaction as completing a handcrafted piece of furniture. Like my woodworking experiments, the final product wasn't much to look at, but the process of crafting something novel transformed my perspective on AI development and revealed its incredible possibilities.
One of the biggest surprises? AI doesn't behave like traditional software. In conventional programming, the same input reliably produces the same output. But AI has a mind of its own (so to speak). Ask an AI assistant the same question twice, and you might get two different answers. This wasn't just a theoretical quirk; it became a real challenge during development. My AI Travel Advisor would nail a response one moment and fumble the exact same question later. Debugging required a mindset shift: rather than hunting for single points of failure in code, I had to think in probabilities, spotting patterns and designing around unpredictability. It felt less like coding and more like parenting—sometimes you guide gently, and sometimes you accept that bedtime is a flexible concept.
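As an illustration of that non-determinism, here is a minimal sketch, again assuming the OpenAI Python SDK rather than the author’s actual stack, that sends the identical prompt several times and collects the distinct answers, then shows one common mitigation: lowering the sampling temperature.

```python
# Minimal sketch of LLM non-determinism: the same input can yield different
# outputs because each response is sampled from a probability distribution.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()
prompt = "In one word, suggest a romantic anniversary destination."

def ask(temperature: float) -> set[str]:
    """Send the identical prompt three times and return the distinct replies."""
    replies = set()
    for _ in range(3):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        replies.add(resp.choices[0].message.content.strip())
    return replies

print(ask(temperature=1.0))  # often several different answers
print(ask(temperature=0.0))  # much more repeatable, though still not guaranteed
```

Designing around this usually means constraining the sampling and validating outputs against what the application expects, rather than assuming a single deterministic answer.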
Thankfully, I wasn't coding alone. I had Cursor's agent, an AI-powered development assistant, in my corner. It could autocomplete code, suggest fixes, and explain functions. It often felt like magic. But it wasn't foolproof. Sometimes it generated code that looked perfect but contained subtle errors. I learned to treat it like a smart but unreliable intern—it accelerated development but required constant oversight. This experience crystallized an important investing lesson: AI tools can dramatically enhance productivity, but they cannot yet replace hard-won expertise. The best startups understand when to automate and when to rely on human judgment.
The hackathon left me exhausted but energized, transformed not just by what I had built, but by what I had discovered. This experience reinforced a fundamental truth: the best investments spring from deep understanding. While a compelling pitch can captivate, true investment conviction comes from seeing technology in its raw state, understanding how it works, where it falters, and what it truly takes to bring an idea to life. Now, when evaluating AI startups, I ask myself one essential question: Do I understand this technology deeply enough to see both its promise and its pitfalls? If not, it's time to roll up my sleeves again.
The latest news from the VVUS network:
Vertex portfolio job of the week: Full-stack Software Engineer at Riley
Riley AI is looking for an early engineering hire to work closely with its co-founders to shape the future of the company and make key technical decisions. Riley recently raised a $3 million seed round led by Vertex Ventures US to build its collaborative, AI-powered product insight engine.
Find more jobs at Riley here. For more startup jobs from across the Vertex Ventures US portfolio, check out our jobs portal.