What's the next development model?
I was thinking about this a little when I did my talk for Build a month back. There have been several compute models in my lifetime: mainframe, desktop, web, mobile, cloud…and now AI.
Roughly speaking, each one generated a programming paradigm that solved the dominant problem of its generation. Mainframes started with assembly but did a lot of work on compilers and languages as programs got more complex and harder to manage. Desktops were tightly bound to physical media, so ship cycles mattered and we got waterfall. The web lost that constraint, so we got agile; quality got loose for a while and then started to get automated, so when mobile and cloud showed up we were ready for full CI/CD and more automated dev lifecycles. Monitoring, debugging, refactoring, environments, and code reuse (OSS) all evolved alongside.
AI seems like another shift. Not just AI in the sense of building/training/hosting a model; that of course has its own set of tools and workflows. Here I’m talking more about the application level. What are the patterns there? We’ve seen a little of it already with things like prompt engineering, but that feels transient to me.
I can see some obvious challenges. We are building things with very open-ended behavior, so monitoring probably needs to operate at the same level, something like “semantic telemetry”: is the agent being nice and friendly, is it stuck on a step of a task, and so on. How do you do regression testing as base models change? How do you decide where to run an inference, in real time, for the right cost/quality tradeoff?
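To make the “semantic telemetry” idea a bit more concrete, here’s a minimal sketch in Python. It assumes a placeholder `complete(prompt) -> str` function standing in for whatever model call you actually use; the judge prompt and field names are invented for illustration, not any existing tool.

```python
import json
from typing import Callable

# Placeholder type for "call some LLM with a prompt, get text back".
Complete = Callable[[str], str]

JUDGE_PROMPT = """You are a telemetry judge. Read the agent transcript below and
return JSON with:
  "friendly": 1-5   (tone toward the user)
  "stuck": true or false (is the agent looping or failing to advance the task?)
  "notes": one short sentence
Transcript:
{transcript}
"""

def score_transcript(transcript: str, complete: Complete) -> dict:
    """Turn an open-ended transcript into structured, chartable signals."""
    raw = complete(JUDGE_PROMPT.format(transcript=transcript))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Judges misbehave too; count parse failures as their own metric.
        return {"friendly": None, "stuck": None, "notes": "unparseable judge output"}
```

Scores like these can then flow into ordinary monitoring (histograms of “friendly”, alerts when the “stuck” rate jumps), and the same idea is one way to frame regression testing across base models: pin a suite of transcripts or tasks and diff the judge scores between model versions.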
At a higher level, I wonder what the right strategy for code interop and reuse is. Is it all just natural language? That seems inefficient, but falling all the way back to rigid schema seems to lose some of the point and power of LLMs. What’s the equivalent of a package or gem in the world of LLMs? Do we package memory and expertise for reuse? Is that a pipeline or do we actually merge memories or weights directly into models? How do we test and manage all of this?
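Purely as a thought experiment, one shape a “package” might take is bundling expertise as data rather than code: a system prompt, a few demonstrations, and the eval cases you would rerun whenever the base model changes. Everything in the sketch below (the `SkillPackage` name and its fields) is invented for illustration, not an existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class SkillPackage:
    """Hypothetical LLM-era equivalent of a gem: versioned, testable expertise."""
    name: str
    version: str
    system_prompt: str                                   # the distilled "expertise"
    examples: list[dict] = field(default_factory=list)   # few-shot demonstrations
    evals: list[dict] = field(default_factory=list)      # cases to rerun on model changes

    def render(self, task: str) -> list[dict]:
        """Compose chat messages: expertise + demonstrations + the new task."""
        msgs = [{"role": "system", "content": self.system_prompt}]
        for ex in self.examples:
            msgs.append({"role": "user", "content": ex["input"]})
            msgs.append({"role": "assistant", "content": ex["output"]})
        msgs.append({"role": "user", "content": task})
        return msgs
```

The version number plus the bundled evals are what would make this feel like a real package: you could pin it, publish it, and regression-test it against a new base model, without answering the harder question of whether memories or weights themselves ever get merged.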
There’s also some tantalizing re-entrancy. Models understand code - if we give them full access to all of the dev tools, how much can they manage? Is the next step past serverless “codeless” or something like that, where the LLM is a full partner, and teams abstract up another level and mostly work on intent? Or, maybe our dev tools just get much smarter - maybe debugging becomes “this is doing X, please figure out why that is”, telemetry becomes “tell me what users don’t like” etc.
I don’t know if it’s possible to spot this ahead of time from first principles. I remember how controversial even agile was at first. What mostly happens is that lots of teams try lots of things and the best techniques emerge, like natural selection.
Whatever the answer, it seems pretty clear that the practice of development is about to radically change again. Another step up the ladder!?
Cloud | DevOps | SDLC | MLOps | AIOps
1 yr · The adoption of model-powered apps built with Semantic Kernel or OSS like LangChain is still slow. I wonder if we’re going to have another microservices moment where people switch to the new model overnight, or whether it will take time to propagate.
Product Leader & Builder, Entrepreneur, Startup Advisor, Investor, Creator, Learner. Intensely interested in Music technology and production
1 yr · Great article… I’ve been thinking about this too, specifically about how the current human-in-the-loop models, there to protect against negative stochastic results, may end up being the driver of this next wave. Perhaps with a two-pass model: the first pass doing what it does today, the second checking results and providing better citation and credibility scoring… QA models, basically. Or perhaps your point about packaging expertise can help. Fun times!