Three Critical Blind Spots Developers Overlook in AI's Impact
An influencer posted the above image
Posts like these say what we want to hear, but I don't think this is accurate.
No, we are not safe :)
This image misses a point: the client can ALSO iteratively describe a system.
The first iteration may be wrong, but the 10th iteration would be the system itself!
When I see the code generation ability of OpenAI's o1-preview in particular, I don't think that is so far off.
I see three blind spots when it comes to the impact of AI on developer jobs
The first is, as above, that AI systems - especially systems that can display some reasoning ability - will get better with experience at implementing tasks that need a cognitive component.
"The bulk of LLM progress until now has been language-driven, resulting in chatbots or voice assistants that can interpret, analyze, and generate words. But in addition to getting lots of facts wrong, such LLMs have failed to demonstrate the types of skills required to solve important problems in fields like drug discovery, materials science, coding, or physics. OpenAI's o1 is one of the first signs that LLMs might soon become genuinely helpful companions to human researchers in these fields. It's a big deal because it brings 'chain-of-thought' reasoning in an AI model to a mass audience."
As an analogy, consider language translators in the early 2000s. The first Google Translate was not perfect. Today, online translation is the default - and Google Translate is a traditional machine learning model (i.e. not LLM-driven). LLM-driven reasoning models are likely to improve far more rapidly.
The second blind spot is that developers are attached to their own tools (which map to their expertise). But AI tools are fundamentally different.
Let me share a story - which reflects my age :) Early in my career, I started with relational databases. I loved them and still do. Codd’s Database normalization (and denormalization where needed) is logical and almost mathematical to me.
In my excitement, I told the resident tech expert / guru all about how great SQL was for managing data. He listened to me patiently and then said that he could recreate the whole relational database structure using only sed and awk.
And the genius that he was, he proceeded to do exactly that! To me, it was technically fascinating, but even as a junior developer I wondered what the business rationale for the approach was. We see the same pattern more broadly: developers love the tool / paradigm / framework they currently work with, since it is directly tied to their expertise. That includes object orientation, functional programming and, more recently, Rust.
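The guru's point can be illustrated in miniature: a relational structure is, at bottom, structured text plus a join discipline, so it can be emulated over flat data the way sed and awk would process tab-separated files. A minimal sketch (the table contents and column names below are invented for illustration):

```python
# Toy "tables" as flat rows, the way awk would see tab-separated files.
customers = [  # columns: id, name
    ("1", "Ada"),
    ("2", "Grace"),
]
orders = [  # columns: order_id, customer_id, item
    ("100", "1", "keyboard"),
    ("101", "2", "terminal"),
    ("102", "1", "mouse"),
]

# Equivalent of:
#   SELECT name, item FROM customers
#   JOIN orders ON customers.id = orders.customer_id
index = {cid: name for cid, name in customers}  # build side of a hash join
joined = [(index[cust], item) for _, cust, item in orders if cust in index]

print(joined)  # [('Ada', 'keyboard'), ('Grace', 'terminal'), ('Ada', 'mouse')]
```

Technically it works - which was exactly my junior-developer puzzle: possible is not the same as sensible.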
But AI as a tool for coding is fundamentally different. Automation of coding is not new - we have seen this since the days of CASE tools (which again dates me!) - but I believe AI, especially coupled with some reasoning capabilities, will improve with experience exponentially.
The final aspect is that we have not yet factored in the impact of agents. Autonomous AI agents perform a task at a higher level of abstraction. Currently, we do not see many systems that follow an agentic workflow design pattern - but these are coming, and their impact will be huge in many areas where you can apply conventional AI. This is even more so when you factor in the impact of multi-agent workflows.
Given a task, you can break it down into subtasks, each of which could have an agent. These agents would be chained together to semi-autonomously fulfil your workflow - for example, an agent for the product backlog, an agent for user stories, and so on.
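A rough sketch of what such chaining could look like. The agent names here are hypothetical, and real agents would call an LLM rather than these placeholder functions - the point is only the shape of the pipeline, where each agent's output becomes the next agent's input:

```python
# Each "agent" handles one subtask; chaining them forms the workflow.

def backlog_agent(task: str) -> list[str]:
    # Placeholder: a real agent would prompt an LLM to decompose the task.
    return [f"backlog item: {part.strip()}" for part in task.split(",")]

def user_story_agent(backlog: list[str]) -> list[str]:
    # Placeholder: a real agent would rewrite each item as a proper story.
    return [f"As a user, I want {item}" for item in backlog]

def run_workflow(task: str) -> list[str]:
    # Chain the agents: task -> backlog -> user stories.
    return user_story_agent(backlog_agent(task))

stories = run_workflow("login page, password reset")
print(stories)
```

Swapping a placeholder function for an LLM-backed agent leaves the chain itself unchanged, which is what makes this design pattern composable.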
Notes
Here are some personal comments and notes
If you want to study with us, please see our #universityofoxford course on #AI #generativeAI and #mlops. We are also applying these ideas to the erdos community.
The first step is to identify what a software developer does; if it's syntax writing, then that will be gone. AI needs to directly program the computer; having it generate human language, which is then converted back to machine code, doesn't make a lot of sense. That will lead to the next generation of software developer, who may be closer to an English major than a math major. I think we'll also see a new generation of genius: those who can create and accomplish things that previously required a long ramp-up on syntax, knowledge, etc. Creativity is king when the tedium is removed.
Certainly some very valid arguments as to how LLMs can model or rather learn the "grammar" for any language, especially highly structured languages that are used for object oriented or even functional programming. What I personally haven't tried is using LLMs and versions such as chatGPT or Gemini for writing ANSI C code for implementing highly optimised versions of algorithms for scientific computing using either Floating Point or Fixed Point arithmetic combined with Pointer arithmetic for raw data manipulation (remember that any spatial and/or temporal data is essentially a one dimensional ordered data set even if it is organised as a tree or a hashmap or a doubly linked list). Maybe your GenAI tool of choice has consumed all the code ever written for implementing scientific computing even in Assembly Language for DSP chips, but I am skeptical that it can understand "my" implementation, let alone suggest improvements. The use cases where it will do reasonably well are rather elementary business computing problems, with the real challenge being that the original implementation was a Big Ball of Mud somehow held together by "magic"! For me, the power of a simple SQL query in implementing complex data transformations is magic!