A growing number of studies and programmers are concluding that generative AI leads to problems in coding.

One study from Stanford found that programmers who had access to AI assistants "wrote significantly less secure code than those without access to an assistant." Another study, from researchers at Bilkent University in 2023, found that 30.5% of code generated by AI assistants was incorrect and 23.2% was partially incorrect, although these percentages varied among the different code generators. (Similar results were found by Purdue researchers, which I posted about last week.)

Research from code-reviewing tool GitClear found that the rise of AI #coding assistants in 2022 and 2023 correlated with a rise in code that had to be fixed within two weeks of being authored, and that if the trend continues in 2024, "more than 7% of all code changes will be reverted within two weeks." That is a lot of extra work to find and fix the code.

When ZDNet put general-purpose chatbots through a series of coding tests (like "write a WordPress plugin"), Microsoft Copilot, Meta AI, and Meta Code Llama failed all of them. (Google Gemini Advanced and ChatGPT passed.)

Programmers sense there's trouble. Alastair Paterson, CEO of Harmonic Security, told Axios that many of these models have skills equivalent to a junior developer's, but they can also make different kinds of mistakes: the large language model approach is fantastic at some tasks and less good at other things you'd think it would be really, really good at. "They make strange logical errors in numbers and loops," and they are incapable of making complex architectural decisions.

Right now, bad AI-generated code that's not caught by a human usually just makes for messy code libraries or minor problems rather than disasters.
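To make the "strange logical errors in numbers and loops" concrete, here is a minimal, hypothetical sketch of the kind of off-by-one mistake these reports describe; the function names and data are my own illustration, not taken from any of the cited studies:

```python
# Hypothetical example of an off-by-one loop error, the kind of
# subtle bug that passes a casual glance but silently returns
# a wrong number.

def sum_first_n_buggy(values, n):
    # Bug: range(1, n) skips index 0 and stops at n - 1,
    # so the first element is never counted.
    total = 0
    for i in range(1, n):
        total += values[i]
    return total

def sum_first_n(values, n):
    # Correct version: range(n) covers indices 0 through n - 1.
    total = 0
    for i in range(n):
        total += values[i]
    return total

data = [10, 20, 30, 40]
print(sum_first_n_buggy(data, 3))  # 50, silently wrong
print(sum_first_n(data, 3))        # 60, as intended
```

The buggy version raises no error and looks plausible in review, which is exactly why this class of mistake adds to the downstream fix-and-revert work the GitClear data points to.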
But Lee Atchison, former Amazon technical program manager and author of the O'Reilly book "Architecting for Scale," wrote in March that "code complexity and the support costs associated with complex code have increased in recent years in large part due to the proliferation of AI-generated code use." In other words, generative AI tools might save time and money up front in code creation and then eat up those savings at the other end.

My take: I think a lot of people are saying this. What about those autonomous agents that Sam Altman and others are optimistic about? "Looking ahead a few years is probably where I would worry," Paterson said. "If you've got autonomous actions being taken by some of these agents that are under full AI control with limited human input, I think that is where things start to get more interesting."

The other side: "I think we're some way off from some sort of #AI apocalypse," Paterson says. "These tools ultimately are still just tools, and we've got a pretty good understanding of their limitations."

#technology #innovation #startups #artificialintelligence https://lnkd.in/gaxJhmDC