How AI Systems That Understand and Generate Code Can Improve Software Development

AI and machine learning (ML) systems are gaining more traction among large enterprises by the day. And it’s not hard to see why: ML models help organizations scale and improve various tasks, from medical device manufacturing to predicting leukemia relapses.

Other AI models, such as diffusion and generative models, can generate synthetic voices, videos, or images.

The latest emerging AI trend? Deep learning (DL) techniques for natural language processing (NLP) models that understand and even generate text for tasks such as language translation, blog writing, and creating computer code for software and other applications.

AI making inroads in the enterprise, but with room to grow

Recent research shows most companies are relative novices in the AI game: a 2022 Accenture survey found that nearly 65 percent of companies are “AI experimenters” that “lack mature AI strategies” and capabilities.

Only 12 percent of companies were considered “AI achievers” – organizations with differentiated and operationalized AI strategies.

That means there’s still plenty of opportunity for enterprises dabbling in AI to scale and become more efficient in several ways.

Indeed, additional research from McKinsey shows that nearly 30 percent of respondents can attribute at least five percent of their earnings to AI – an increase of five percent from the year prior.

Most companies use AI models for service or operations optimization, product enhancements, contact center automation, and product feature optimization. Hidden within these broad categories, however, are an increasing number of enterprises that now use AI-based language models to help developers write code more efficiently.

Types of AI models used for code development

The application of classic NLP models to help developers generate source code and improve productivity isn’t all that new. Domain-specific language-guided models, probabilistic grammars, n-gram language models, and simple neural program models have been used for years in this regard, but they are limited in flexibility and in the time required to configure them for specific languages or tasks.
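To make the classic approach concrete, here is a minimal sketch of an n-gram (bigram) model over code tokens: it counts which token most often follows each token in a training corpus and suggests that as the completion. The corpus and tokenization here are invented for illustration; real n-gram code models use far larger corpora and smoothing.

```python
from collections import defaultdict

def train_bigram(tokens):
    """Count, for each token, how often each successor token follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest_next(counts, token):
    """Suggest the most frequently observed successor of `token`, or None."""
    followers = counts.get(token)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Hypothetical mini-corpus of tokenized Python code.
corpus = "for i in range ( n ) : print ( i )".split()
model = train_bigram(corpus)
print(suggest_next(model, "range"))  # prints "("
```

The rigidity the article mentions is visible even here: the model only knows token pairs it has literally seen, so it must be retrained and re-tokenized for every new language or task.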

But there’s a new crop of NLP models on the block, based on DL techniques, that is making massive inroads into code development. These include large language models, fine-tuned language models, and edge language models.

Each has strengths and weaknesses, along with specific use cases that make the most sense based on model type.

Large language models (LLM)

Large language models are pretty much what they sound like: language-based models with very large numbers of parameters (values that change as the model learns) that require massive amounts of text-based training data.

These models include:?

  • EleutherAI’s GPT-J (six billion parameters)
  • OpenAI’s GPT-3 (175 billion parameters)
  • Megatron-Turing Natural Language Generation (MT-NLG) model by Microsoft and Nvidia (530 billion parameters)
  • Pathways Language Model (PaLM) by Google (540 billion parameters)

LLMs are very flexible and can be configured to perform several operations, including answering questions, summarizing documents, generating original text, and translating between languages.
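Underneath all of these tasks, LLMs do one thing: repeatedly predict the next token given the tokens so far. The sketch below is a toy illustration of that autoregressive loop using greedy decoding; the probability table is entirely invented to stand in for the neural network a real LLM would use.

```python
# Invented next-token probability table standing in for a real LLM's
# learned distribution over its vocabulary.
NEXT_TOKEN_PROBS = {
    "def": {"add": 0.7, "main": 0.3},
    "add": {"(": 0.9, ":": 0.1},
    "(":   {"a": 0.8, ")": 0.2},
    "a":   {",": 0.6, ")": 0.4},
    ",":   {"b": 1.0},
    "b":   {")": 1.0},
    ")":   {":": 0.9, "<end>": 0.1},
    ":":   {"<end>": 1.0},
}

def generate(prompt_token, max_tokens=10):
    """Greedily append the most probable next token until <end>."""
    out = [prompt_token]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(out[-1])
        if dist is None:
            break
        nxt = max(dist, key=dist.get)
        if nxt == "<end>":
            break
        out.append(nxt)
    return " ".join(out)

print(generate("def"))  # prints "def add ( a , b ) :"
```

Production systems typically sample from the distribution (with a temperature setting) rather than always taking the single most probable token, which is what lets the same model produce varied answers, summaries, or code.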

