Unlocking Generative AI To Solve Software Development Problems
FireGroup Technology
Join us in empowering millions of global e-Commerce merchants. #FireGroupCareers #GlobalStageToExcel #CultureForGrowth
I. Introduction
Unlocking the potential of Generative AI (Gen AI) requires a clear understanding of its capabilities and limitations. While this article offers a comprehensive overview, we'll delve deeper into the core concepts to dispel any confusion.
What is Gen AI?
Generative AI is a subset of artificial intelligence that involves training algorithms to produce new data with similar characteristics to the data they were trained on. It actively generates new content that aligns with the learned patterns. This content can encompass text, images, videos, audio, or other formats. The generated output is guided by user input, providing prompts or constraints for the AI to follow.
Before generative AI and Large Language Models (LLMs) became popular, there were also many machine learning models that served specific purposes. For example:
These models are trained for specific tasks based on human-labeled datasets. Their strengths lie in accuracy and precision within narrow domains, but flexibility and broad generalizability remain limitations.
In contrast, generative AI, often embodied by large language models like LLMs, takes a different approach. Instead of tackling individual tasks, it aims to learn underlying patterns and relationships within massive, unlabeled datasets. This holistic understanding enables flexible application to diverse tasks, including text generation, translation, and even creative writing.
In other words, its ability is singular: to fill in the blanks by matching the input with the patterns it has learned. But it is precisely this simplicity that brings about great application potential because the "blank" can be anything: class, summary, sentiment, code.
Bonus: Today's LLMs are built on the Transformer model, introduced by Google in 2017. Transformer is based on the attention mechanism to process input data, allowing the model to focus on important parts of the input data, reducing the resources needed for training and inference.
Context: The Master Key
Imagine you ask to finish this sentence, "He is in..."; without further details, the options are endless. But add "he is in ..., because today is the first day after summer vacation," and suddenly, the picture becomes clear: "school."
Context isn't just about words; it encompasses your understanding of language, the relationships between concepts, and the broader situation. The more nuanced your context, the more accurate and relevant the output from Gen AI becomes.
However, providing more context isn't without its costs. Increasing the number of parameters in an LLM to handle diverse contexts requires more computational power. But the trade-off is worth it: a deeper understanding of context leads to less confusion and more accurate, meaningful results.
The Double-Edged Sword: Hallucination
One of the biggest problems with AI is hallucination. AI can generate content that seems very convincing, very confident, and certain but is actually wrong. This phenomenon is called hallucination.
The cause of this hallucination is the training data containing incorrect information, not being diverse enough, containing bias, and the design of the model not being good.
In the early days of Gen AI, hallucination results occurred more frequently and later improved significantly.
In the use cases below, you will notice that there are many cases where AI generates incorrect results, code that does not run, or worse, wrong logic but is still very confident. So it is important to always be suspicious when using Gen AI.
On the user side, there are a few ways to reduce hallucination, such as ReACT and RAG. Understanding this will help us be proactive and less disappointed when using AI. In the examples below, the results will be much better when we provide more context and better prompts.
II. What challenges do we face in programming?
Programming thrives on challenges. Solutions like languages, frameworks, and tools emerge, but these often become future hurdles, necessitating ongoing learning and adaptation.
Two of the most frustrating things about coding are that it can be either too boring or too "hard."
We often have to write the same code over and over again, such as CRUD, lint config, and error checking. This is called boilerplate code, and it can be tedious and difficult to maintain.
By "hard," I don't mean challenging; I mean unnecessarily difficult, such as when trying to understand requirements or communicate with others. It can take a lot of time to understand the nuances of language in refinement meetings or to decipher poorly written documentation. Understanding the requirements is crucial; how can we code effectively without it?
All of these roadblocks can have a negative impact on productivity, code quality, and maintainability. They can also make programming less enjoyable and less rewarding.
Perhaps the programming experience will never be perfect. There is no one language that is both sweet and powerful. However, with the advancement of AI, there is hope for real improvements in this area.
How can LLMs help us solve these challenges?
As mentioned above, LLMs can now understand context and generate anything that is left blank. So what can it help us with in programming? In this article, we will take a look at some typical usage scenarios, such as:
III. AI-powered story writing and analysis
Problem
Good user stories are essential for the success of a sprint, but there are two common problems:
There are many reasons for these problems, such as lack of project knowledge, poor communication skills, or time constraints. However, even if the problem is due to other factors, if you don't understand the story, it can lead to rework or difficulty working with other developers.
Solution
As shown in this video, AI can make it easier for the entire team to write and understand stories.
Atlassian's product is only available for a limited number of users at this time. However, imagine if it had additional features, such as the ability to integrate with the code base on GitLab, automatically break down tasks for developers, ask follow-up questions, brainstorm with the product owner, suggest potential problems, or even write test cases.
These additional features would make AI-assisted story writing and analysis even more powerful and useful. They would help to improve communication and collaboration between developers, product owners, and other stakeholders.
Demo
The best way to use AI-assisted story writing and analysis at the moment is to use Github Copilot Chat. It currently supports JetBrain's IDE and VS Code.
Final: idea.md
From this point, we will ask AI to create requirements based on this idea
Now break it down in to tasks:
IV. AI for System Architecture Design
Problem
System architecture design is an important and complex process that requires deep professional knowledge and sharp decision-making skills. The system architecture must meet the requirements of performance, security, flexibility, and scalability. At the same time, the system architecture must be suitable for the business's characteristics, be able to adapt to changes in requirements, and be able to integrate tightly with legacy systems.
Solution
The application of AI in system architecture design can be very helpful, like searching for information on Google or reading books, but more direct and time-saving. Here are some ways that AI can help:
Demo Draw diagram
As a visual learner, drawing and writing are essential for me to solidify my thoughts and output the results of design processes. But manually crafting all aspects of diagrams isn't necessary! AI can jumpstart simple diagrams, letting you focus on the juicy bits. In the example below, I had AI sketch out the collector-database interaction, then the database-analysis interaction, step-by-step.
Take this example: I used AI to build a diagram step-by-step, starting with collector-database interaction, then adding the database-analysis flow. The more context you provide, the closer the output matches your vision. Shoutout to @hey_thien and their awesome usediagram.com tool!
Now, the less exciting part: translating designs into code. We all yearn for ways to make this process less tedious, and AI might just be the answer. Ready to explore?
V. Code Generation
When coding, we use a strange language that humans don't use, a kind of mysterious incantation, which is certainly not comfortable, from ancient times:
Then:
Now:
Another challenge of coding is boilerplate code. Boilerplate code is code that is repeated in many places with little or no change. It can be found in all programming languages, but it is particularly common in languages that are considered verbose, such as Java and C++.
Boilerplate code can be a major source of frustration for programmers. It can take up a lot of time to write, and it can be difficult to maintain. It can also make code less readable and less maintainable.
There are a number of things that programmers can do to reduce the amount of boilerplate code in their code. One option is to use a code generator. Code generators can automatically generate boilerplate code, which can save programmers a lot of time and effort.
Another option is to use a framework or library. Frameworks and libraries provide pre-written code that can be used to perform common tasks. This can help to reduce the amount of boilerplate code that programmers need to write.
However, these things still seem to be not quite satisfying, as they still leave repetitive lines of incantations that require some thought to understand.
With AI, there is a slight difference. You no longer have to save a bunch of long snippets. I remember the first time I used Copilot. I wrote a comment in human language, and the incantation was created. If I didn't like the result, I could find another result, right there in the code file I was working on, which was very convenient.
领英推荐
But today, it is even more powerful when you no longer need to choose different results. Instead, you can directly provide additional context to Copilot to generate a new code snippet that is closer to your expectations.
A very strange prompt, but when provided with a lot of context, such as the position of the mouse, the files that are open, or even upload the file, Copilot can generate a very accurate answer.
Another problem with boilerplate code is that it is difficult to maintain. Although it requires less effort than inheritance, when you need to change a logic somewhere, you have to go find and fix it everywhere. It's not too difficult if you use search/replace. But sometimes the boilerplate needs to be changed a little bit, then at this time only AI can help find it, AI can process the entire codebase to find similar segments and fix them all at once.
There are many levels of code generation. If in the above example we generate from backend code, then below we can generate components for the frontend:
VI. Using AI to write unittests
To be honest, I rarely write unittests. It's quite time-consuming and difficult. I only write them when the function is really important and difficult to integration test. I believe there are many developers like me.
Especially in Test-Driven Development (TDD), it's essential to clearly define expectations before writing the code. While it's the right thing to do, I personally find it challenging and would estimate an additional 50% effort before practicing TDD.
Unittests are often compared to insurance policies; their value becomes apparent when a bug occurs. When everything is working, we hardly see the benefits. However, unittests are a best practice and are required in enterprise applications.
There is a saying: "Do whatever you don't want to do, let AI do it."
As demonstrated in a recent example (provide link or context), writing unittests with AI assistance is now much simpler. Of course, there are still some limitations, but they are acceptable and worth it. So, recognizing the challenges of traditional unittest writing, the integration of AI offers a promising avenue for simplifying the adoption of TDD, making it more accessible and widespread.
VII. AI helps write documentation
We often write code better than we write documentation, even though documentation is an important skill. Similar to unittests, documentation is often overlooked during the development process, only revealing its true value when needed for reference. While it may be overlooked, documentation is both a best practice and a policy in enterprise applications. Moreover, writing documentation also helps us to review our system ourselves.
Things we don't want to do, let AI do:
The process of writing documentation with AI is not completely automated. Instead, the documentation process with AI involves developers providing guidance and review, with the AI serving as a valuable reviewer.
Using AI is especially beneficial in cases where documentation needs to be written for an old project or a project that has not been updated for a long time. AI's ability to process the entire codebase and old documentation simultaneously makes it particularly beneficial when updating documentation for older projects. Copilot's ability to assist in rewriting specific sections simplifies the editing process, offering developers more control over content revisions.
VIII. AI Helps Write Commit
Looking at this commit, it is very difficult to reverse to the correct error location, not to mention that a commit may contain more changes than it describes. To improve this problem, there are many practices such as conventional commit, or advising continuous commit and small commit. But developers will always be developers.
With AI, it can analyze the results of git diff and create many commands to help divide the changes into the correct commit, commit with more accurate content, thus giving out a beautiful, easy-to-understand and reverse git tree.
In this demo, I did two different tasks and forgot to commit. Github Copilot helped me create two separate commits for each change.
However, this is just a simple example. Currently, it still has many limitations, for example, if you make two different changes to the same file, no one can help you. When AI is cheaper and faster, it can process your code in real time, then it can help you in this difficult case.
IX. Review code
This is a best practice in software development. After passing the testcases, lint rules, the logic cannot be reviewed by the rules. But it is also an important part of the codebase. Code review helps ensure the healthy of the codebase over time[1].
If anyone has ever done code review, they might agree that code review is quite difficult. The hardest part is how to balance between personal opinion and the healthy of the codebase. We often tend to see our own practices as better, and the logic we don't understand, or is confusing with the way we think is bad. But the problem is that sometimes it is right, sometimes it is wrong.
AI does not have such feelings. Even though it has bias, we can provide our policy for it under the prompt to force it to follow, so the review will be less subjective.
I create an assistant on OpenAI. and provide it with system prompts and some files about Google's code review[2] for retrieval as follows:
I will try to copy a random text on the internet to ask it to review, in fact, we will use code changes in the commit. I will use the following code:
```go
func DiscountEdit(ctx context.Context, shopID int64, discountEdit *model.DiscountEdit, repo *repository.MongodbRepository) error {
order, mcaDirectPurchase, err := lambda_func.GetOrderDetails(ctx, shopID, discountEdit.OrderID)
if err != nil {
if strings.Contains(err.Error(), "order not found") {
return nil
}
return errors.Wrap(err, "failed to get order details")
}
sales := ConvertDiscountEditToSaleReports(*discountEdit, *order, mcaDirectPurchase)
//if err != nil {
// return errors.Wrap(err, "failed to convert order refund to sale reports")
//}
if len(sales) == 0 {
return nil
}
err = repo.SaveItemNew(ctx, sales)
if err != nil {
return errors.Wrap(err, "failed to insert OrderEdit to sell report")
}
return nil
}
Here is the output:
Personally, I think the results are very good. Although lint tools are necessary to analyze code, if they can be combined with AI, the results will be more valuable. For example, in this demo, it is something that lint tools are difficult to do:
But it is very meaningful, it reminds us of business. Is the definition of currencyRate really correct, or perhaps our purpose is not currencyRate but something else.
X. AI helps maintain legacy systems
There is a joke that we quit our jobs to escape the legacy code we created to swim in the legacy code that someone else just left. The joke is true; we are always afraid of legacy code, whether it is ours or someone else's. Especially if it is a codebase that violates all of the above: no documentation, unclear design, messy commits.
AI can help. In this case, it is similar to when AI supports writing documentation, leveraging AI to index the entire codebase, allowing it to answer and analyze any section of code. You can ask it how to deploy or run a service, and it will return the necessary commands to you without having to fumble around or wait for someone to ask.
However, to achieve this, a more comprehensive solution may be necessary. Besides the codebase, it also needs to index: Requirements, Issues, Changelogs, Etc.
Currently, it is not very convenient; you have to provide some files for context yourself, and you cannot access all the documentation related to the project and the codebase. Recognizing the limitations, GitHub is actively developing GitHub Copilot for enterprise to enhance these capabilities.
XI. AI-Powered Command Line Assistance
Writing command line is probably one of the first areas supported by OpenAI when it was founded. It is not as complex as generating code, simpler and the context is shorter but more necessary, because most terminals do not support autocomplete like IDE.
Most recently, we have GitHub Copilot but it is still simple and the usage is complicated. Warp is a terminal with features like an editor, supports AI very smoothly.
However, when running cli, you should be careful, because if you do not fully understand what the command is running, it can easily lead to unfortunate mistakes. For example, if you ask AI to remove a directory and it returns the following command, it will be troublesome:
rm -rf /
XII. Challenges and Considerations
Even though AI has its limits and conveniences, like any other wave, the use of AI and the development of AI-supporting tools also raise many controversies and concerns.
Reliance on AI
Dependency: The main concern is that developers can become too dependent on AI. This dependency can make developers less skilled at writing code, because developers can rely on AI's suggestions without understanding the basic logic.
Loss of creativity: Reliance on AI to code can limit the creativity of developers. The solutions provided by AI tend to be based on existing patterns, if developers are only satisfied with the results, it can hinder the creation of new solutions.
Security and privacy
Another issue is the issue of security. There are many levels of concern for this, when the entire code base is indexed and sent to the OpenAI server, it is not certain whether it will be used for other purposes or not. With this concern, many companies are moving towards using open source models and building policies for the use of AI.
In addition, AI can be injected. For example, you parse the content of a website, and the website quietly inserts some segments to inject inappropriate context into your prompt, causing AI to provide unwanted results.
When AI explodes, every company wants to share, many companies disregard the privacy of users, and may use user data to train their models. This information can be generated for other users, leading to many privacy concerns.
Outlook on the future
One of the limiting factors of current LLMs is probably compute. These models all require massive computing power to train, fine-tune, and infer. However, with ongoing development and investment, these limits are being pushed every day. This progress encompasses both the increasing size of the models and advancements in fine-tuning techniques and resource optimization.
The development of AI is inevitable, regardless of our preferences. When something new develops, it will replace the old, and perhaps the way we program in 2 years will be very different from the way we are coding now. Adaptation is crucial, but it does not imply losing your programming skills. Instead, keep training your wings and attach the right tools to them to fly faster and higher.
For businesses, maintaining a positive outlook and investing more in AI while concurrently developing new skills for employees is essential. With its recent restructuring, GitHub has fully embraced a focus on AI. This aligns with the trends observed in FANG companies, highlighting that AI will be a critical factor in future competition. Slowing down in this field can result in losing many advantages. The proper and correct application of AI into daily workflows can bring significant efficiency and productivity improvements. Parallel to that is developing policies to ensure security and privacy issues.
Conclusion
In this article, we understand more about Gen AI and a little bit about the concepts of context and parameter. However, its limitations include resource-intensive processes and the presence of specific illusions and bias in the output.
With ongoing development and investment, limitations related to resource usage and output quality are being progressively overcome every day. The future of a natural programming language that can autonomously adapt and correct errors is not far off. AI will mark a transformative shift that will involve everyone and everything.
Written by Duoc Nguyen Van – Senior Back-End Engineer, FireGroup Technology
Embrace the opportunity to be part of our cutting-edge projects and tech-driven journey, join us now at https://firegroup.io/careers/
thien.dev
6 个月Nice article. Thank you for mentioning my app usediagram.com